Re: [pve-devel] [PATCH qemu] PVE-Backup: remove dirty-bitmap in pvebackup_complete_cb for failed jobs

2020-07-02 Thread Stefan Reiter

On 7/2/20 8:28 AM, Fabian Grünbichler wrote:

it should also be possible to keep the old bitmap (and associated backup
checksum) in this case? this is what bitmap-mode on-success is supposed
to do, but maybe errors are not triggering the right code paths?



The problem with on-success mode is that we have a backup_job per drive, 
so if one drive fails and one succeeds, one bitmap will be applied and 
the other won't, while PBS marks the whole backup as failed.


Though it is true that in case of an abort (rather than an error) we 
could keep the last bitmap intact with this patch. Does 'ret' 
differentiate between abort and error?
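For what it's worth, QEMU block jobs complete with ret == -ECANCELED when cancelled, so a completion callback can in principle tell a cancel apart from a real error. A minimal sketch of that classification (the enum and function name are invented for illustration; this is not the actual pvebackup code):

```c
#include <assert.h>
#include <errno.h>

enum job_outcome { JOB_OK, JOB_CANCELLED, JOB_ERROR };

/* Classify the 'ret' passed to a block-job completion callback:
 * 0 is success, -ECANCELED is a user-initiated cancel, and any
 * other negative value is a real I/O or internal error. */
static enum job_outcome classify_ret(int ret)
{
    if (ret == 0) {
        return JOB_OK;
    }
    if (ret == -ECANCELED) {
        return JOB_CANCELLED;
    }
    return JOB_ERROR;
}
```

With such a split, the callback could keep the previous bitmap on JOB_CANCELLED and only release it on JOB_ERROR.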



On July 1, 2020 2:17 pm, Dietmar Maurer wrote:

Note: We remove the device from di_list, so pvebackup_co_cleanup does
not handle this case.
---
  pve-backup.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/pve-backup.c b/pve-backup.c
index 61a8b4d2a4..1c4f6cf9e0 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -318,6 +318,12 @@ static void pvebackup_complete_cb(void *opaque, int ret)
  // remove self from job queue
  backup_state.di_list = g_list_remove(backup_state.di_list, di);
  
+if (di->bitmap && ret < 0) {

+// on error or cancel we cannot ensure synchronization of dirty
+// bitmaps with backup server, so remove all and do full backup next
+bdrv_release_dirty_bitmap(di->bitmap);
+}
+
  g_free(di);
  
  qemu_mutex_unlock(&backup_state.backup_mutex);

--
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel





Re: [pve-devel] [RFC pve-qemu] Add systemd journal logging patch

2020-07-01 Thread Stefan Reiter

On 6/30/20 6:28 PM, Dietmar Maurer wrote:

comments inline


On 06/30/2020 2:06 PM Stefan Reiter  wrote:

  
Prints QEMU errors that occur *after* the "-daemonize" fork to the

systemd journal, instead of pushing them into /dev/null like before.

Signed-off-by: Stefan Reiter 
---

Useful for debugging rust panics for example. I'm sure there's other ways to go
about this (log files? pass the journal fd from outside? pipe it into the
journal somehow?) but this one seems simple enough, though it of course requires
linking QEMU against libsystemd.

@dietmar: is this similar to what you had in mind?

  debian/control|  1 +
  ...ct-stderr-to-journal-when-daemonized.patch | 50 +++
  debian/patches/series |  1 +
  3 files changed, 52 insertions(+)
  create mode 100644 
debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch

diff --git a/debian/control b/debian/control
index caceabb..e6d935d 100644
--- a/debian/control
+++ b/debian/control
@@ -25,6 +25,7 @@ Build-Depends: autotools-dev,
 libseccomp-dev,
 libspice-protocol-dev (>= 0.12.14~),
 libspice-server-dev (>= 0.14.0~),
+   libsystemd-dev,
 libusb-1.0-0-dev (>= 1.0.17-1),
 libusbredirparser-dev (>= 0.6-2),
 python3-minimal,
diff --git 
a/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch 
b/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch
new file mode 100644
index 000..f73de53
--- /dev/null
+++ 
b/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch
@@ -0,0 +1,50 @@
+From  Mon Sep 17 00:00:00 2001
+From: Stefan Reiter 
+Date: Tue, 30 Jun 2020 13:10:10 +0200
+Subject: [PATCH] PVE: redirect stderr to journal when daemonized
+
+QEMU uses the logging for error messages usually, so LOG_ERR is most
+fitting.
+---
+ Makefile.objs | 1 +
+ os-posix.c| 7 +--
+ 2 files changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/Makefile.objs b/Makefile.objs
+index b7d58e592e..105f23bff7 100644
+--- a/Makefile.objs
++++ b/Makefile.objs
+@@ -55,6 +55,7 @@ common-obj-y += net/
+ common-obj-y += qdev-monitor.o
+ common-obj-$(CONFIG_WIN32) += os-win32.o
+ common-obj-$(CONFIG_POSIX) += os-posix.o
++os-posix.o-libs := -lsystemd
+
+ common-obj-$(CONFIG_LINUX) += fsdev/
+
+diff --git a/os-posix.c b/os-posix.c
+index 3cd52e1e70..ab4d052c62 100644
+--- a/os-posix.c
++++ b/os-posix.c
+@@ -28,6 +28,8 @@
+ #include 
+ #include 
+ #include 
++#include <systemd/sd-journal.h>
++#include <syslog.h>
+
+ #include "qemu-common.h"
+ /* Needed early for CONFIG_BSD etc. */
+@@ -309,9 +311,10 @@ void os_setup_post(void)
+
+ dup2(fd, 0);
+ dup2(fd, 1);


I guess we also want to redirect stdout. Or does that produce too much noise?



I figured since QEMU doesn't redirect it to its log file either that it 
wouldn't produce anything useful on stdout?
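One caveat with the hunk above: sd_journal_stream_fd() returns a negative errno-style value on failure, and the patch passes that fd to dup2() unchecked. A sketch of the redirection with a fallback to the already-open /dev/null fd (plain POSIX, no libsystemd; the failed journal fd is simulated, and redirect_fd is a hypothetical helper, not QEMU code):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Point 'fd' at 'journal_fd' if it was opened successfully, otherwise
 * fall back to 'fallback_fd' (in os_setup_post() that would be the
 * /dev/null fd). Returns the fd actually used, or -1 if dup2 fails. */
static int redirect_fd(int fd, int journal_fd, int fallback_fd)
{
    int target = journal_fd >= 0 ? journal_fd : fallback_fd;
    return dup2(target, fd) < 0 ? -1 : target;
}
```

The fallback keeps the pre-patch behavior (stderr into /dev/null) instead of handing dup2 a negative fd when journald is unavailable.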



+-/* In case -D is given do not redirect stderr to /dev/null */
++/* In case -D is given do not redirect stderr to journal */
+ if (!qemu_logfile) {
+-dup2(fd, 2);
++int journal_fd = sd_journal_stream_fd("QEMU", LOG_ERR, 0);
++dup2(journal_fd, 2);
+ }
+
+ close(fd);
diff --git a/debian/patches/series b/debian/patches/series
index 5d6a5d6..e658c1a 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -50,3 +50,4 @@ pve/0048-savevm-async-add-debug-timing-prints.patch
  pve/0049-Add-some-qemu_vfree-statements-to-prevent-memory-lea.patch
  pve/0050-Fix-backup-for-not-64k-aligned-storages.patch
  pve/0051-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
+pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch
--
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [RFC pve-qemu] Add systemd journal logging patch

2020-06-30 Thread Stefan Reiter
Prints QEMU errors that occur *after* the "-daemonize" fork to the
systemd journal, instead of pushing them into /dev/null like before.

Signed-off-by: Stefan Reiter 
---

Useful for debugging rust panics for example. I'm sure there's other ways to go
about this (log files? pass the journal fd from outside? pipe it into the
journal somehow?) but this one seems simple enough, though it of course requires
linking QEMU against libsystemd.

@dietmar: is this similar to what you had in mind?

 debian/control|  1 +
 ...ct-stderr-to-journal-when-daemonized.patch | 50 +++
 debian/patches/series |  1 +
 3 files changed, 52 insertions(+)
 create mode 100644 
debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch

diff --git a/debian/control b/debian/control
index caceabb..e6d935d 100644
--- a/debian/control
+++ b/debian/control
@@ -25,6 +25,7 @@ Build-Depends: autotools-dev,
libseccomp-dev,
libspice-protocol-dev (>= 0.12.14~),
libspice-server-dev (>= 0.14.0~),
+   libsystemd-dev,
libusb-1.0-0-dev (>= 1.0.17-1),
libusbredirparser-dev (>= 0.6-2),
python3-minimal,
diff --git 
a/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch 
b/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch
new file mode 100644
index 000..f73de53
--- /dev/null
+++ 
b/debian/patches/pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch
@@ -0,0 +1,50 @@
+From  Mon Sep 17 00:00:00 2001
+From: Stefan Reiter 
+Date: Tue, 30 Jun 2020 13:10:10 +0200
+Subject: [PATCH] PVE: redirect stderr to journal when daemonized
+
+QEMU uses the logging for error messages usually, so LOG_ERR is most
+fitting.
+---
+ Makefile.objs | 1 +
+ os-posix.c| 7 +--
+ 2 files changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/Makefile.objs b/Makefile.objs
+index b7d58e592e..105f23bff7 100644
+--- a/Makefile.objs
++++ b/Makefile.objs
+@@ -55,6 +55,7 @@ common-obj-y += net/
+ common-obj-y += qdev-monitor.o
+ common-obj-$(CONFIG_WIN32) += os-win32.o
+ common-obj-$(CONFIG_POSIX) += os-posix.o
++os-posix.o-libs := -lsystemd
+ 
+ common-obj-$(CONFIG_LINUX) += fsdev/
+ 
+diff --git a/os-posix.c b/os-posix.c
+index 3cd52e1e70..ab4d052c62 100644
+--- a/os-posix.c
++++ b/os-posix.c
+@@ -28,6 +28,8 @@
+ #include 
+ #include 
+ #include 
++#include <systemd/sd-journal.h>
++#include <syslog.h>
+ 
+ #include "qemu-common.h"
+ /* Needed early for CONFIG_BSD etc. */
+@@ -309,9 +311,10 @@ void os_setup_post(void)
+ 
+ dup2(fd, 0);
+ dup2(fd, 1);
+-/* In case -D is given do not redirect stderr to /dev/null */
++/* In case -D is given do not redirect stderr to journal */
+ if (!qemu_logfile) {
+-dup2(fd, 2);
++int journal_fd = sd_journal_stream_fd("QEMU", LOG_ERR, 0);
++dup2(journal_fd, 2);
+ }
+ 
+ close(fd);
diff --git a/debian/patches/series b/debian/patches/series
index 5d6a5d6..e658c1a 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -50,3 +50,4 @@ pve/0048-savevm-async-add-debug-timing-prints.patch
 pve/0049-Add-some-qemu_vfree-statements-to-prevent-memory-lea.patch
 pve/0050-Fix-backup-for-not-64k-aligned-storages.patch
 pve/0051-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
+pve/0052-PVE-redirect-stderr-to-journal-when-daemonized.patch
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 common 1/3] JSONSchema: add format validator support and cleanup check_format

2020-06-25 Thread Stefan Reiter
Adds a third, optional parameter to register_format that allows specifying
a function that will be called after parsing and can validate the parsed
data. A validator should die on failed validation, and can also change the
parsed object by returning a modified version of it.

This is useful so one can register a format with its hash, thus allowing
documentation to be generated automatically, while still enforcing certain
validation rules.

The validator only needs to be called in parse_property_string, since
check_format always calls parse_property_string if there is a
possibility of a validator existing at all. parse_property_string should
then be called with named formats for best effect, as only then can
validators be used.

Clean up 'check_format' as well (which pretty much amounts to a rewrite).
No existing functionality is intentionally changed.

Signed-off-by: Stefan Reiter 
---
 src/PVE/JSONSchema.pm | 87 +++
 1 file changed, 55 insertions(+), 32 deletions(-)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 84fb694..f987006 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -121,19 +121,26 @@ register_standard_option('pve-snapshot-name', {
 });
 
 my $format_list = {};
+my $format_validators = {};
 
 sub register_format {
-my ($format, $code) = @_;
+my ($name, $format, $validator) = @_;
 
-die "JSON schema format '$format' already registered\n"
-   if $format_list->{$format};
+die "JSON schema format '$name' already registered\n"
+   if $format_list->{$name};
 
-$format_list->{$format} = $code;
+if ($validator) {
+   die "A \$validator function can only be specified for hash-based 
formats\n"
+   if ref($format) ne 'HASH';
+   $format_validators->{$name} = $validator;
+}
+
+$format_list->{$name} = $format;
 }
 
 sub get_format {
-my ($format) = @_;
-return $format_list->{$format};
+my ($name) = @_;
+return $format_list->{$name};
 }
 
 my $renderer_hash = {};
@@ -647,39 +654,47 @@ sub pve_verify_tfa_secret {
 sub check_format {
 my ($format, $value, $path) = @_;
 
-return parse_property_string($format, $value, $path) if ref($format) eq 
'HASH';
+if (ref($format) eq 'HASH') {
+   # hash ref cannot have validator/list/opt handling attached
+   return parse_property_string($format, $value, $path);
+}
+
+if (ref($format) eq 'CODE') {
+   # we are the (sole, old-style) validator
+   return $format->($value);
+}
+
 return if $format eq 'regex';
 
-if ($format =~ m/^(.*)-a?list$/) {
+my $parsed;
+$format =~ m/^(.*?)(?:-a?(list|opt))?$/;
+my ($format_name, $format_type) = ($1, $2 // 'none');
+my $registered = get_format($format_name);
+die "undefined format '$format'\n" if !$registered;
 
-   my $code = $format_list->{$1};
-
-   die "undefined format '$format'\n" if !$code;
+die "'-$format_type' format must have code ref, not hash\n"
+   if $format_type ne 'none' && ref($registered) ne 'CODE';
 
+if ($format_type eq 'list') {
# Note: we allow empty lists
foreach my $v (split_list($value)) {
-   &$code($v);
+   $parsed = $registered->($v);
}
-
-} elsif ($format =~ m/^(.*)-opt$/) {
-
-   my $code = $format_list->{$1};
-
-   die "undefined format '$format'\n" if !$code;
-
-   return if !$value; # allow empty string
-
-   &$code($value);
-
+} elsif ($format_type eq 'opt') {
+   $parsed = $registered->($value) if $value;
} else {
-
-   my $code = $format_list->{$format};
-
-   die "undefined format '$format'\n" if !$code;
-
-   return parse_property_string($code, $value, $path) if ref($code) eq 
'HASH';
-   &$code($value);
+   if (ref($registered) eq 'HASH') {
+   # Note: this is the only case where a validator function could be
+   # attached, hence it's safe to handle that in parse_property_string.
+   # We do however have to call it with $format_name instead of
+   # $registered, so it knows about the name (and thus any validators).
+   $parsed = parse_property_string($format, $value, $path);
+   } else {
+   $parsed = $registered->($value);
+   }
 }
+
+return $parsed;
 }
 
 sub parse_size {
@@ -735,9 +750,16 @@ sub parse_property_string {
 $additional_properties = 0 if !defined($additional_properties);
 
 # Support named formats here, too:
+my $validator;
 if (!ref($format)) {
-   if (my $desc = $format_list->{$format}) {
-   $format = $desc;
+   if (my $reg = get_format($format)) {
+   die "parse_property_string only accepts hash based named formats\n"
+   if ref($reg) ne 'HASH';
+
+   # named formats can have

[pve-devel] [PATCH v3 common 2/3] JSONSchema: use validator in print_property_string too

2020-06-25 Thread Stefan Reiter
Suggested-by: Fabian Grünbichler 
Signed-off-by: Stefan Reiter 
---
 src/PVE/JSONSchema.pm | 5 +
 1 file changed, 5 insertions(+)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index f987006..59a2b5a 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -1878,9 +1878,12 @@ sub generate_typetext {
 sub print_property_string {
 my ($data, $format, $skip, $path) = @_;
 
+my $validator;
 if (ref($format) ne 'HASH') {
my $schema = get_format($format);
die "not a valid format: $format\n" if !$schema;
+   # named formats can have validators attached
+   $validator = $format_validators->{$format};
$format = $schema;
 }
 
@@ -1890,6 +1893,8 @@ sub print_property_string {
raise "format error", errors => $errors;
 }
 
+$data = $validator->($data) if $validator;
+
 my ($default_key, $keyAliasProps) = &$find_schema_default_key($format);
 
 my $res = '';
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 0/3] Add format validator support

2020-06-25 Thread Stefan Reiter
Fixes "cpu" format documentation in man pages by introducing format validators,
which allow custom validation functions to be used together with format hashes.

The plan is to change occurrences of hash-based formats in parse_property_string
to named formats, so that future validators would be called. Such validators
would enable many formats which currently have no representation in the docs to
be included there.

v2 -> v3:
* include fixed followup patch from Fabian

v1 -> v2:
* Do validation in parse_property_string
* Re-refactor check_format based on Fabian's review
* Validate "cputype" in "pve-cpu-conf", not "pve-vm-cpu-conf"
* Use parse_property_string instead of check_format for parsing in CPUConfig


common: Stefan Reiter (2):
  JSONSchema: add format validator support and cleanup check_format
  JSONSchema: use validator in print_property_string too

 src/PVE/JSONSchema.pm | 92 ---
 1 file changed, 60 insertions(+), 32 deletions(-)

qemu-server: Stefan Reiter (1):
  fix #2671: include CPU format in man page again

 PVE/QemuServer/CPUConfig.pm | 56 -
 1 file changed, 18 insertions(+), 38 deletions(-)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 qemu-server 3/3] fix #2671: include CPU format in man page again

2020-06-25 Thread Stefan Reiter
Use the new register_format(3) call to use a validator (instead of a
parser) for 'pve-(vm-)?cpu-conf'. This way the $cpu_fmt hash can be used for
generating the documentation, while still applying the same verification
rules as before.

Since the function no longer parses but only verifies, the parsing in
print_cpu_device/get_cpu_options has to go via JSONSchema directly.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 56 -
 1 file changed, 18 insertions(+), 38 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 6250591..8ed898c 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -153,6 +153,7 @@ my $cpu_fmt = {
 'phys-bits' => {
type => 'string',
format => 'pve-phys-bits',
+   format_description => '8-64|host',
description => "The physical memory address bits that are reported to"
 . " the guest OS. Should be smaller or equal to the 
host's."
 . " Set to 'host' to use value from host CPU, but note 
that"
@@ -182,57 +183,36 @@ sub parse_phys_bits {
 
 # $cpu_fmt describes both the CPU config passed as part of a VM config, as well
 # as the definition of a custom CPU model. There are some slight differences
-# though, which we catch in the custom verification function below.
-PVE::JSONSchema::register_format('pve-cpu-conf', \&parse_cpu_conf_basic);
-sub parse_cpu_conf_basic {
-my ($cpu_str, $noerr) = @_;
-
-my $cpu = eval { PVE::JSONSchema::parse_property_string($cpu_fmt, 
$cpu_str) };
-if ($@) {
-die $@ if !$noerr;
-return undef;
-}
+# though, which we catch in the custom validation functions below.
+PVE::JSONSchema::register_format('pve-cpu-conf', $cpu_fmt, \&validate_cpu_conf);
+sub validate_cpu_conf {
+my ($cpu) = @_;
 
 # required, but can't be forced in schema since it's encoded in section
 # header for custom models
-if (!$cpu->{cputype}) {
-   die "CPU is missing cputype\n" if !$noerr;
-   return undef;
-}
-
-return $cpu;
+die "CPU is missing cputype\n" if !$cpu->{cputype};
 }
+PVE::JSONSchema::register_format('pve-vm-cpu-conf', $cpu_fmt, \&validate_vm_cpu_conf);
+sub validate_vm_cpu_conf {
+my ($cpu) = @_;
 
-PVE::JSONSchema::register_format('pve-vm-cpu-conf', \&parse_vm_cpu_conf);
-sub parse_vm_cpu_conf {
-my ($cpu_str, $noerr) = @_;
-
-my $cpu = parse_cpu_conf_basic($cpu_str, $noerr);
-return undef if !$cpu;
+validate_cpu_conf($cpu);
 
 my $cputype = $cpu->{cputype};
 
 # a VM-specific config is only valid if the cputype exists
 if (is_custom_model($cputype)) {
-   eval { get_custom_model($cputype); };
-   if ($@) {
-   die $@ if !$noerr;
-   return undef;
-   }
+   # dies on unknown model
+   get_custom_model($cputype);
 } else {
-   if (!defined($cpu_vendor_list->{$cputype})) {
-   die "Built-in cputype '$cputype' is not defined (missing 'custom-' 
prefix?)\n" if !$noerr;
-   return undef;
-   }
+   die "Built-in cputype '$cputype' is not defined (missing 'custom-' 
prefix?)\n"
+   if !defined($cpu_vendor_list->{$cputype});
 }
 
 # in a VM-specific config, certain properties are limited/forbidden
 
-if ($cpu->{flags} && $cpu->{flags} !~ 
m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/) {
-   die "VM-specific CPU flags must be a subset of: @{[join(', ', 
@supported_cpu_flags)]}\n"
-   if !$noerr;
-   return undef;
-}
+die "VM-specific CPU flags must be a subset of: @{[join(', ', 
@supported_cpu_flags)]}\n"
+   if ($cpu->{flags} && $cpu->{flags} !~ 
m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/);
 
 die "Property 'reported-model' not allowed in VM-specific CPU config.\n"
if defined($cpu->{'reported-model'});
@@ -369,7 +349,7 @@ sub print_cpu_device {
 my $kvm = $conf->{kvm} // 1;
 my $cpu = $kvm ? "kvm64" : "qemu64";
 if (my $cputype = $conf->{cpu}) {
-   my $cpuconf = parse_cpu_conf_basic($cputype)
+   my $cpuconf = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', 
$cputype)
or die "Cannot parse cpu description: $cputype\n";
$cpu = $cpuconf->{cputype};
 
@@ -481,7 +461,7 @@ sub get_cpu_options {
 my $custom_cpu;
 my $hv_vendor_id;
 if (my $cpu_prop_str = $conf->{cpu}) {
-   $cpu = parse_vm_cpu_conf($cpu_prop_str)
+   $cpu = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', 
$cpu_prop_str)
or die "Cannot parse cpu description: $cpu_prop_str\n";
 
$cputype = $cpu->{cputype};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server 3/3] fix #2794: allow legacy IGD passthrough

2020-06-25 Thread Stefan Reiter

On 6/24/20 9:46 AM, Thomas Lamprecht wrote:

Am 6/22/20 um 10:17 AM schrieb Stefan Reiter:

@@ -89,7 +97,8 @@ sub get_pci_addr_map {
   $pci_addr_map = {
   piix3 => { bus => 0, addr => 1, conflict_ok => qw(ehci)  },
   ehci => { bus => 0, addr => 1, conflict_ok => qw(piix3) }, # instead of 
piix3 on arm
-vga => { bus => 0, addr => 2 },
+vga => { bus => 0, addr => 2, conflict_ok => qw(legacy-igd) },


sorry, but how do they conflict? the address differs so the check can't hit.
Or is this just for documentation purpose?



What do you mean the address differs? 'legacy-igd' and 'vga' both share bus 0 
addr 2, so without the conflict_ok the 'run_pci_addr_checks.pl' test fails. 
This is only safe because legacy-igd requires vga=none anyway, as documented by 
the comment below.


+'legacy-igd' => { bus => 0, addr => 2, conflict_ok => qw(vga) }, # 
legacy-igd requires vga=none


ignore me, confused with non legacy IGD, sorry.



So the only thing left then is the description formatting. Should I send 
a v2 for that or do you want to do a followup?
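The address-map check being discussed can be sketched roughly as follows (a hypothetical standalone version, not the actual run_pci_addr_checks.pl; the substring match on conflict_ok is a simplification). Two entries may share a bus/addr pair only if one lists the other as an allowed conflict:

```c
#include <assert.h>
#include <string.h>

struct pci_entry {
    const char *name;
    int bus, addr;
    const char *conflict_ok; /* space-separated names, may be NULL */
};

/* Does entry 'e' explicitly allow a collision with 'name'?
 * (Substring match: good enough for a sketch.) */
static int allows(const struct pci_entry *e, const char *name)
{
    return e->conflict_ok && strstr(e->conflict_ok, name) != NULL;
}

/* Return 1 when no two entries collide on bus/addr without one of
 * them declaring the other in conflict_ok. */
static int addr_map_ok(const struct pci_entry *map, int n)
{
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (map[i].bus == map[j].bus && map[i].addr == map[j].addr &&
                !allows(&map[i], map[j].name) && !allows(&map[j], map[i].name)) {
                return 0;
            }
        }
    }
    return 1;
}
```

This is why 'vga' and 'legacy-igd' need each other in conflict_ok: they deliberately share bus 0, addr 2, and the guarantee that they never coexist comes from the vga=none requirement, not from the map itself.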


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 common 1/2] JSONSchema: add format validator support and cleanup check_format

2020-06-24 Thread Stefan Reiter

On 6/23/20 3:39 PM, Fabian Grünbichler wrote:

LGTM, what do you think about the following follow-up:

--8<-

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index b2ba9f7..d28143d 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -1878,9 +1878,12 @@ sub generate_typetext {
  sub print_property_string {
  my ($data, $format, $skip, $path) = @_;
  
+my $validator;

  if (ref($format) ne 'HASH') {
my $schema = get_format($format);
die "not a valid format: $format\n" if !$schema;
+   # named formats can have validators attached
+   $validator = $format_validators->{$format};
$format = $schema;
  }
  
@@ -1890,6 +1893,8 @@ sub print_property_string {

raise "format error", errors => $errors;
  }
  
+$res = $validator->($res) if $validator;


This would have to be $data, I suppose?


+
  my ($default_key, $keyAliasProps) = &$find_schema_default_key($format);
  
  my $res = '';


->8--

to ensure our code calling 'print_property_string' also adheres to the
format the validator enforces?



Fine by me, though I'm not sure how necessary it is. It implies that 
validators always validate positively on values they have already 
accepted (and potentially modified) once before. Also, I'm not 
sure I would use the validator's result for print_property_string, since 
that means the value the user passes in might not be the one 
printed (if a validator modifies it) - so maybe call the validator but 
throw away its result?
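The concern about modifying validators can be made concrete: if a validator canonicalises its input, printing the validated value no longer round-trips what the caller passed in. A sketch (validate_cputype here is a made-up example, not the real pve-cpu-conf validator):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* A validator that also canonicalises: it returns its (here in-place
 * lower-cased) input instead of just accepting or rejecting it. */
static char *validate_cputype(char *value)
{
    for (char *p = value; *p; p++) {
        *p = (char)tolower((unsigned char)*p);
    }
    return value;
}
```

If print_property_string emits the validator's return value, a caller who passed "Host" gets "host" back, which is exactly the round-trip asymmetry that motivates calling the validator but discarding its result when printing.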



it might also be nice to mark formats with a validator (at least for the
API dump) to make it obvious that the displayed format might be further
restricted.



Sounds good to me, I'll look into it.


I went through the code paths (again) and still think it might be
worthwhile to extend named formats to check_object as well, instead of
just scalar property string values. but that is a bigger follow-up, for
now limiting the scope of these validators to just property strings
seems sensible.



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu 2/2] Fix backup for not 64k-aligned storages

2020-06-22 Thread Stefan Reiter
Zero out clusters after the end of the device, this makes restore handle
it correctly (even if it may try to write those zeros, it won't fail and
just ignore the out-of-bounds write to disk).

For disks that are not even 4k-aligned, there is a potential buffer overrun in
the memcpy (since a full 4k is always copied), which causes host-memory
leakage into VMA archives. Fix this by always zeroing the affected area
in the output buffer.

Reported-by: Roland Kammerer 
Suggested-by: Lars Ellenberg 
Signed-off-by: Stefan Reiter 
---

Thanks again for the detailed report and reproducer! It seems Lars' idea for a
fix was indeed correct, I also added the mentioned memset to not leak memory
even on non-4k-aligned disks as well as some cleanup on the patch you sent.

Tested with aligned VMs (also with small efidisks), as well as a specifically
unaligned one, which no longer exhibits the bug (tested on a vg with 1k extent
alignment in a loop file). Hexdump of the resulting vma shows no more memory
leakage. Would of course be grateful for further testing, especially if it fixes
the originally reported bug (the DRBD-related stuff).
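The padding logic itself can be shown in isolation. Assuming the 4096-byte VMA_BLOCK_SIZE of the real code, the last block of an unaligned device contains dev_size - byte_offset real bytes and the remainder must be zeroed before being copied out (standalone sketch, not the actual vma-writer.c code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096u /* VMA_BLOCK_SIZE in the real code */

/* Copy one block into 'out', zero-padding everything past the end of
 * the device so stale heap contents cannot leak into the archive.
 * Assumes byte_offset < dev_size (the caller's mask check ensures
 * blocks wholly past the device end are never copied at all). */
static void copy_block(uint8_t *out, const uint8_t *block,
                       uint64_t byte_offset, uint64_t dev_size)
{
    memcpy(out, block, BLOCK_SIZE);
    if (byte_offset + BLOCK_SIZE > dev_size) {
        uint64_t real_data_in_block = dev_size - byte_offset;
        memset(out + real_data_in_block, 0, BLOCK_SIZE - real_data_in_block);
    }
}
```

Without the memset, the bytes between dev_size and the block boundary would be whatever happened to live in the 4k read buffer, i.e. host memory written verbatim into the VMA archive.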

 vma-writer.c | 23 ---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/vma-writer.c b/vma-writer.c
index 06cbc02b1e..f5d2c5d23c 100644
--- a/vma-writer.c
+++ b/vma-writer.c
@@ -633,17 +633,33 @@ vma_writer_write(VmaWriter *vmaw, uint8_t dev_id, int64_t 
cluster_num,
 
 DPRINTF("VMA WRITE %d %zd\n", dev_id, cluster_num);
 
+uint64_t dev_size = vmaw->stream_info[dev_id].size;
 uint16_t mask = 0;
 
 if (buf) {
 int i;
 int bit = 1;
+uint64_t byte_offset = cluster_num * VMA_CLUSTER_SIZE;
 for (i = 0; i < 16; i++) {
 const unsigned char *vmablock = buf + (i*VMA_BLOCK_SIZE);
-if (!buffer_is_zero(vmablock, VMA_BLOCK_SIZE)) {
+
+// Note: If the source is not 64k-aligned, we might reach 4k blocks
+// after the end of the device. Always mark these as zero in the
+// mask, so the restore handles them correctly.
+if (byte_offset < dev_size &&
+!buffer_is_zero(vmablock, VMA_BLOCK_SIZE))
+{
 mask |= bit;
 memcpy(vmaw->outbuf + vmaw->outbuf_pos, vmablock,
VMA_BLOCK_SIZE);
+
+// prevent memory leakage on unaligned last block
+if (byte_offset + VMA_BLOCK_SIZE > dev_size) {
+uint64_t real_data_in_block = dev_size - byte_offset;
+memset(vmaw->outbuf + vmaw->outbuf_pos + 
real_data_in_block,
+   0, VMA_BLOCK_SIZE - real_data_in_block);
+}
+
 vmaw->outbuf_pos += VMA_BLOCK_SIZE;
 } else {
 DPRINTF("VMA WRITE %zd ZERO BLOCK %d\n", cluster_num, i);
@@ -651,6 +667,7 @@ vma_writer_write(VmaWriter *vmaw, uint8_t dev_id, int64_t 
cluster_num,
 *zero_bytes += VMA_BLOCK_SIZE;
 }
 
+byte_offset += VMA_BLOCK_SIZE;
 bit = bit << 1;
 }
 } else {
@@ -676,8 +693,8 @@ vma_writer_write(VmaWriter *vmaw, uint8_t dev_id, int64_t 
cluster_num,
 
 if (dev_id != vmaw->vmstate_stream) {
 uint64_t last = (cluster_num + 1) * VMA_CLUSTER_SIZE;
-if (last > vmaw->stream_info[dev_id].size) {
-uint64_t diff = last - vmaw->stream_info[dev_id].size;
+if (last > dev_size) {
+uint64_t diff = last - dev_size;
 if (diff >= VMA_CLUSTER_SIZE) {
 vma_writer_set_error(vmaw, "vma_writer_write: "
  "read after last cluster");
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu 1/2] Add some qemu_vfree statements to prevent memory leaks

2020-06-22 Thread Stefan Reiter
Suggested-by: Lars Ellenberg 
Signed-off-by: Stefan Reiter 
---
 vma-writer.c | 2 ++
 vma.c| 2 ++
 2 files changed, 4 insertions(+)

diff --git a/vma-writer.c b/vma-writer.c
index fe86b18a60..06cbc02b1e 100644
--- a/vma-writer.c
+++ b/vma-writer.c
@@ -767,5 +767,7 @@ void vma_writer_destroy(VmaWriter *vmaw)
 g_checksum_free(vmaw->md5csum);
 }
 
+qemu_vfree(vmaw->headerbuf);
+qemu_vfree(vmaw->outbuf);
 g_free(vmaw);
 }
diff --git a/vma.c b/vma.c
index a82752448a..2eea2fc281 100644
--- a/vma.c
+++ b/vma.c
@@ -565,6 +565,7 @@ out:
 g_warning("vma_writer_close failed %s", error_get_pretty(err));
 }
 }
+qemu_vfree(buf);
 }
 
 static int create_archive(int argc, char **argv)
@@ -732,6 +733,7 @@ static int create_archive(int argc, char **argv)
 g_error("creating vma archive failed");
 }
 
+vma_writer_destroy(vmaw);
 return 0;
 }
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server 3/3] fix #2794: allow legacy IGD passthrough

2020-06-22 Thread Stefan Reiter

On 6/18/20 5:00 PM, Thomas Lamprecht wrote:

Am 6/18/20 um 4:36 PM schrieb Stefan Reiter:

Legacy IGD passthrough requires address 00:1f.0 to not be assigned to
anything on QEMU startup (currently it's assigned to bridge pci.2).
Changing this in general would break live-migration, so introduce a new
hostpci parameter "legacy-igd", which if set to 1 will move that bridge
to be nested under bridge 1.

This is safe because:
* Bridge 1 is unconditionally created on i440fx, so nesting is ok
* Defaults are not changed, i.e. PCI layout only changes when the new
parameter is specified manually
* hostpci forbids migration anyway

Additionally, the PT device has to be assigned address 00:02.0 in the
guest as well, which is usually used for VGA assignment. Luckily, IGD PT
requires vga=none, so that is not an issue either.

See https://git.qemu.org/?p=qemu.git;a=blob;f=docs/igd-assign.txt

Signed-off-by: Stefan Reiter 
---
  PVE/QemuServer.pm | 10 --
  PVE/QemuServer/PCI.pm | 45 +--
  2 files changed, 47 insertions(+), 8 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1c08222..1abe64b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3102,7 +3102,7 @@ sub config_to_command {
  }
  
  # host pci device passthrough

-my ($kvm_off, $gpu_passthrough) = 
PVE::QemuServer::PCI::print_hostpci_devices(
+my ($kvm_off, $gpu_passthrough, $legacy_igd) = 
PVE::QemuServer::PCI::print_hostpci_devices(
$conf, $devices, $winversion, $q35, $bridges, $arch, $machine_type);
  
  # usb devices

@@ -3458,7 +3458,13 @@ sub config_to_command {
  
  for my $k (sort {$b cmp $a} keys %$bridges) {

next if $q35 && $k < 4; # q35.cfg already includes bridges up to 3
-   $pciaddr = print_pci_addr("pci.$k", undef, $arch, $machine_type);
+
+   my $k_name = $k;
+   if ($k == 2 && $legacy_igd) {
+   $k_name = "$k-igd";
+   }
+   $pciaddr = print_pci_addr("pci.$k_name", undef, $arch, $machine_type);
+


ugh, hacks all the way down, but well it seems that pass through stuff relies 
on that -.-



The entire legacy-igd thing is pretty much a hack (on QEMU's side too, I 
mean), and I don't really see a way around this part if we do want to 
support it...


The $k_name thing could be done away with if we hardcode the address 
here, but I'd rather have it in the pci_addr_map, so we know to not use 
it in the future.



my $devstr = "pci-bridge,id=pci.$k,chassis_nr=$k$pciaddr";
if ($q35) {
# add after -readconfig pve-q35.cfg
diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 39f9970..e835f39 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -55,6 +55,14 @@ EODESCR
optional => 1,
default => 0,
  },
+'legacy-igd' => {
+   type => 'boolean',
+   description => "Pass this device in legacy IGD mode (allows required"
+. " 1f.0 PCI bridge and assigns correct address)."
+. " Requires i440fx machine type and VGA set to 'none'.",


I'd rather use 100-character lines for that, and not align the indentation of 
subsequent concatenated lines but just indent them 4 spaces more than the 
previous level.



Ok


+   optional => 1,
+   default => 0,
+},
  'mdev' => {
type => 'string',
  format_description => 'string',
@@ -89,7 +97,8 @@ sub get_pci_addr_map {
  $pci_addr_map = {
piix3 => { bus => 0, addr => 1, conflict_ok => qw(ehci)  },
ehci => { bus => 0, addr => 1, conflict_ok => qw(piix3) }, # instead of 
piix3 on arm
-   vga => { bus => 0, addr => 2 },
+   vga => { bus => 0, addr => 2, conflict_ok => qw(legacy-igd) },


sorry, but how do they conflict? the address differs so the check can't hit.
Or is this just for documentation purposes?



What do you mean the address differs? 'legacy-igd' and 'vga' both share 
bus 0 addr 2, so without the conflict_ok the 'run_pci_addr_checks.pl' 
test fails. This is only safe because legacy-igd requires vga=none 
anyway, as documented by the comment below.
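
To illustrate why both map entries need to whitelist each other, here is a small standalone sketch of the duplicate-address check. This is a simplified stand-in assumed to mirror what 'run_pci_addr_checks.pl' verifies — note the sketch uses array refs for conflict_ok, not the single-string qw() form the real map happens to use:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified PCI address map; 'legacy-igd' and 'vga' deliberately share
# bus 0 addr 2 and whitelist each other via conflict_ok.
my $pci_addr_map = {
    vga          => { bus => 0, addr => 2, conflict_ok => [qw(legacy-igd)] },
    'legacy-igd' => { bus => 0, addr => 2, conflict_ok => [qw(vga)] },
    balloon0     => { bus => 0, addr => 3 },
};

# Return a list of device pairs that share an address without mutually
# whitelisting each other.
sub check_addr_conflicts {
    my ($map) = @_;
    my @conflicts;
    my @names = sort keys %$map;
    for my $i (0 .. $#names) {
        for my $j ($i + 1 .. $#names) {
            my ($a, $b) = ($names[$i], $names[$j]);
            next if $map->{$a}{bus} != $map->{$b}{bus}
                 || $map->{$a}{addr} != $map->{$b}{addr};
            # same slot: only fine if each device whitelists the other
            my $ok = (grep { $_ eq $b } @{$map->{$a}{conflict_ok} // []})
                  && (grep { $_ eq $a } @{$map->{$b}{conflict_ok} // []});
            push @conflicts, "$a <-> $b" if !$ok;
        }
    }
    return @conflicts;
}

my @bad = check_addr_conflicts($pci_addr_map);
print @bad ? "conflicts: @bad\n" : "no conflicts\n";    # no conflicts
```

Dropping either conflict_ok entry makes the shared 0:2 slot show up as a conflict again, which is exactly what the test guards against.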



+   'legacy-igd' => { bus => 0, addr => 2, conflict_ok => qw(vga) }, # 
legacy-igd requires vga=none
balloon0 => { bus => 0, addr => 3 },
watchdog => { bus => 0, addr => 4 },
scsihw0 => { bus => 0, addr => 5, conflict_ok => qw(pci.3) },
@@ -149,6 +158,7 @@ sub get_pci_addr_map {
'xhci' => { bus => 1, addr => 27 },
'pci.4' => { bus => 1, addr => 28 },
'rng0' => { bus => 1, addr => 29 },
+   'pci.2-igd' => { bus => 1, addr => 30 }, # replaces pci.2 in case a 
legacy IGD device is passed through
'virtio6' => { bus => 2, addr => 1 },
'vir

[pve-devel] [PATCH qemu 1/3] fix #2794: Include legacy-igd passthrough fix

2020-06-18 Thread Stefan Reiter
See https://bugs.launchpad.net/qemu/+bug/1882784

Signed-off-by: Stefan Reiter 
---
 ...ks-Fix-broken-legacy-IGD-passthrough.patch | 34 +++
 debian/patches/series |  1 +
 2 files changed, 35 insertions(+)
 create mode 100644 
debian/patches/extra/0001-hw-vfio-pci-quirks-Fix-broken-legacy-IGD-passthrough.patch

diff --git 
a/debian/patches/extra/0001-hw-vfio-pci-quirks-Fix-broken-legacy-IGD-passthrough.patch
 
b/debian/patches/extra/0001-hw-vfio-pci-quirks-Fix-broken-legacy-IGD-passthrough.patch
new file mode 100644
index 000..2d6aa71
--- /dev/null
+++ 
b/debian/patches/extra/0001-hw-vfio-pci-quirks-Fix-broken-legacy-IGD-passthrough.patch
@@ -0,0 +1,34 @@
+From  Mon Sep 17 00:00:00 2001
+From: Thomas Huth 
+Date: Thu, 11 Jun 2020 11:36:40 -0600
+Subject: [PATCH] hw/vfio/pci-quirks: Fix broken legacy IGD passthrough
+
+The #ifdef CONFIG_VFIO_IGD in pci-quirks.c is not working since the
+required header config-devices.h is not included, so that the legacy
+IGD passthrough is currently broken. Let's include the right header
+to fix this issue.
+
+Buglink: https://bugs.launchpad.net/qemu/+bug/1882784
+Fixes: 29d62771c81d ("hw/vfio: Move the IGD quirk code to a separate file")
+Signed-off-by: Thomas Huth 
+Reviewed-by: Philippe Mathieu-Daudé 
+Signed-off-by: Alex Williamson 
+---
+ hw/vfio/pci-quirks.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
+index 2d348f8237..656098b827 100644
+--- a/hw/vfio/pci-quirks.c
 b/hw/vfio/pci-quirks.c
+@@ -11,6 +11,7 @@
+  */
+ 
+ #include "qemu/osdep.h"
++#include "config-devices.h"
+ #include "exec/memop.h"
+ #include "qemu/units.h"
+ #include "qemu/error-report.h"
+-- 
+2.20.1
+
diff --git a/debian/patches/series b/debian/patches/series
index e7345ce..f7e7197 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -1,3 +1,4 @@
+extra/0001-hw-vfio-pci-quirks-Fix-broken-legacy-IGD-passthrough.patch
 pve/0001-PVE-Config-block-file-change-locking-default-to-off.patch
 pve/0002-PVE-Config-Adjust-network-script-path-to-etc-kvm.patch
 pve/0003-PVE-Config-set-the-CPU-model-to-kvm64-32-instead-of-.patch
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server 3/3] fix #2794: allow legacy IGD passthrough

2020-06-18 Thread Stefan Reiter
Legacy IGD passthrough requires address 00:1f.0 to not be assigned to
anything on QEMU startup (currently it's assigned to bridge pci.2).
Changing this in general would break live-migration, so introduce a new
hostpci parameter "legacy-igd", which if set to 1 will move that bridge
to be nested under bridge 1.

This is safe because:
* Bridge 1 is unconditionally created on i440fx, so nesting is ok
* Defaults are not changed, i.e. PCI layout only changes when the new
parameter is specified manually
* hostpci forbids migration anyway

Additionally, the PT device has to be assigned address 00:02.0 in the
guest as well, which is usually used for VGA assignment. Luckily, IGD PT
requires vga=none, so that is not an issue either.

See https://git.qemu.org/?p=qemu.git;a=blob;f=docs/igd-assign.txt
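
For reference, a VM config using the new parameter could look like the following excerpt (illustrative values, not taken from the patch):

```
# /etc/pve/qemu-server/<vmid>.conf -- illustrative excerpt
machine: pc
vga: none
hostpci0: 00:02.0,legacy-igd=1
```

Here 'machine: pc' stands in for an i440fx machine type and 'vga: none' satisfies the requirement documented in the option description.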

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 10 --
 PVE/QemuServer/PCI.pm | 45 +--
 2 files changed, 47 insertions(+), 8 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1c08222..1abe64b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3102,7 +3102,7 @@ sub config_to_command {
 }
 
 # host pci device passthrough
-my ($kvm_off, $gpu_passthrough) = 
PVE::QemuServer::PCI::print_hostpci_devices(
+my ($kvm_off, $gpu_passthrough, $legacy_igd) = 
PVE::QemuServer::PCI::print_hostpci_devices(
$conf, $devices, $winversion, $q35, $bridges, $arch, $machine_type);
 
 # usb devices
@@ -3458,7 +3458,13 @@ sub config_to_command {
 
 for my $k (sort {$b cmp $a} keys %$bridges) {
next if $q35 && $k < 4; # q35.cfg already includes bridges up to 3
-   $pciaddr = print_pci_addr("pci.$k", undef, $arch, $machine_type);
+
+   my $k_name = $k;
+   if ($k == 2 && $legacy_igd) {
+   $k_name = "$k-igd";
+   }
+   $pciaddr = print_pci_addr("pci.$k_name", undef, $arch, $machine_type);
+
my $devstr = "pci-bridge,id=pci.$k,chassis_nr=$k$pciaddr";
if ($q35) {
# add after -readconfig pve-q35.cfg
diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 39f9970..e835f39 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -55,6 +55,14 @@ EODESCR
optional => 1,
default => 0,
 },
+'legacy-igd' => {
+   type => 'boolean',
+   description => "Pass this device in legacy IGD mode (allows required"
+. " 1f.0 PCI bridge and assigns correct address)."
+. " Requires i440fx machine type and VGA set to 'none'.",
+   optional => 1,
+   default => 0,
+},
 'mdev' => {
type => 'string',
 format_description => 'string',
@@ -89,7 +97,8 @@ sub get_pci_addr_map {
 $pci_addr_map = {
piix3 => { bus => 0, addr => 1, conflict_ok => qw(ehci)  },
ehci => { bus => 0, addr => 1, conflict_ok => qw(piix3) }, # instead of 
piix3 on arm
-   vga => { bus => 0, addr => 2 },
+   vga => { bus => 0, addr => 2, conflict_ok => qw(legacy-igd) },
+   'legacy-igd' => { bus => 0, addr => 2, conflict_ok => qw(vga) }, # 
legacy-igd requires vga=none
balloon0 => { bus => 0, addr => 3 },
watchdog => { bus => 0, addr => 4 },
scsihw0 => { bus => 0, addr => 5, conflict_ok => qw(pci.3) },
@@ -149,6 +158,7 @@ sub get_pci_addr_map {
'xhci' => { bus => 1, addr => 27 },
'pci.4' => { bus => 1, addr => 28 },
'rng0' => { bus => 1, addr => 29 },
+   'pci.2-igd' => { bus => 1, addr => 30 }, # replaces pci.2 in case a 
legacy IGD device is passed through
'virtio6' => { bus => 2, addr => 1 },
'virtio7' => { bus => 2, addr => 2 },
'virtio8' => { bus => 2, addr => 3 },
@@ -351,6 +361,7 @@ sub print_hostpci_devices {
 
 my $kvm_off = 0;
 my $gpu_passthrough = 0;
+my $legacy_igd = 0;
 
 for (my $i = 0; $i < $MAX_HOSTPCI_DEVICES; $i++)  {
my $id = "hostpci$i";
@@ -372,7 +383,32 @@ sub print_hostpci_devices {
$pciaddr = print_pcie_addr($id);
}
} else {
-   $pciaddr = print_pci_addr($id, $bridges, $arch, $machine_type);
+   my $pci_name = $d->{'legacy-igd'} ? 'legacy-igd' : $id;
+   $pciaddr = print_pci_addr($pci_name, $bridges, $arch, 
$machine_type);
+   }
+
+   my $pcidevices = $d->{pciid};
+   my $multifunction = 1 if @$pcidevices > 1;
+
+   if ($d->{'legacy-igd'}) {
+   die "only one device can be assigned in legacy-igd mode\n"
+   if $legacy_igd;
+   $legacy_igd = 1;
+
+   die "legacy IGD assignment requires VGA mode to be 'none'\n"
+   

[pve-devel] [PATCH 0/3] fix #2794: Allow legacy-igd passthrough assignment

2020-06-18 Thread Stefan Reiter
Some Intel iGPUs support a special passthrough mode called "legacy-igd
passthrough" which requires certain conditions to be true.

The patches in this series include a fix for a regression in QEMU 5.0.0 which
was already accepted upstream for 5.1, but is very straightforward, so it can be
picked up by us in the meantime.

The qemu-server part first moves all relevant 'hostpci' code to PCI.pm to avoid
adding to the mess that is config_to_command, then adds support for legacy-igd
by shuffling around the required PCI address assignments.

A test version of the qemu-server patches in combination with a custom build of
QEMU that includes the backported fix has been tested on supported hardware by
bugzilla user Shak (see bug #2794). Much thanks!


qemu: Stefan Reiter (1):
  fix #2794: Include legacy-igd passthrough fix

 ...ks-Fix-broken-legacy-IGD-passthrough.patch | 34 +++
 debian/patches/series |  1 +
 2 files changed, 35 insertions(+)
 create mode 100644 
debian/patches/extra/0001-hw-vfio-pci-quirks-Fix-broken-legacy-IGD-passthrough.patch

qemu-server: Stefan Reiter (2):
  cfg2cmd: hostpci: move code to PCI.pm
  fix #2794: allow legacy IGD passthrough

 PVE/QemuServer.pm | 178 +++-
 PVE/QemuServer/PCI.pm | 205 +-
 2 files changed, 218 insertions(+), 165 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH qemu-server 2/3] cfg2cmd: hostpci: move code to PCI.pm

2020-06-18 Thread Stefan Reiter
To avoid further cluttering config_to_command with subsequent changes.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 170 ++
 PVE/QemuServer/PCI.pm | 170 ++
 2 files changed, 177 insertions(+), 163 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index fa71b25..1c08222 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -48,7 +48,7 @@ use PVE::QemuServer::Drive qw(is_valid_drivename 
drive_is_cloudinit drive_is_cdr
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory;
 use PVE::QemuServer::Monitor qw(mon_cmd);
-use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr 
print_pcie_root_port);
+use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr 
print_pcie_root_port parse_hostpci);
 use PVE::QemuServer::USB qw(parse_usb_device);
 
 my $have_sdn;
@@ -768,7 +768,6 @@ while (my ($k, $v) = each %$confdesc) {
 
 my $MAX_USB_DEVICES = 5;
 my $MAX_NETS = 32;
-my $MAX_HOSTPCI_DEVICES = 16;
 my $MAX_SERIAL_PORTS = 4;
 my $MAX_PARALLEL_PORTS = 3;
 my $MAX_NUMA = 8;
@@ -1011,76 +1010,6 @@ my $usbdesc = {
 };
 PVE::JSONSchema::register_standard_option("pve-qm-usb", $usbdesc);
 
-my $PCIRE = qr/([a-f0-9]{4}:)?[a-f0-9]{2}:[a-f0-9]{2}(?:\.[a-f0-9])?/;
-my $hostpci_fmt = {
-host => {
-   default_key => 1,
-   type => 'string',
-   pattern => qr/$PCIRE(;$PCIRE)*/,
-   format_description => 'HOSTPCIID[;HOSTPCIID2...]',
-   description => < {
-   type => 'boolean',
-description =>  "Specify whether or not the device's ROM will be 
visible in the guest's memory map.",
-   optional => 1,
-   default => 1,
-},
-romfile => {
-type => 'string',
-pattern => '[^,;]+',
-format_description => 'string',
-description => "Custom pci device rom filename (must be located in 
/usr/share/kvm/).",
-optional => 1,
-},
-pcie => {
-   type => 'boolean',
-description =>  "Choose the PCI-express bus (needs the 'q35' machine 
model).",
-   optional => 1,
-   default => 0,
-},
-'x-vga' => {
-   type => 'boolean',
-description =>  "Enable vfio-vga device support.",
-   optional => 1,
-   default => 0,
-},
-'mdev' => {
-   type => 'string',
-format_description => 'string',
-   pattern => '[^/\.:]+',
-   optional => 1,
-   description => < 1,
-type => 'string', format => 'pve-qm-hostpci',
-description => "Map host PCI devices into guest.",
-   verbose_description =>  < 1,
type => 'string',
@@ -1119,8 +1048,8 @@ for (my $i = 0; $i < $MAX_SERIAL_PORTS; $i++)  {
 $confdesc->{"serial$i"} = $serialdesc;
 }
 
-for (my $i = 0; $i < $MAX_HOSTPCI_DEVICES; $i++)  {
-$confdesc->{"hostpci$i"} = $hostpcidesc;
+for (my $i = 0; $i < $PVE::QemuServer::PCI::MAX_HOSTPCI_DEVICES; $i++)  {
+$confdesc->{"hostpci$i"} = $PVE::QemuServer::PCI::hostpcidesc;
 }
 
 for my $key (keys %{$PVE::QemuServer::Drive::drivedesc_hash}) {
@@ -1756,23 +1685,6 @@ sub parse_numa {
 return $res;
 }
 
-sub parse_hostpci {
-my ($value) = @_;
-
-return undef if !$value;
-
-my $res = PVE::JSONSchema::parse_property_string($hostpci_fmt, $value);
-
-my @idlist = split(/;/, $res->{host});
-delete $res->{host};
-foreach my $id (@idlist) {
-   my $devs = PVE::SysFSTools::lspci($id);
-   die "no PCI device found for '$id'\n" if !scalar(@$devs);
-   push @{$res->{pciid}}, @$devs;
-}
-return $res;
-}
-
 # netX: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0,rate=
 sub parse_net {
 my ($data) = @_;
@@ -3189,77 +3101,9 @@ sub config_to_command {
push @$devices, '-device', $kbd if defined($kbd);
 }
 
-my $kvm_off = 0;
-my $gpu_passthrough;
-
-# host pci devices
-for (my $i = 0; $i < $MAX_HOSTPCI_DEVICES; $i++)  {
-   my $id = "hostpci$i";
-   my $d = parse_hostpci($conf->{$id});
-   next if !$d;
-
-   if (my $pcie = $d->{pcie}) {
-   die "q35 machine model is not enabled" if !$q35;
-   # win7 wants to have the pcie devices directly on the pcie bus
-   # instead of in the root port
-   if ($winversion == 7) {
-   $pciaddr = print_pcie_addr("${id}bus0");
-   } else {
-   # add more root ports if needed, 4 are present by default
-   # by pve-q35 cfgs, rest added here on demand.
-   if ($i > 3) {
-   push @$devices, '-device', print_pcie_root_port($i);
-   }
-   $pciaddr = print_pcie_addr($id);
-   }
-   } else {
-   $pciaddr = print_pci_addr($id, $bridge

[pve-devel] [PATCH v2 qemu-server 3/3] Add man page cpu-models.conf(5)

2020-06-18 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Requires patched pve-docs.

 Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index cb95044..0bf3d96 100644
--- a/Makefile
+++ b/Makefile
@@ -52,7 +52,7 @@ qmrestore.zsh-completion:
mv $@.tmp $@
 
 PKGSOURCES=qm qm.1 qmrestore qmrestore.1 qmextract qm.conf.5 
qm.bash-completion qmrestore.bash-completion \
-   qm.zsh-completion qmrestore.zsh-completion
+   qm.zsh-completion qmrestore.zsh-completion cpu-models.conf.5
 
 .PHONY: install
 install: ${PKGSOURCES}
@@ -76,6 +76,7 @@ install: ${PKGSOURCES}
install -m 0755 qmextract ${DESTDIR}${LIBDIR}
install -m 0644 qm.1 ${DESTDIR}/${MAN1DIR}
install -m 0644 qmrestore.1 ${DESTDIR}/${MAN1DIR}
+   install -m 0644 cpu-models.conf.5 ${DESTDIR}/${MAN5DIR}
install -m 0644 qm.conf.5 ${DESTDIR}/${MAN5DIR}
cd ${DESTDIR}/${MAN5DIR}; ln -s -f qm.conf.5.gz vm.conf.5.gz
 
-- 
2.20.1




[pve-devel] [PATCH v2 docs 1/3] Add man-page and notes about custom CPU models

2020-06-18 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Requires qemu-server with patch 2 applied.

v2:
* improved wording from Aaron's review
Other patches are unchanged.

 Makefile  |  1 +
 cpu-models.conf.adoc  | 82 +++
 gen-cpu-models.conf.5-opts.pl | 14 ++
 qm.adoc   | 11 +
 4 files changed, 108 insertions(+)
 create mode 100644 cpu-models.conf.adoc
 create mode 100755 gen-cpu-models.conf.5-opts.pl

diff --git a/Makefile b/Makefile
index ca56e96..98b6b44 100644
--- a/Makefile
+++ b/Makefile
@@ -53,6 +53,7 @@ GEN_SCRIPTS=  \
gen-pct-network-opts.pl \
gen-pct-mountpoint-opts.pl  \
gen-qm.conf.5-opts.pl   \
+   gen-cpu-models.conf.5-opts.pl   \
gen-qm-cloud-init-opts.pl   \
gen-vzdump.conf.5-opts.pl   \
gen-pve-firewall-cluster-opts.pl\
diff --git a/cpu-models.conf.adoc b/cpu-models.conf.adoc
new file mode 100644
index 000..7b48a6e
--- /dev/null
+++ b/cpu-models.conf.adoc
@@ -0,0 +1,82 @@
+ifdef::manvolnum[]
+cpu-models.conf(5)
+==
+:pve-toplevel:
+
+NAME
+
+
+cpu-models.conf - Custom CPU model configuration file
+
+
+SYNOPSIS
+
+
+'/etc/pve/virtual-guest/cpu-models.conf'
+
+
+DESCRIPTION
+---
+endif::manvolnum[]
+
+ifndef::manvolnum[]
+Custom CPU Model Configuration
+==
+endif::manvolnum[]
+ifdef::wiki[]
+:pve-toplevel:
+:title: Manual: cpu-models.conf
+endif::wiki[]
+
+The `/etc/pve/virtual-guest/cpu-models.conf` file stores custom CPU
+models, which can be used by VMs to get access to advanced CPU
+features (for example custom CPU flags).
+
+
+File Format
+---
+
+CPU models each have their own section in the file, beginning with
+the header:
+
+ cpu-model: <name>
+
+Note that <name> does not include the 'custom-' prefix, which is
+required in VM configs to denote custom CPU models.
+
+For example, if the <name> is 'foobar', the CPU for a VM would need to be
+configured as 'custom-foobar'.
+
+Each section can specify several options. They are indented by either one TAB
+character or multiple spaces. Every option and its value is separated by one
+space, for example:
+
+ reported-model qemu64
+
+See below for all available options.
+
+Blank lines and those starting with a `#` are ignored.
+
+
+Options
+---
+
+include::cpu-models.conf.5-opts.adoc[]
+
+
+Example File
+
+
+
+cpu-model: avx
+flags +avx;+avx2
+phys-bits host
+hidden 0
+hv-vendor-id proxmox
+reported-model kvm64
+
+
+
+ifdef::manvolnum[]
+include::pve-copyright.adoc[]
+endif::manvolnum[]
diff --git a/gen-cpu-models.conf.5-opts.pl b/gen-cpu-models.conf.5-opts.pl
new file mode 100755
index 000..7c35d69
--- /dev/null
+++ b/gen-cpu-models.conf.5-opts.pl
@@ -0,0 +1,14 @@
+#!/usr/bin/perl
+
+use lib '.';
+use strict;
+use warnings;
+use PVE::RESTHandler;
+use PVE::QemuServer::CPUConfig;
+
+my $prop = PVE::QemuServer::CPUConfig::add_cpu_json_properties({});
+
+# cputype is given as section header and explained separately
+delete $prop->{cputype};
+
+print PVE::RESTHandler::dump_properties($prop);
diff --git a/qm.adoc b/qm.adoc
index cd6641f..45832e9 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -352,6 +352,17 @@ the kvm64 default. If you don’t care about live migration or have a homogeneous
 cluster where all nodes have the same CPU, set the CPU type to host, as in
 theory this will give your guests maximum performance.
 
+Custom CPU Types
+
+
+You can specify custom CPU types with a configurable set of features. These are
+maintained in the configuration file `/etc/pve/virtual-guest/cpu-models.conf` 
by
+an administrator. See `man cpu-models.conf` for format details.
+
+Specified custom types can be selected by any user with the `Sys.Audit`
+privilege on `/nodes`. When configuring a custom CPU type for a VM via the CLI
+or API, the name needs to be prefixed with 'custom-'.
+
 Meltdown / Spectre related CPU flags
 
 
-- 
2.20.1




[pve-devel] [PATCH v2 qemu-server 2/3] CPUConfig: add add_cpu_json_properties()

2020-06-18 Thread Stefan Reiter
Useful for APIs and docs.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index b884498..6250591 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -293,6 +293,16 @@ sub write_config {
 $class->SUPER::write_config($filename, $cfg);
 }
 
+sub add_cpu_json_properties {
+my ($prop) = @_;
+
+foreach my $opt (keys %$cpu_fmt) {
+   $prop->{$opt} = $cpu_fmt->{$opt};
+}
+
+return $prop;
+}
+
 sub get_cpu_models {
 my ($include_custom) = @_;
 
-- 
2.20.1




[pve-devel] [PATCH docs 1/3] Add man-page and notes about custom CPU models

2020-06-18 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Requires qemu-server with patch 2 applied.

 Makefile  |  1 +
 cpu-models.conf.adoc  | 82 +++
 gen-cpu-models.conf.5-opts.pl | 14 ++
 qm.adoc   | 11 +
 4 files changed, 108 insertions(+)
 create mode 100644 cpu-models.conf.adoc
 create mode 100755 gen-cpu-models.conf.5-opts.pl

diff --git a/Makefile b/Makefile
index ca56e96..98b6b44 100644
--- a/Makefile
+++ b/Makefile
@@ -53,6 +53,7 @@ GEN_SCRIPTS=  \
gen-pct-network-opts.pl \
gen-pct-mountpoint-opts.pl  \
gen-qm.conf.5-opts.pl   \
+   gen-cpu-models.conf.5-opts.pl   \
gen-qm-cloud-init-opts.pl   \
gen-vzdump.conf.5-opts.pl   \
gen-pve-firewall-cluster-opts.pl\
diff --git a/cpu-models.conf.adoc b/cpu-models.conf.adoc
new file mode 100644
index 000..02f1834
--- /dev/null
+++ b/cpu-models.conf.adoc
@@ -0,0 +1,82 @@
+ifdef::manvolnum[]
+cpu-models.conf(5)
+==
+:pve-toplevel:
+
+NAME
+
+
+cpu-models.conf - Custom CPU model configuration file
+
+
+SYNOPSIS
+
+
+'/etc/pve/virtual-guest/cpu-models.conf'
+
+
+DESCRIPTION
+---
+endif::manvolnum[]
+
+ifndef::manvolnum[]
+Custom CPU Model Configuration
+==
+endif::manvolnum[]
+ifdef::wiki[]
+:pve-toplevel:
+:title: Manual: cpu-models.conf
+endif::wiki[]
+
+The `/etc/pve/virtual-guest/cpu-models.conf` file stores custom CPU
+models, which can be used by VMs to get access to advanced CPU
+features (e.g. custom CPU flags).
+
+
+File Format
+---
+
+CPU models each have their own section in the file, beginning with
+the header:
+
+ cpu-model: <name>
+
+Note that <name> does not include the 'custom-' prefix, which is
+required in VM configs to denote custom CPU models.
+
+E.g. when <name> is 'example', a VM can use the model as
+'custom-example'.
+
+Each section can specify options indented by one TAB character
+or multiple spaces and separated from the value with a space, e.g.:
+
+ reported-model qemu64
+
+See below for all available options.
+
+Blank lines in the file are ignored, and lines starting with a `#`
+character are treated as comments and are also ignored.
+
+
+Options
+---
+
+include::cpu-models.conf.5-opts.adoc[]
+
+
+Example File
+
+
+
+cpu-model: avx
+flags +avx;+avx2
+phys-bits host
+hidden 0
+hv-vendor-id proxmox
+reported-model kvm64
+
+
+
+ifdef::manvolnum[]
+include::pve-copyright.adoc[]
+endif::manvolnum[]
diff --git a/gen-cpu-models.conf.5-opts.pl b/gen-cpu-models.conf.5-opts.pl
new file mode 100755
index 000..7c35d69
--- /dev/null
+++ b/gen-cpu-models.conf.5-opts.pl
@@ -0,0 +1,14 @@
+#!/usr/bin/perl
+
+use lib '.';
+use strict;
+use warnings;
+use PVE::RESTHandler;
+use PVE::QemuServer::CPUConfig;
+
+my $prop = PVE::QemuServer::CPUConfig::add_cpu_json_properties({});
+
+# cputype is given as section header and explained separately
+delete $prop->{cputype};
+
+print PVE::RESTHandler::dump_properties($prop);
diff --git a/qm.adoc b/qm.adoc
index cd6641f..f94140f 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -352,6 +352,17 @@ the kvm64 default. If you don’t care about live migration or have a homogeneous
 cluster where all nodes have the same CPU, set the CPU type to host, as in
 theory this will give your guests maximum performance.
 
+Custom CPU Types
+
+
+You can also specify custom CPU types with a configurable set of features. 
These
+are specified in the configuration file 
`/etc/pve/virtual-guest/cpu-models.conf`
+by an administrator. See `man cpu-models.conf` for format details.
+
+Once specified, custom types can easily be selected via the GUI by any user 
with
+`Sys.Audit` permissions on `/nodes` or via the CLI, though the latter (as well
+as the API) requires the name to be prefixed with 'custom-'.
+
 Meltdown / Spectre related CPU flags
 
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server 2/3] CPUConfig: add add_cpu_json_properties()

2020-06-18 Thread Stefan Reiter
Useful for APIs and docs.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index b884498..6250591 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -293,6 +293,16 @@ sub write_config {
 $class->SUPER::write_config($filename, $cfg);
 }
 
+sub add_cpu_json_properties {
+my ($prop) = @_;
+
+foreach my $opt (keys %$cpu_fmt) {
+   $prop->{$opt} = $cpu_fmt->{$opt};
+}
+
+return $prop;
+}
+
 sub get_cpu_models {
 my ($include_custom) = @_;
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server 3/3] Add man page cpu-models.conf(5)

2020-06-18 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Requires updated pve-docs.

 Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index cb95044..0bf3d96 100644
--- a/Makefile
+++ b/Makefile
@@ -52,7 +52,7 @@ qmrestore.zsh-completion:
mv $@.tmp $@
 
 PKGSOURCES=qm qm.1 qmrestore qmrestore.1 qmextract qm.conf.5 
qm.bash-completion qmrestore.bash-completion \
-   qm.zsh-completion qmrestore.zsh-completion
+   qm.zsh-completion qmrestore.zsh-completion cpu-models.conf.5
 
 .PHONY: install
 install: ${PKGSOURCES}
@@ -76,6 +76,7 @@ install: ${PKGSOURCES}
install -m 0755 qmextract ${DESTDIR}${LIBDIR}
install -m 0644 qm.1 ${DESTDIR}/${MAN1DIR}
install -m 0644 qmrestore.1 ${DESTDIR}/${MAN1DIR}
+   install -m 0644 cpu-models.conf.5 ${DESTDIR}/${MAN5DIR}
install -m 0644 qm.conf.5 ${DESTDIR}/${MAN5DIR}
cd ${DESTDIR}/${MAN5DIR}; ln -s -f qm.conf.5.gz vm.conf.5.gz
 
-- 
2.20.1




[pve-devel] [PATCH v2 common 1/2] JSONSchema: add format validator support and cleanup check_format

2020-06-17 Thread Stefan Reiter
Adds a third, optional parameter to register_format that allows specifying
a function that will be called after parsing and can validate the parsed
data. A validator should die on failed validation, and can also change the
parsed object by returning a modified version of it.

This is useful so one can register a format with its hash, thus allowing
documentation to be generated automatically, while still enforcing certain
validation rules.

The validator only needs to be called in parse_property_string, since
check_format always calls parse_property_string if there is a
possibility of a validator existing at all. parse_property_string should
then be called with named formats for best effect, as only then can
validators be used.
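
A minimal standalone sketch of this registration/validation flow (hypothetical format name and a trivial k=v parser standing in for the real schema-aware one in PVE::JSONSchema):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy re-implementation of the register_format() validator pattern.
my $format_list = {};
my $format_validators = {};

sub register_format {
    my ($name, $format, $validator) = @_;
    die "format '$name' already registered\n" if $format_list->{$name};
    if ($validator) {
        die "a validator requires a hash-based format\n" if ref($format) ne 'HASH';
        $format_validators->{$name} = $validator;
    }
    $format_list->{$name} = $format;
}

sub parse_property_string {
    my ($name, $value) = @_;
    die "undefined format '$name'\n" if !$format_list->{$name};
    # trivial "k=v,k=v" parser standing in for the real schema parser
    my $res = { map { split(/=/, $_, 2) } split(/,/, $value) };
    if (my $validator = $format_validators->{$name}) {
        # a validator may die, or return a modified parsed object
        $res = $validator->($res);
    }
    return $res;
}

# hash-based format plus a validator, as the patch allows
register_format('demo-cpu-conf', { cputype => {} }, sub {
    my ($cpu) = @_;
    die "CPU is missing cputype\n" if !$cpu->{cputype};
    return $cpu;
});

my $cpu = parse_property_string('demo-cpu-conf', 'cputype=qemu64,flags=+avx');
print "parsed cputype: $cpu->{cputype}\n";    # parsed cputype: qemu64
```

The point being: the format hash stays available for documentation generation, while the extra validation rules still run on every parse.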

Clean up 'check_format' as well (which pretty much amounts to a rewrite).
No existing functionality is intentionally changed.

Signed-off-by: Stefan Reiter 
---
 src/PVE/JSONSchema.pm | 87 +++
 1 file changed, 55 insertions(+), 32 deletions(-)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 84fb694..f987006 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -121,19 +121,26 @@ register_standard_option('pve-snapshot-name', {
 });
 
 my $format_list = {};
+my $format_validators = {};
 
 sub register_format {
-my ($format, $code) = @_;
+my ($name, $format, $validator) = @_;
 
-die "JSON schema format '$format' already registered\n"
-   if $format_list->{$format};
+die "JSON schema format '$name' already registered\n"
+   if $format_list->{$name};
 
-$format_list->{$format} = $code;
+if ($validator) {
+   die "A \$validator function can only be specified for hash-based 
formats\n"
+   if ref($format) ne 'HASH';
+   $format_validators->{$name} = $validator;
+}
+
+$format_list->{$name} = $format;
 }
 
 sub get_format {
-my ($format) = @_;
-return $format_list->{$format};
+my ($name) = @_;
+return $format_list->{$name};
 }
 
 my $renderer_hash = {};
@@ -647,39 +654,47 @@ sub pve_verify_tfa_secret {
 sub check_format {
 my ($format, $value, $path) = @_;
 
-return parse_property_string($format, $value, $path) if ref($format) eq 
'HASH';
-return if $format eq 'regex';
+if (ref($format) eq 'HASH') {
+   # hash ref cannot have validator/list/opt handling attached
+   return parse_property_string($format, $value, $path);
+}
 
-if ($format =~ m/^(.*)-a?list$/) {
+if (ref($format) eq 'CODE') {
+   # we are the (sole, old-style) validator
+   return $format->($value);
+}
+
+return if $format eq 'regex';
 
-   my $code = $format_list->{$1};
+my $parsed;
+$format =~ m/^(.*?)(?:-a?(list|opt))?$/;
+my ($format_name, $format_type) = ($1, $2 // 'none');
+my $registered = get_format($format_name);
+die "undefined format '$format'\n" if !$registered;
 
-   die "undefined format '$format'\n" if !$code;
+die "'-$format_type' format must have code ref, not hash\n"
+   if $format_type ne 'none' && ref($registered) ne 'CODE';
 
+if ($format_type eq 'list') {
# Note: we allow empty lists
foreach my $v (split_list($value)) {
-   &$code($v);
+   $parsed = $registered->($v);
}
-
-} elsif ($format =~ m/^(.*)-opt$/) {
-
-   my $code = $format_list->{$1};
-
-   die "undefined format '$format'\n" if !$code;
-
-   return if !$value; # allow empty string
-
-   &$code($value);
-
+} elsif ($format_type eq 'opt') {
+   $parsed = $registered->($value) if $value;
} else {
-
-   my $code = $format_list->{$format};
-
-   die "undefined format '$format'\n" if !$code;
-
-   return parse_property_string($code, $value, $path) if ref($code) eq 
'HASH';
-   &$code($value);
+   if (ref($registered) eq 'HASH') {
+   # Note: this is the only case where a validator function could be
+   # attached, hence it's safe to handle that in parse_property_string.
+   # We do however have to call it with $format_name instead of
+   # $registered, so it knows about the name (and thus any validators).
+   $parsed = parse_property_string($format, $value, $path);
+   } else {
+   $parsed = $registered->($value);
+   }
 }
+
+return $parsed;
 }
 
 sub parse_size {
@@ -735,9 +750,16 @@ sub parse_property_string {
 $additional_properties = 0 if !defined($additional_properties);
 
 # Support named formats here, too:
+my $validator;
 if (!ref($format)) {
-   if (my $desc = $format_list->{$format}) {
-   $format = $desc;
+   if (my $reg = get_format($format)) {
+   die "parse_property_string only accepts hash based named formats\n"
+   if ref($reg) ne 'HASH';
+
+ 

[pve-devel] [PATCH v2 qemu-server 2/2] fix #2671: include CPU format in man page again

2020-06-17 Thread Stefan Reiter
Use the new register_format(3) call to use a validator (instead of a
parser) for 'pve-(vm-)?cpu-conf'. This way the $cpu_fmt hash can be used for
generating the documentation, while still applying the same verification
rules as before.

Since the function no longer parses but only verifies, the parsing in
print_cpu_device/get_cpu_options has to go via JSONSchema directly.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 56 -
 1 file changed, 18 insertions(+), 38 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index b884498..4eeb589 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -153,6 +153,7 @@ my $cpu_fmt = {
 'phys-bits' => {
type => 'string',
format => 'pve-phys-bits',
+   format_description => '8-64|host',
description => "The physical memory address bits that are reported to"
 . " the guest OS. Should be smaller or equal to the 
host's."
 . " Set to 'host' to use value from host CPU, but note 
that"
@@ -182,57 +183,36 @@ sub parse_phys_bits {
 
 # $cpu_fmt describes both the CPU config passed as part of a VM config, as well
 # as the definition of a custom CPU model. There are some slight differences
-# though, which we catch in the custom verification function below.
-PVE::JSONSchema::register_format('pve-cpu-conf', \&parse_cpu_conf_basic);
-sub parse_cpu_conf_basic {
-my ($cpu_str, $noerr) = @_;
-
-my $cpu = eval { PVE::JSONSchema::parse_property_string($cpu_fmt, 
$cpu_str) };
-if ($@) {
-die $@ if !$noerr;
-return undef;
-}
+# though, which we catch in the custom validation functions below.
+PVE::JSONSchema::register_format('pve-cpu-conf', $cpu_fmt, \&validate_cpu_conf);
+sub validate_cpu_conf {
+my ($cpu) = @_;
 
 # required, but can't be forced in schema since it's encoded in section
 # header for custom models
-if (!$cpu->{cputype}) {
-   die "CPU is missing cputype\n" if !$noerr;
-   return undef;
-}
-
-return $cpu;
+die "CPU is missing cputype\n" if !$cpu->{cputype};
 }
+PVE::JSONSchema::register_format('pve-vm-cpu-conf', $cpu_fmt, \&validate_vm_cpu_conf);
+sub validate_vm_cpu_conf {
+my ($cpu) = @_;
 
-PVE::JSONSchema::register_format('pve-vm-cpu-conf', \&parse_vm_cpu_conf);
-sub parse_vm_cpu_conf {
-my ($cpu_str, $noerr) = @_;
-
-my $cpu = parse_cpu_conf_basic($cpu_str, $noerr);
-return undef if !$cpu;
+validate_cpu_conf($cpu);
 
 my $cputype = $cpu->{cputype};
 
 # a VM-specific config is only valid if the cputype exists
 if (is_custom_model($cputype)) {
-   eval { get_custom_model($cputype); };
-   if ($@) {
-   die $@ if !$noerr;
-   return undef;
-   }
+   # dies on unknown model
+   get_custom_model($cputype);
 } else {
-   if (!defined($cpu_vendor_list->{$cputype})) {
-	die "Built-in cputype '$cputype' is not defined (missing 'custom-' prefix?)\n" if !$noerr;
-   return undef;
-   }
+	die "Built-in cputype '$cputype' is not defined (missing 'custom-' prefix?)\n"
+   if !defined($cpu_vendor_list->{$cputype});
 }
 
 # in a VM-specific config, certain properties are limited/forbidden
 
-if ($cpu->{flags} && $cpu->{flags} !~ m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/) {
-	die "VM-specific CPU flags must be a subset of: @{[join(', ', @supported_cpu_flags)]}\n"
-	    if !$noerr;
-	return undef;
-}
+die "VM-specific CPU flags must be a subset of: @{[join(', ', @supported_cpu_flags)]}\n"
+	if ($cpu->{flags} && $cpu->{flags} !~ m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/);
 
 die "Property 'reported-model' not allowed in VM-specific CPU config.\n"
if defined($cpu->{'reported-model'});
@@ -359,7 +339,7 @@ sub print_cpu_device {
 my $kvm = $conf->{kvm} // 1;
 my $cpu = $kvm ? "kvm64" : "qemu64";
 if (my $cputype = $conf->{cpu}) {
-   my $cpuconf = parse_cpu_conf_basic($cputype)
+	my $cpuconf = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', $cputype)
or die "Cannot parse cpu description: $cputype\n";
$cpu = $cpuconf->{cputype};
 
@@ -471,7 +451,7 @@ sub get_cpu_options {
 my $custom_cpu;
 my $hv_vendor_id;
 if (my $cpu_prop_str = $conf->{cpu}) {
-   $cpu = parse_vm_cpu_conf($cpu_prop_str)
+	$cpu = PVE::JSONSchema::parse_property_string('pve-vm-cpu-conf', $cpu_prop_str)
or die "Cannot parse cpu description: $cpu_prop_str\n";
 
$cputype = $cpu->{cputype};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
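To make the mechanism concrete, here is a rough Python model of the validator-based format registration the patch above implements in Perl. The names loosely mirror PVE::JSONSchema, but this is only an illustrative sketch, not PVE code: parsing stays generic, and the registered validator runs after parsing succeeds.

```python
# Toy model of JSONSchema-style named formats with optional validators.
# The real code is Perl (PVE::JSONSchema); names here mirror it loosely.

FORMATS = {}

def register_format(name, fmt, validator=None):
    FORMATS[name] = (fmt, validator)

def parse_property_string(fmt_name, prop_str):
    fmt, validator = FORMATS[fmt_name]
    data = {}
    for part in prop_str.split(","):
        key, _, value = part.partition("=")
        if not value:                 # bare value -> default key, like cputype
            key, value = "cputype", key
        if key not in fmt:
            raise ValueError(f"invalid key '{key}'")
        data[key] = value
    if validator:                     # validation happens *after* parsing
        validator(data)
    return data

def validate_cpu_conf(cpu):
    # required, but not forced in the schema itself
    if "cputype" not in cpu:
        raise ValueError("CPU is missing cputype")

register_format("pve-cpu-conf", {"cputype": str, "flags": str},
                validate_cpu_conf)

conf = parse_property_string("pve-cpu-conf", "host,flags=+aes")
```

A bare leading value maps to `cputype` here, so `"host,flags=+aes"` parses, while a string without any cputype is rejected by the validator rather than by the parser.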


[pve-devel] [PATCH v2 0/2] Add format validator support

2020-06-17 Thread Stefan Reiter
Version 2 of an older series, fixes "-cpu" format documentation in man pages by
introducing format validators, which allow custom validation functions to be
used together with format hashes.

The plan is to change occurrences of hash-based formats in parse_property_string
to named formats, so that future validators would be called. Such validators
would enable many formats which currently have no representation in the docs to
be included there.

v1 -> v2:
* Do validation in parse_property_string
* Re-refactor check_format based on Fabian's review
* Validate "cputype" in "pve-cpu-conf", not "pve-vm-cpu-conf"
* Use parse_property_string instead of check_format for parsing in CPUConfig


common: Stefan Reiter (1):
  JSONSchema: add format validator support and cleanup check_format

 src/PVE/JSONSchema.pm | 87 +++
 1 file changed, 55 insertions(+), 32 deletions(-)

qemu-server: Stefan Reiter (1):
  fix #2671: include CPU format in man page again

 PVE/QemuServer/CPUConfig.pm | 56 -
 1 file changed, 18 insertions(+), 38 deletions(-)

-- 
2.20.1



Re: [pve-devel] [PATCH v2 manager 3/6] api: register /nodes/X/cpu call for CPU models

2020-06-17 Thread Stefan Reiter

On 6/17/20 3:12 PM, Thomas Lamprecht wrote:

Am 6/10/20 um 3:40 PM schrieb Stefan Reiter:

Ping on this? The API part in qemu-server was already applied, but it's not 
callable without this (and not very useful without the GUI patches).


what about the replies to 2/6 - I thought some api paths are moving or the like?



That was just about the "links => " part, which will be fixed in my 
upcoming API patches. It has nothing to do with the remaining GUI code.




Re: [pve-devel] [PATCH v2 manager 3/6] api: register /nodes/X/cpu call for CPU models

2020-06-10 Thread Stefan Reiter
Ping on this? The API part in qemu-server was already applied, but it's 
not callable without this (and not very useful without the GUI patches).


On 5/4/20 12:58 PM, Stefan Reiter wrote:

Signed-off-by: Stefan Reiter 
---

Depends on updated qemu-server.

  PVE/API2/Nodes.pm | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 58497b2b..2ac838ea 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -34,6 +34,7 @@ use PVE::API2::Tasks;
  use PVE::API2::Scan;
  use PVE::API2::Storage::Status;
  use PVE::API2::Qemu;
+use PVE::API2::Qemu::CPU;
  use PVE::API2::LXC;
  use PVE::API2::LXC::Status;
  use PVE::API2::VZDump;
@@ -65,6 +66,11 @@ __PACKAGE__->register_method ({
  path => 'qemu',
  });
  
+__PACKAGE__->register_method ({

+subclass => "PVE::API2::Qemu::CPU",
+path => 'cpu',
+});
+
  __PACKAGE__->register_method ({
  subclass => "PVE::API2::LXC",
  path => 'lxc',
@@ -241,6 +247,7 @@ __PACKAGE__->register_method ({
{ name => 'certificates' },
{ name => 'config' },
{ name => 'hosts' },
+   { name => 'cpu' },
];
  
  	push @$result, { name => 'sdn' } if $have_sdn;






[pve-devel] [PATCH qemu 4/4] savevm-async: add debug timing prints

2020-05-27 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Doesn't have to be applied, but I thought I'd send it along anyway, since it
helped me test patch 3 greatly.

 savevm-async.c | 16 
 1 file changed, 16 insertions(+)

diff --git a/savevm-async.c b/savevm-async.c
index 4ce83a0691..8848884593 100644
--- a/savevm-async.c
+++ b/savevm-async.c
@@ -202,6 +202,8 @@ static void process_savevm_finalize(void *opaque)
 AioContext *iohandler_ctx = iohandler_get_aio_context();
 MigrationState *ms = migrate_get_current();
 
+int64_t start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+
 qemu_bh_delete(snap_state.finalize_bh);
 snap_state.finalize_bh = NULL;
 snap_state.co = NULL;
@@ -226,6 +228,8 @@ static void process_savevm_finalize(void *opaque)
 }
 
 DPRINTF("state saving complete\n");
+DPRINTF("timing: process_savevm_finalize (state saving) took %ld ms\n",
+qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - start_time);
 
 /* clear migration state */
migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP,
@@ -247,6 +251,9 @@ static void process_savevm_finalize(void *opaque)
 vm_start();
 snap_state.saved_vm_running = false;
 }
+
+DPRINTF("timing: process_savevm_finalize (full) took %ld ms\n",
+qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - start_time);
 }
 
 static void coroutine_fn process_savevm_co(void *opaque)
@@ -256,6 +263,8 @@ static void coroutine_fn process_savevm_co(void *opaque)
 BdrvNextIterator it;
 BlockDriverState *bs = NULL;
 
+int64_t start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+
 ret = qemu_file_get_error(snap_state.file);
 if (ret < 0) {
 save_snapshot_error("qemu_savevm_state_setup failed");
@@ -290,11 +299,15 @@ static void coroutine_fn process_savevm_co(void *opaque)
 }
 }
 
+DPRINTF("timing: process_savevm_co took %ld ms\n",
+qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - start_time);
+
 /* If a drive runs in an IOThread we can flush it async, and only
  * need to sync-flush whatever IO happens between now and
  * vm_stop_force_state. bdrv_next can only be called from main AioContext,
  * so move there now and after every flush.
  */
+int64_t start_time_flush = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 aio_co_reschedule_self(qemu_get_aio_context());
 for (bs = bdrv_first(); bs; bs = bdrv_next()) {
 /* target has BDRV_O_NO_FLUSH, no sense calling bdrv_flush on it */
@@ -311,6 +324,9 @@ static void coroutine_fn process_savevm_co(void *opaque)
 }
 }
 
+DPRINTF("timing: async flushing took %ld ms\n",
+qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - start_time_flush);
+
 qemu_bh_schedule(snap_state.finalize_bh);
 }
 
-- 
2.20.1


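The DPRINTF timing pattern the patch above adds — snapshot a start time, then report the elapsed milliseconds at each milestone — can be sketched as a small Python helper. This is illustrative only; the real code is plain C using qemu_clock_get_ms(QEMU_CLOCK_REALTIME).

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(label):
    # same pattern as the patch: record a start time, report the delta
    start = time.monotonic()
    try:
        yield
    finally:
        ms = int((time.monotonic() - start) * 1000)
        timings[label] = ms
        print(f"timing: {label} took {ms} ms")

with timed("process_savevm_finalize (full)"):
    time.sleep(0.05)             # stands in for the state-saving work
```

Wrapping each phase in such a scope is what makes it easy to attribute downtime to individual steps, which is exactly how the patch was used to evaluate patch 3.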


[pve-devel] [PATCH qemu 2/4] util/async: Add aio_co_reschedule_self()

2020-05-27 Thread Stefan Reiter
From: Kevin Wolf 

From: Kevin Wolf 

Add a function that can be used to move the currently running coroutine
to a different AioContext (and therefore potentially a different
thread).

Signed-off-by: Kevin Wolf 
---

Required for patch 3. See this discussion on the QEMU mailing list:
https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg07421.html

 include/block/aio.h | 10 ++
 util/async.c| 30 ++
 2 files changed, 40 insertions(+)

diff --git a/include/block/aio.h b/include/block/aio.h
index 62ed954344..d5399c67d6 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -17,6 +17,7 @@
 #ifdef CONFIG_LINUX_IO_URING
#include <liburing.h>
 #endif
+#include "qemu/coroutine.h"
 #include "qemu/queue.h"
 #include "qemu/event_notifier.h"
 #include "qemu/thread.h"
@@ -654,6 +655,15 @@ static inline bool aio_node_check(AioContext *ctx, bool 
is_external)
  */
 void aio_co_schedule(AioContext *ctx, struct Coroutine *co);
 
+/**
+ * aio_co_reschedule_self:
+ * @new_ctx: the new context
+ *
+ * Move the currently running coroutine to new_ctx. If the coroutine is already
+ * running in new_ctx, do nothing.
+ */
+void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx);
+
 /**
  * aio_co_wake:
  * @co: the coroutine
diff --git a/util/async.c b/util/async.c
index 3165a28f2f..4eba1e6f1b 100644
--- a/util/async.c
+++ b/util/async.c
@@ -558,6 +558,36 @@ void aio_co_schedule(AioContext *ctx, Coroutine *co)
 aio_context_unref(ctx);
 }
 
+typedef struct AioCoRescheduleSelf {
+Coroutine *co;
+AioContext *new_ctx;
+} AioCoRescheduleSelf;
+
+static void aio_co_reschedule_self_bh(void *opaque)
+{
+AioCoRescheduleSelf *data = opaque;
+aio_co_schedule(data->new_ctx, data->co);
+}
+
+void coroutine_fn aio_co_reschedule_self(AioContext *new_ctx)
+{
+AioContext *old_ctx = qemu_get_current_aio_context();
+
+if (old_ctx != new_ctx) {
+AioCoRescheduleSelf data = {
+.co = qemu_coroutine_self(),
+.new_ctx = new_ctx,
+};
+/*
+ * We can't directly schedule the coroutine in the target context
+ * because this would be racy: The other thread could try to enter the
+ * coroutine before it has yielded in this one.
+ */
+aio_bh_schedule_oneshot(old_ctx, aio_co_reschedule_self_bh, &data);
+qemu_coroutine_yield();
+}
+}
+
 void aio_co_wake(struct Coroutine *co)
 {
 AioContext *ctx;
-- 
2.20.1


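The ordering trick in aio_co_reschedule_self() — schedule the hop as a one-shot BH in the *old* context so the coroutine cannot be entered in the new context before it has yielded — can be illustrated with a toy Python model. Two worker threads stand in for AioContexts; this is a loose analogy, not QEMU code.

```python
import queue
import threading

class Ctx:
    """Toy stand-in for an AioContext: one thread draining a callback queue."""
    def __init__(self, name):
        self.name = name
        self._q = queue.SimpleQueue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            self._q.get()()

    def schedule(self, fn):
        self._q.put(fn)

def reschedule_self(continuation, old_ctx, new_ctx):
    if old_ctx is new_ctx:
        continuation(new_ctx)
        return
    # Like aio_bh_schedule_oneshot() in the patch: hop via the *old*
    # context, so the continuation cannot be entered in new_ctx before
    # the currently running callback has returned ("yielded").
    old_ctx.schedule(lambda: new_ctx.schedule(lambda: continuation(new_ctx)))

main_ctx, io_ctx = Ctx("main"), Ctx("iothread")
done = threading.Event()
trace = []

def phase2(ctx):
    trace.append(("phase2", ctx.name))
    done.set()

def phase1():
    trace.append(("phase1", "main"))
    reschedule_self(phase2, main_ctx, io_ctx)

main_ctx.schedule(phase1)
done.wait(5)
```

Because the hop callback sits behind phase1 in the old context's queue, phase2 is guaranteed to run only after phase1 has fully returned, mirroring the race the comment in the patch describes.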


[pve-devel] [PATCH qemu 3/4] savevm-async: flush IOThread-drives async before entering blocking part

2020-05-27 Thread Stefan Reiter
By flushing all drives where it's possible to do so before entering the
blocking part (where the VM is stopped), we can reduce the time spent in
said part for every disk that has an IOThread (other drives cannot be
flushed async anyway).

Signed-off-by: Stefan Reiter 
---

It's a bit hard to get benchmark numbers here, since snapshot timings tend to
vary greatly for me. But with some very unscientific testing using patch 4 and
some gut feeling, on a VM running 'stress-ng -d 4' I'd say it shaves off about
2-3 seconds of VM downtime on average. I've never seen it produce worse results
than without it though, even for idle VMs.

 savevm-async.c | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/savevm-async.c b/savevm-async.c
index 2894c94233..4ce83a0691 100644
--- a/savevm-async.c
+++ b/savevm-async.c
@@ -253,6 +253,8 @@ static void coroutine_fn process_savevm_co(void *opaque)
 {
 int ret;
 int64_t maxlen;
+BdrvNextIterator it;
+BlockDriverState *bs = NULL;
 
 ret = qemu_file_get_error(snap_state.file);
 if (ret < 0) {
@@ -288,6 +290,27 @@ static void coroutine_fn process_savevm_co(void *opaque)
 }
 }
 
+/* If a drive runs in an IOThread we can flush it async, and only
+ * need to sync-flush whatever IO happens between now and
+ * vm_stop_force_state. bdrv_next can only be called from main AioContext,
+ * so move there now and after every flush.
+ */
+aio_co_reschedule_self(qemu_get_aio_context());
+for (bs = bdrv_first(); bs; bs = bdrv_next()) {
+/* target has BDRV_O_NO_FLUSH, no sense calling bdrv_flush on it */
+if (bs == blk_bs(snap_state.target)) {
+continue;
+}
+
+AioContext *bs_ctx = bdrv_get_aio_context(bs);
+if (bs_ctx != qemu_get_aio_context()) {
+DPRINTF("savevm: async flushing drive %s\n", bs->filename);
+aio_co_reschedule_self(bs_ctx);
+bdrv_flush(bs);
+aio_co_reschedule_self(qemu_get_aio_context());
+}
+}
+
 qemu_bh_schedule(snap_state.finalize_bh);
 }
 
-- 
2.20.1


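The scheduling idea of the patch above — flush whatever can be flushed while the VM still runs, leaving only the residual work for the stopped-VM window — can be modeled with a thread pool. This is a toy sketch with made-up disk names and timings, not the QEMU block layer:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Toy disks: (name, has_iothread, flush_seconds) -- made-up values
disks = [("scsi0", True, 0.2), ("scsi1", True, 0.2), ("ide0", False, 0.1)]

def flush(name, seconds):
    time.sleep(seconds)          # stands in for bdrv_flush()
    return name

with ThreadPoolExecutor() as pool:
    # phase 1 (VM still running): flush IOThread-backed disks concurrently
    futures = [pool.submit(flush, n, s) for n, io, s in disks if io]
    flushed_early = {f.result() for f in futures}

# phase 2 ("VM stopped"): only the remaining disks need a sync flush now
downtime_start = time.monotonic()
flushed_late = [flush(n, s) for n, io, s in disks if not io]
downtime = time.monotonic() - downtime_start
print(f"sync-flush window: {downtime*1000:.0f} ms for {flushed_late}")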


[pve-devel] [PATCH qemu 1/4] savevm-async: move more code to cleanup and rename to finalize

2020-05-27 Thread Stefan Reiter
process_savevm_cleanup is renamed to process_savevm_finalize to
accommodate more code that is not all cleanup related.

The benefit of this is that it allows us to call functions which need to
run in the main AIOContext directly. It doesn't majorly affect snapshot
performance, since the first instruction that is moved stops the VM,
so the downtime stays about the same.

The target bdrv is additionally moved to the IOHandler context before
process_savevm_co to make sure the coroutine can call functions that
require it to own the bdrv's context. process_savevm_finalize then moves
it back to the main context to run its part.

Signed-off-by: Stefan Reiter 
---

Can be applied standalone.

 savevm-async.c | 87 +-
 1 file changed, 51 insertions(+), 36 deletions(-)

diff --git a/savevm-async.c b/savevm-async.c
index c3fe741c38..2894c94233 100644
--- a/savevm-async.c
+++ b/savevm-async.c
@@ -50,7 +50,7 @@ static struct SnapshotState {
 int saved_vm_running;
 QEMUFile *file;
 int64_t total_time;
-QEMUBH *cleanup_bh;
+QEMUBH *finalize_bh;
 Coroutine *co;
 } snap_state;
 
@@ -196,12 +196,42 @@ static const QEMUFileOps block_file_ops = {
 .close =  block_state_close,
 };
 
-static void process_savevm_cleanup(void *opaque)
+static void process_savevm_finalize(void *opaque)
 {
 int ret;
-qemu_bh_delete(snap_state.cleanup_bh);
-snap_state.cleanup_bh = NULL;
+AioContext *iohandler_ctx = iohandler_get_aio_context();
+MigrationState *ms = migrate_get_current();
+
+qemu_bh_delete(snap_state.finalize_bh);
+snap_state.finalize_bh = NULL;
 snap_state.co = NULL;
+
+/* We need to own the target bdrv's context for the following functions,
+ * so move it back. It can stay in the main context and live out its life
+ * there, since we're done with it after this method ends anyway.
+ */
+aio_context_acquire(iohandler_ctx);
+blk_set_aio_context(snap_state.target, qemu_get_aio_context(), NULL);
+aio_context_release(iohandler_ctx);
+
+ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
+if (ret < 0) {
+save_snapshot_error("vm_stop_force_state error %d", ret);
+}
+
+(void)qemu_savevm_state_complete_precopy(snap_state.file, false, false);
+ret = qemu_file_get_error(snap_state.file);
+if (ret < 0) {
+save_snapshot_error("qemu_savevm_state_iterate error %d", ret);
+}
+
+DPRINTF("state saving complete\n");
+
+/* clear migration state */
+migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP,
+  ret ? MIGRATION_STATUS_FAILED : MIGRATION_STATUS_COMPLETED);
+ms->to_dst_file = NULL;
+
 qemu_savevm_state_cleanup();
 
 ret = save_snapshot_cleanup();
@@ -219,16 +249,15 @@ static void process_savevm_cleanup(void *opaque)
 }
 }
 
-static void process_savevm_coro(void *opaque)
+static void coroutine_fn process_savevm_co(void *opaque)
 {
 int ret;
 int64_t maxlen;
-MigrationState *ms = migrate_get_current();
 
 ret = qemu_file_get_error(snap_state.file);
 if (ret < 0) {
 save_snapshot_error("qemu_savevm_state_setup failed");
-goto out;
+return;
 }
 
 while (snap_state.state == SAVE_STATE_ACTIVE) {
@@ -245,7 +274,7 @@ static void process_savevm_coro(void *opaque)
 save_snapshot_error("qemu_savevm_state_iterate error %d", ret);
 break;
 }
-DPRINTF("savevm inerate pending size %lu ret %d\n", pending_size, ret);
+DPRINTF("savevm iterate pending size %lu ret %d\n", pending_size, ret);
 } else {
 qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
 ret = global_state_store();
@@ -253,40 +282,20 @@ static void process_savevm_coro(void *opaque)
 save_snapshot_error("global_state_store error %d", ret);
 break;
 }
-ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
-if (ret < 0) {
-save_snapshot_error("vm_stop_force_state error %d", ret);
-break;
-}
-DPRINTF("savevm inerate finished\n");
-/* upstream made the return value here inconsistent
- * (-1 instead of 'ret' in one case and 0 after flush which can
- * still set a file error...)
- */
-(void)qemu_savevm_state_complete_precopy(snap_state.file, false, false);
-ret = qemu_file_get_error(snap_state.file);
-if (ret < 0) {
-save_snapshot_error("qemu_savevm_state_iterate error %d", ret);
-break;
-}
-DPRINTF("save complete\n");
+
+DPRINTF("savevm iterate complete\n");
 b

[pve-devel] [PATCH qemu 0/4] Fix vmstate-snapshots w/ iothread=1

2020-05-27 Thread Stefan Reiter
Once again, iothreads making trouble. When enabled, snapshots including RAM
deadlock QEMU, because our async-snapshot implementation (which recently moved
back to using coroutines) tries to access and modify the state of disks running
in separate iothreads from the main one.

Patch 1/4 fixes the issue and can be applied standalone, patches 2 and 3 improve
snapshot performance for iothread-disks and patch 4 adds some useful debug
prints for testing the aforementioned performance patches. See individual patch
notes for more.

For easier reviewing I sent the patches for the QEMU source itself, if necessary
I can also apply them and then send pve-qemu patches including them as .patch
files.

Kevin Wolf (1):
  util/async: Add aio_co_reschedule_self()

Stefan Reiter (3):
  savevm-async: move more code to cleanup and rename to finalize
  savevm-async: flush IOThread-drives async before entering blocking
part
  savevm-async: add debug timing prints

 include/block/aio.h |  10 
 savevm-async.c  | 124 +++-
 util/async.c|  30 +++
 3 files changed, 129 insertions(+), 35 deletions(-)

-- 
2.20.1




Re: [pve-devel] [PATCH qemu-server 1/2] add support for nvme emulation

2020-05-13 Thread Stefan Reiter

Gave both patches a spin and they work fine on first glance.

We should probably assign a fixed PCI bus/addr to the NVMe devices 
though (same as we do for SCSI and AHCI hardware with print_pci_addr 
somewhere in the depths of config_to_command).


On 5/13/20 5:36 PM, Oguz Bektas wrote:

now we can add nvme drives;

nvme0: local-lvm:vm-103-disk-0,size=32G

max number is 8

Signed-off-by: Oguz Bektas 
---
  PVE/QemuServer.pm   | 20 +---
  PVE/QemuServer/Drive.pm | 21 +
  2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dcf05df..441d209 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -406,7 +406,7 @@ EODESC
optional => 1,
type => 'string', format => 'pve-qm-bootdisk',
description => "Enable booting from specified disk.",
-   pattern => '(ide|sata|scsi|virtio)\d+',
+   pattern => '(ide|sata|scsi|virtio|nvme)\d+',
  },
  smp => {
optional => 1,
@@ -1424,7 +1424,11 @@ sub print_drivedevice_full {
$device .= ",rotation_rate=1";
}
$device .= ",wwn=$drive->{wwn}" if $drive->{wwn};
-
+} elsif ($drive->{interface} eq 'nvme') {
+   my $maxdev = $PVE::QemuServer::Drive::MAX_NVME_DISKS;


$maxdev is not used anywhere?


+   my $path = $drive->{file};
+	$drive->{serial} = "$drive->{interface}$drive->{index}"; # serial is mandatory for nvme
+   $device = "nvme,drive=drive-$drive->{interface}$drive->{index}";
  } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
	my $maxdev = ($drive->{interface} eq 'sata') ? $PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
my $controller = int($drive->{index} / $maxdev);

>
> [...]



Re: [pve-devel] applied: [PATCH v2 qemu-server 2/6] api: allow listing custom and default CPU models

2020-05-06 Thread Stefan Reiter

On 5/6/20 4:52 PM, Thomas Lamprecht wrote:

On 5/4/20 12:58 PM, Stefan Reiter wrote:

More API calls will follow for this path, for now add the 'index' call to
list all custom and default CPU models.

Any user can list the default CPU models, as these are public anyway, but
custom models are restricted to users with Sys.Audit on /nodes.

Signed-off-by: Stefan Reiter 
---
  PVE/API2/Qemu/CPU.pm| 61 +
  PVE/API2/Qemu/Makefile  |  2 +-
  PVE/QemuServer/CPUConfig.pm | 30 ++
  3 files changed, 92 insertions(+), 1 deletion(-)
  create mode 100644 PVE/API2/Qemu/CPU.pm



applied, thanks!

Albeit API call shouldn't say that it's return values are links to API sub
directory/paths if they ain't..



Ah sorry, I removed this locally but must've forgotten to check it in.

It's probably best to remove this entirely for now, it's not just wrong 
now but also wrong wrt. my upcoming changes... (I put the individual 
queries under .../cpu/model/ , so I could also have 
.../cpu/supported-flags etc. without mixing dynamic and static paths)




[pve-devel] [PATCH qemu-server] cfg2cmd: fix uninitialized value warning on OVMF w/o efidisk

2020-05-06 Thread Stefan Reiter
It's possible to have a VM with OVMF but without an efidisk, so don't
call parse_drive on a potential undef value.

Partial revert of 818c3b8d91 ("cfg2cmd: ovmf: code cleanup")

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e9b094b..e76aee3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3084,7 +3084,8 @@ sub config_to_command {
die "uefi base image '$ovmf_code' not found\n" if ! -f $ovmf_code;
 
my ($path, $format);
-   if (my $d = parse_drive('efidisk0', $conf->{efidisk0})) {
+   if (my $efidisk = $conf->{efidisk0}) {
+   my $d = parse_drive('efidisk0', $efidisk);
	    my ($storeid, $volname) = PVE::Storage::parse_volume_id($d->{file}, 1);
$format = $d->{format};
if ($storeid) {
-- 
2.20.1




[pve-devel] [PATCH manager] gui: never collapse notes for templates

2020-05-06 Thread Stefan Reiter
There are no graphs on screen, so there is no reason to collapse the notes to save
space. Besides, it looked a bit funky expanding the notes on smaller
screens, since they always jumped to the bottom to fill the space...

Signed-off-by: Stefan Reiter 
---
 www/manager6/panel/NotesView.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/panel/NotesView.js b/www/manager6/panel/NotesView.js
index edf38e5d..90de7b5a 100644
--- a/www/manager6/panel/NotesView.js
+++ b/www/manager6/panel/NotesView.js
@@ -105,7 +105,7 @@ Ext.define('PVE.panel.NotesView', {
me.callParent();
if (type === 'node') {
me.down('#tbar').setVisible(true);
-   } else {
+   } else if (me.pveSelNode.data.template !== 1) {
me.setCollapsible(true);
me.collapseDirection = 'right';
 
-- 
2.20.1




[pve-devel] [PATCH v2 qemu-server] rng: die when trying to pass through disconnected hwrng

2020-05-05 Thread Stefan Reiter
If /dev/hwrng exists, but no actual generator is connected (or it is
disabled on the host), QEMU will happily start the VM but crash as soon
as the guest accesses the VirtIO RNG device.

To prevent this unfortunate behaviour, check if a usable hwrng is
connected to the host before allowing the VM to be started.

While at it, clean up config_to_command by moving new and existing rng
source checks to a separate sub.

Signed-off-by: Stefan Reiter 
---

v2: Move to sub, clean up, extend comment

 PVE/QemuServer.pm | 26 --
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cb96b71..e9b094b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3387,20 +3387,16 @@ sub config_to_command {
 
 my $rng = parse_rng($conf->{rng0}) if $conf->{rng0};
 if ($rng && &$version_guard(4, 1, 2)) {
+   check_rng_source($rng->{source});
+
my $max_bytes = $rng->{max_bytes} // $rng_fmt->{max_bytes}->{default};
my $period = $rng->{period} // $rng_fmt->{period}->{default};
-
my $limiter_str = "";
if ($max_bytes) {
$limiter_str = ",max-bytes=$max_bytes,period=$period";
}
 
-   # mostly relevant for /dev/hwrng, but doesn't hurt to check others too
-	die "cannot create VirtIO RNG device: source file '$rng->{source}' doesn't exist\n"
-   if ! -e $rng->{source};
-
my $rng_addr = print_pci_addr("rng0", $bridges, $arch, $machine_type);
-
push @$devices, '-object', "rng-random,filename=$rng->{source},id=rng0";
	push @$devices, '-device', "virtio-rng-pci,rng=rng0$limiter_str$rng_addr";
 }
@@ -3634,6 +3630,24 @@ sub config_to_command {
 return wantarray ? ($cmd, $vollist, $spice_port) : $cmd;
 }
 
+sub check_rng_source {
+my ($source) = @_;
+
+# mostly relevant for /dev/hwrng, but doesn't hurt to check others too
+die "cannot create VirtIO RNG device: source file '$source' doesn't exist\n"
+   if ! -e $source;
+
+my $rng_current = '/sys/devices/virtual/misc/hw_random/rng_current';
+if ($source eq '/dev/hwrng' && file_read_firstline($rng_current) eq 'none') {
+   # Needs to abort, otherwise QEMU crashes on first rng access.
+   # Note that rng_current cannot be changed to 'none' manually, so
+   # once the VM is past this point, it is no longer an issue.
+   die "Cannot start VM with passed-through RNG device: '/dev/hwrng'"
+   . " exists, but '$rng_current' is set to 'none'. Ensure that"
+   . " a compatible hardware-RNG is attached to the host.\n";
+}
+}
+
 sub spice_port {
 my ($vmid) = @_;
 
-- 
2.20.1


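The check the patch introduces can be modeled in Python with injectable paths, which makes both outcomes easy to exercise without real hardware. This is an illustrative sketch; the real implementation is the Perl sub above, and the sysfs path is stubbed out with a temp file here.

```python
import os
import tempfile

def check_rng_source(source, rng_current_path):
    """Rough model of the Perl check_rng_source(): refuse a hwrng source
    when the kernel reports that no hardware RNG backend is active."""
    if not os.path.exists(source):
        raise RuntimeError(f"source file '{source}' doesn't exist")
    if os.path.basename(source) == "hwrng":
        with open(rng_current_path) as f:
            if f.readline().strip() == "none":
                raise RuntimeError("hwrng exists, but rng_current is 'none'")

with tempfile.TemporaryDirectory() as d:
    hwrng = os.path.join(d, "hwrng")           # stand-in for /dev/hwrng
    open(hwrng, "w").close()
    cur = os.path.join(d, "rng_current")       # stand-in for the sysfs file

    with open(cur, "w") as f:
        f.write("none\n")
    try:
        check_rng_source(hwrng, cur)
        ok_when_none = True
    except RuntimeError:
        ok_when_none = False                   # expected: no backend active

    with open(cur, "w") as f:
        f.write("virtio_rng.0\n")
    check_rng_source(hwrng, cur)               # passes: a backend is active
    ok_when_active = True
```

The key property, mirrored from the patch, is that the mere existence of the device node is not enough: the reported current backend decides whether the VM may start.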


[pve-devel] [PATCH qemu-server] qemu/rng: die when trying to pass through disconnected hwrng

2020-05-05 Thread Stefan Reiter
If /dev/hwrng exists, but no actual generator is connected (or it is
disabled on the host), QEMU will happily start the VM but crash as soon
as the guest accesses the VirtIO RNG device.

To prevent this unfortunate behaviour, check if a usable hwrng is
connected to the host before allowing the VM to be started.

Signed-off-by: Stefan Reiter 
---

On a side note, 'file_read_firstline' was already imported from PVE::Tools but
never used. Saves a line in this patch, I suppose ;)

 PVE/QemuServer.pm | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cb96b71..6faa9cf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3399,6 +3399,16 @@ sub config_to_command {
	die "cannot create VirtIO RNG device: source file '$rng->{source}' doesn't exist\n"
if ! -e $rng->{source};
 
+   my $rng_select_path = '/sys/devices/virtual/misc/hw_random/rng_current';
+   if ($rng->{source} eq '/dev/hwrng' &&
+   file_read_firstline($rng_select_path) eq 'none')
+   {
+   # needs to abort, otherwise QEMU crashes on first rng access
+   die "Cannot start VM with passed-through RNG device: '/dev/hwrng'"
+ . " exists, but '$rng_select_path' is set to 'none'. Ensure that"
+ . " a compatible hardware-RNG is attached to the host.\n";
+   }
+
my $rng_addr = print_pci_addr("rng0", $bridges, $arch, $machine_type);
 
push @$devices, '-object', "rng-random,filename=$rng->{source},id=rng0";
-- 
2.20.1




[pve-devel] [PATCH common] fix #2696: avoid 'undefined value' warning in 'pvesh help'

2020-05-04 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 src/PVE/CLIHandler.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/CLIHandler.pm b/src/PVE/CLIHandler.pm
index 763cd60..9955d77 100644
--- a/src/PVE/CLIHandler.pm
+++ b/src/PVE/CLIHandler.pm
@@ -235,8 +235,8 @@ sub generate_usage_str {
 
}
} else {
+   $abort->("unknown command '$cmd->[0]'") if !$def;
	my ($class, $name, $arg_param, $fixed_param, undef, $formatter_properties) = @$def;
-   $abort->("unknown command '$cmd'") if !$class;
 
$str .= $indent;
	$str .= $class->usage_str($name, $prefix, $arg_param, $fixed_param, $format, $param_cb, $formatter_properties);
-- 
2.20.1




[pve-devel] [PATCH v2 manager 5/6] ui: CPUModelSelector: use API call for store

2020-05-04 Thread Stefan Reiter
CPU models are retrieved from the new /nodes/X/cpu call and ordered by
vendor to approximate the previous sort order (less change for accustomed
users).

With this, custom CPU models are now selectable via the GUI.

Signed-off-by: Stefan Reiter 
---

v2:
* Put vendor map and order map into PVE.Utils
* Add gettext for 'Custom'

I thought about the sorting method with the 'calculated' field Dominik
suggested, but I felt it just made the code a bit more confusing (since it
splits the ordering stuff across the file). Left it as is for now.

 www/manager6/Utils.js |  14 ++
 www/manager6/form/CPUModelSelector.js | 209 +-
 2 files changed, 83 insertions(+), 140 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 872b7c29..4301f00e 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1454,6 +1454,20 @@ Ext.define('PVE.Utils', { utilities: {
}
});
 },
+
+cpu_vendor_map: {
+   'default': 'QEMU',
+   'AuthenticAMD': 'AMD',
+   'GenuineIntel': 'Intel'
+},
+
+cpu_vendor_order: {
+   "AMD": 1,
+   "Intel": 2,
+   "QEMU": 3,
+   "Host": 4,
+   "_default_": 5, // includes custom models
+},
 },
 
 singleton: true,
diff --git a/www/manager6/form/CPUModelSelector.js b/www/manager6/form/CPUModelSelector.js
index 1d28ee88..deef23fb 100644
--- a/www/manager6/form/CPUModelSelector.js
+++ b/www/manager6/form/CPUModelSelector.js
@@ -1,9 +1,19 @@
+Ext.define('PVE.data.CPUModel', {
+extend: 'Ext.data.Model',
+fields: [
+   {name: 'name'},
+   {name: 'vendor'},
+   {name: 'custom'},
+   {name: 'displayname'}
+]
+});
+
 Ext.define('PVE.form.CPUModelSelector', {
 extend: 'Proxmox.form.ComboGrid',
 alias: ['widget.CPUModelSelector'],
 
-valueField: 'value',
-displayField: 'value',
+valueField: 'name',
+displayField: 'displayname',
 
 emptyText: Proxmox.Utils.defaultText + ' (kvm64)',
 allowBlank: true,
@@ -19,157 +29,76 @@ Ext.define('PVE.form.CPUModelSelector', {
columns: [
{
header: gettext('Model'),
-   dataIndex: 'value',
+   dataIndex: 'displayname',
hideable: false,
sortable: true,
-   flex: 2
+   flex: 3
},
{
header: gettext('Vendor'),
dataIndex: 'vendor',
hideable: false,
sortable: true,
-   flex: 1
+   flex: 2
}
],
-   width: 320
+   width: 360
 },
 
 store: {
-   fields: [ 'value', 'vendor' ],
-   data: [
-   {
-   value: 'athlon',
-   vendor: 'AMD'
-   },
-   {
-   value: 'phenom',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G1',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G2',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G3',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G4',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G5',
-   vendor: 'AMD'
-   },
-   {
-   value: 'EPYC',
-   vendor: 'AMD'
-   },
-   {
-   value: '486',
-   vendor: 'Intel'
-   },
-   {
-   value: 'core2duo',
-   vendor: 'Intel'
-   },
-   {
-   value: 'coreduo',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium2',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium3',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Conroe',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Penryn',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Nehalem',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Westmere',
-   vendor: 'Intel'
-   },
-   {
-   value: 'SandyBridge',
-   vendor: 'Intel'
-   },
-   {
-   value: 'IvyBridge',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Haswell',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Haswell-noTSX',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Broadwell',
-   vendor: 'Intel'
-   },
-   {
-   value:

[pve-devel] [PATCH v2 qemu-server 1/6] api: check Sys.Audit permissions when setting a custom CPU model

2020-05-04 Thread Stefan Reiter
Explicitly allow changing properties other than the cputype, even if
the currently set cputype is not accessible to the user. This way, an
administrator can assign a custom CPU type to a VM for a less privileged
user without breaking edit functionality for them.
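The access rule described above (implemented in Perl as `$check_cpu_model_access`) can be summarized as a small decision function. The sketch below is an illustrative JavaScript re-implementation, with the `$rpcenv->check` call reduced to a boolean result; it is not the actual code.

```javascript
// Illustrative re-implementation of the access rule described above.
// The real code is Perl and raises via $rpcenv->check(); here we just
// return whether Sys.Audit on /nodes would be required.
function needsSysAudit(newCputype, existingCputype) {
    if (!newCputype) {
        return false; // no cputype set: default model, always allowed
    }
    if (existingCputype && existingCputype === newCputype) {
        return false; // cputype unchanged: other settings may be edited
    }
    // only custom models (prefixed with 'custom-') are restricted
    return newCputype.startsWith('custom-');
}
```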

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 25 +
 1 file changed, 25 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 7ffc538..4cf0a11 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,6 +21,7 @@ use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Drive;
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuMigrate;
 use PVE::RPCEnvironment;
@@ -236,6 +237,26 @@ my $create_disks = sub {
 return $vollist;
 };
 
+my $check_cpu_model_access = sub {
+my ($rpcenv, $authuser, $new, $existing) = @_;
+
+return if !defined($new->{cpu});
+
+my $cpu = PVE::JSONSchema::check_format('pve-vm-cpu-conf', $new->{cpu});
+return if !$cpu || !$cpu->{cputype}; # always allow default
+my $cputype = $cpu->{cputype};
+
+if ($existing && $existing->{cpu}) {
+   # changing only other settings doesn't require permissions for CPU model
+   my $existingCpu = PVE::JSONSchema::check_format('pve-vm-cpu-conf', 
$existing->{cpu});
+   return if $existingCpu->{cputype} eq $cputype;
+}
+
+if (PVE::QemuServer::CPUConfig::is_custom_model($cputype)) {
+   $rpcenv->check($authuser, "/nodes", ['Sys.Audit']);
+}
+};
+
 my $cpuoptions = {
 'cores' => 1,
 'cpu' => 1,
@@ -543,6 +564,8 @@ __PACKAGE__->register_method({
 
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ 
keys %$param]);
 
+   &$check_cpu_model_access($rpcenv, $authuser, $param);
+
foreach my $opt (keys %$param) {
if (PVE::QemuServer::is_valid_drivename($opt)) {
my $drive = PVE::QemuServer::parse_drive($opt, 
$param->{$opt});
@@ -1122,6 +1145,8 @@ my $update_vm_api  = sub {
die "checksum missmatch (file change by other user?)\n"
if $digest && $digest ne $conf->{digest};
 
+   &$check_cpu_model_access($rpcenv, $authuser, $param, $conf);
+
# FIXME: 'suspended' lock should probabyl be a state or "weak" lock?!
if (scalar(@delete) && grep { $_ eq 'vmstate'} @delete) {
if (defined($conf->{lock}) && $conf->{lock} eq 'suspended') {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 0/6] Custom CPU models API/GUI basics

2020-05-04 Thread Stefan Reiter
Permission handling, the beginnings of the API and getting the GUI to play nice
with custom models (no editor yet, but it'll behave as expected if a determined
user creates a custom model by editing the config).

First 3 patches are API stuff, 4 is an independent UI fix/cleanup, rest are new
GUI stuff.

v1 -> v2:
* fix things noted in Dominik's review (see notes on patches)


qemu-server: Stefan Reiter (2):
  api: check Sys.Audit permissions when setting a custom CPU model
  api: allow listing custom and default CPU models

 PVE/API2/Qemu.pm| 25 +++
 PVE/API2/Qemu/CPU.pm| 61 +
 PVE/API2/Qemu/Makefile  |  2 +-
 PVE/QemuServer/CPUConfig.pm | 30 ++
 4 files changed, 117 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/CPU.pm

manager: Stefan Reiter (4):
  api: register /nodes/X/cpu call for CPU models
  ui: ProcessorEdit: fix total core calculation and use view model
  ui: CPUModelSelector: use API call for store
  ui: ProcessorEdit: allow modifications with inaccessible CPU model

 PVE/API2/Nodes.pm |   7 +
 www/manager6/Utils.js |  14 ++
 www/manager6/form/CPUModelSelector.js | 209 +-
 www/manager6/qemu/ProcessorEdit.js| 104 +
 4 files changed, 168 insertions(+), 166 deletions(-)

-- 
2.20.1



[pve-devel] [PATCH v2 qemu-server 2/6] api: allow listing custom and default CPU models

2020-05-04 Thread Stefan Reiter
More API calls will follow for this path; for now, add the 'index' call to
list all custom and default CPU models.

Any user can list the default CPU models, as these are public anyway, but
custom models are restricted to users with Sys.Audit on /nodes.
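As an illustration of the merge logic in `get_cpu_models` (Perl in the actual patch), here is a rough JavaScript equivalent; the vendor lookup via 'reported-model' is simplified to a plain name-to-vendor map, and the model data in the example is made up.

```javascript
// Sketch of get_cpu_models(): default models are always returned, custom
// models only when the caller has Sys.Audit on /nodes. 'customModels'
// stands in for the parsed custom model config (simplified here).
function getCpuModels(defaultModels, customModels, includeCustom) {
    const models = Object.entries(defaultModels)
        .map(([name, vendor]) => ({ name, custom: 0, vendor }));
    if (!includeCustom) {
        return models;
    }
    for (const [name, vendor] of Object.entries(customModels)) {
        // custom models are exposed with a 'custom-' name prefix
        models.push({ name: 'custom-' + name, custom: 1, vendor });
    }
    return models;
}
```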

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu/CPU.pm| 61 +
 PVE/API2/Qemu/Makefile  |  2 +-
 PVE/QemuServer/CPUConfig.pm | 30 ++
 3 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/CPU.pm

diff --git a/PVE/API2/Qemu/CPU.pm b/PVE/API2/Qemu/CPU.pm
new file mode 100644
index 000..b0bb32d
--- /dev/null
+++ b/PVE/API2/Qemu/CPU.pm
@@ -0,0 +1,61 @@
+package PVE::API2::Qemu::CPU;
+
+use strict;
+use warnings;
+
+use PVE::RESTHandler;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::CPUConfig;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method({
+name => 'index',
+path => '',
+method => 'GET',
+description => 'List all custom and default CPU models.',
+permissions => {
+   user => 'all',
+   description => 'Only returns custom models when the current user has'
+. ' Sys.Audit on /nodes.',
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => 'object',
+   properties => {
+   name => {
+   type => 'string',
+   description => "Name of the CPU model. Identifies it for"
+. " subsequent API calls. Prefixed with"
+. " 'custom-' for custom models.",
+   },
+   custom => {
+   type => 'boolean',
+   description => "True if this is a custom CPU model.",
+   },
+   vendor => {
+   type => 'string',
+   description => "CPU vendor visible to the guest when this"
+. " model is selected. Vendor of"
+. " 'reported-model' in case of custom 
models.",
+   },
+   },
+   },
+   links => [ { rel => 'child', href => '{name}' } ],
+},
+code => sub {
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+   my $include_custom = $rpcenv->check($authuser, "/nodes", ['Sys.Audit'], 
1);
+
+   return PVE::QemuServer::CPUConfig::get_cpu_models($include_custom);
+}});
+
+1;
diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index 20c2a6c..f4b7be6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm
+SOURCES=Agent.pm CPU.pm
 
 .PHONY: install
 install:
diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 61744dc..b884498 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -293,6 +293,36 @@ sub write_config {
 $class->SUPER::write_config($filename, $cfg);
 }
 
+sub get_cpu_models {
+my ($include_custom) = @_;
+
+my $models = [];
+
+for my $default_model (keys %{$cpu_vendor_list}) {
+   push @$models, {
+   name => $default_model,
+   custom => 0,
+   vendor => $cpu_vendor_list->{$default_model},
+   };
+}
+
+return $models if !$include_custom;
+
+my $conf = load_custom_model_conf();
+for my $custom_model (keys %{$conf->{ids}}) {
+   my $reported_model = $conf->{ids}->{$custom_model}->{'reported-model'};
+   $reported_model //= $cpu_fmt->{'reported-model'}->{default};
+   my $vendor = $cpu_vendor_list->{$reported_model};
+   push @$models, {
+   name => "custom-$custom_model",
+   custom => 1,
+   vendor => $vendor,
+   };
+}
+
+return $models;
+}
+
 sub is_custom_model {
 my ($cputype) = @_;
 return $cputype =~ m/^custom-/;
-- 
2.20.1




[pve-devel] [PATCH v2 manager 6/6] ui: ProcessorEdit: allow modifications with inaccessible CPU model

2020-05-04 Thread Stefan Reiter
An administrator can set a custom CPU model for a VM even if the general
user does not have permission to use that particular model. Prior to this
change, the ProcessorEdit component was broken in this case, since the
store of the CPU type selector did not contain the configured CPU model.

Add it manually if this situation occurs (with 'Unknown' vendor, since we
cannot retrieve it from the API), but warn the user that changing the type
is an irreversible action.
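The fallback store entry described above can be sketched as follows (illustrative JavaScript; field names follow the patch, and the plain 'Unknown' string stands in for the gettext call):

```javascript
// Builds the placeholder record added to the CPU type store when the
// configured model is not in the API response (no permission to see it).
function fallbackEntry(type) {
    return {
        name: type,
        // anchored regex: only a leading 'custom-' prefix is stripped
        displayname: type.replace(/^custom-/, ''),
        custom: 1,
        vendor: 'Unknown', // real code uses gettext("Unknown")
    };
}
```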

Signed-off-by: Stefan Reiter 
---

v2:
* Anchor displayname-regex with ^

 www/manager6/qemu/ProcessorEdit.js | 49 ++
 1 file changed, 49 insertions(+)

diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index f437a82c..e65631e1 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -9,6 +9,7 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
data: {
socketCount: 1,
coreCount: 1,
+   showCustomModelPermWarning: false,
},
formulas: {
totalCoreCount: get => get('socketCount') * get('coreCount'),
@@ -66,6 +67,33 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
return values;
 },
 
+setValues: function(values) {
+   let me = this;
+
+   let type = values.cputype;
+   let typeSelector = me.lookupReference('cputype');
+   let typeStore = typeSelector.getStore();
+   typeStore.on('load', (store, records, success) => {
+   if (!success || !type || records.some(x => x.data.name === type)) {
+   return;
+   }
+
+   // if we get here, a custom CPU model is selected for the VM but we
+   // don't have permission to configure it - it will not be in the
+   // list retrieved from the API, so add it manually to allow changing
+   // other processor options
+   typeStore.add({
+   name: type,
+   displayname: type.replace(/^custom-/, ''),
+   custom: 1,
+   vendor: gettext("Unknown"),
+   });
+   typeSelector.select(type);
+   });
+
+   me.callParent([values]);
+},
+
 cpu: {},
 
 column1: [
@@ -99,6 +127,7 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
{
xtype: 'CPUModelSelector',
name: 'cputype',
+   reference: 'cputype',
fieldLabel: gettext('Type')
},
{
@@ -112,6 +141,18 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
},
 ],
 
+columnB: [
+   {
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   value: gettext('WARNING: You do not have permission to configure 
custom CPU types, if you change the type you will not be able to go back!'),
+   hidden: true,
+   bind: {
+   hidden: '{!showCustomModelPermWarning}',
+   },
+   },
+],
+
 advancedColumn1: [
{
xtype: 'proxmoxintegerfield',
@@ -199,6 +240,14 @@ Ext.define('PVE.qemu.ProcessorEdit', {
if (cpu.flags) {
data.flags = cpu.flags;
}
+
+   let caps = Ext.state.Manager.get('GuiCap');
+   if (data.cputype.indexOf('custom-') === 0 &&
+   !caps.nodes['Sys.Audit'])
+   {
+   let vm = ipanel.getViewModel();
+   vm.set("showCustomModelPermWarning", true);
+   }
}
me.setValues(data);
}
-- 
2.20.1




[pve-devel] [PATCH v2 manager 3/6] api: register /nodes/X/cpu call for CPU models

2020-05-04 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Depends on updated qemu-server.

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 58497b2b..2ac838ea 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -34,6 +34,7 @@ use PVE::API2::Tasks;
 use PVE::API2::Scan;
 use PVE::API2::Storage::Status;
 use PVE::API2::Qemu;
+use PVE::API2::Qemu::CPU;
 use PVE::API2::LXC;
 use PVE::API2::LXC::Status;
 use PVE::API2::VZDump;
@@ -65,6 +66,11 @@ __PACKAGE__->register_method ({
 path => 'qemu',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Qemu::CPU",
+path => 'cpu',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::LXC",
 path => 'lxc',
@@ -241,6 +247,7 @@ __PACKAGE__->register_method ({
{ name => 'certificates' },
{ name => 'config' },
{ name => 'hosts' },
+   { name => 'cpu' },
];
 
push @$result, { name => 'sdn' } if $have_sdn;
-- 
2.20.1




[pve-devel] [PATCH v2 manager 4/6] ui: ProcessorEdit: fix total core calculation and use view model

2020-05-04 Thread Stefan Reiter
Clean up the code in ProcessorEdit with a view model and fix a bug while
at it: previously, pressing the 'Reset' button on the form would always
reset the total core count field to 1. Mark 'totalcores' with
'isFormField: false' to avoid this.
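The view-model formula introduced here is a simple derived value; as a plain-JavaScript sketch (ExtJS binds this declaratively, names follow the patch):

```javascript
// 'totalCoreCount' as computed by the view model formula, plus the
// derived limits bound to the VCPUs field in the advanced column.
function totalCoreCount(socketCount, coreCount) {
    return socketCount * coreCount;
}

function vcpusField(socketCount, coreCount) {
    const total = totalCoreCount(socketCount, coreCount);
    return { maxValue: total, emptyText: String(total) };
}
```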

Signed-off-by: Stefan Reiter 
---

v2:
* Use 'isFormField: false' and drop setValues

 www/manager6/qemu/ProcessorEdit.js | 55 --
 1 file changed, 29 insertions(+), 26 deletions(-)

diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index bc17e152..f437a82c 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -5,28 +5,18 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
 
 insideWizard: false,
 
-controller: {
-   xclass: 'Ext.app.ViewController',
-
-   updateCores: function() {
-   var me = this.getView();
-   var sockets = me.down('field[name=sockets]').getValue();
-   var cores = me.down('field[name=cores]').getValue();
-   me.down('field[name=totalcores]').setValue(sockets*cores);
-   var vcpus = me.down('field[name=vcpus]');
-   vcpus.setMaxValue(sockets*cores);
-   vcpus.setEmptyText(sockets*cores);
-   vcpus.validate();
+viewModel: {
+   data: {
+   socketCount: 1,
+   coreCount: 1,
},
+   formulas: {
+   totalCoreCount: get => get('socketCount') * get('coreCount'),
+   },
+},
 
-   control: {
-   'field[name=sockets]': {
-   change: 'updateCores'
-   },
-   'field[name=cores]': {
-   change: 'updateCores'
-   }
-   }
+controller: {
+   xclass: 'Ext.app.ViewController',
 },
 
 onGetValues: function(values) {
@@ -86,7 +76,10 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
maxValue: 4,
value: '1',
fieldLabel: gettext('Sockets'),
-   allowBlank: false
+   allowBlank: false,
+   bind: {
+   value: '{socketCount}',
+   },
},
{
xtype: 'proxmoxintegerfield',
@@ -95,8 +88,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
maxValue: 128,
value: '1',
fieldLabel: gettext('Cores'),
-   allowBlank: false
-   }
+   allowBlank: false,
+   bind: {
+   value: '{coreCount}',
+   },
+   },
 ],
 
 column2: [
@@ -109,8 +105,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
xtype: 'displayfield',
fieldLabel: gettext('Total cores'),
name: 'totalcores',
-   value: '1'
-   }
+   isFormField: false,
+   bind: {
+   value: '{totalCoreCount}',
+   },
+   },
 ],
 
 advancedColumn1: [
@@ -123,7 +122,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
fieldLabel: gettext('VCPUs'),
deleteEmpty: true,
allowBlank: true,
-   emptyText: '1'
+   emptyText: '1',
+   bind: {
+   emptyText: '{totalCoreCount}',
+   maxValue: '{totalCoreCount}',
+   },
},
{
xtype: 'numberfield',
-- 
2.20.1




Re: [pve-devel] [PATCH qemu 2/2] add optional buffer size to QEMUFile

2020-05-04 Thread Stefan Reiter

On 5/4/20 12:02 PM, Wolfgang Bumiller wrote:

and use 4M for our savevm-async buffer size

Signed-off-by: Wolfgang Bumiller 
---
  ...add-optional-buffer-size-to-QEMUFile.patch | 173 ++
  debian/patches/series |   1 +
  2 files changed, 174 insertions(+)
  create mode 100644 
debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch

diff --git a/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch 
b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
new file mode 100644
index 000..d990582
--- /dev/null
+++ b/debian/patches/pve/0044-add-optional-buffer-size-to-QEMUFile.patch
@@ -0,0 +1,173 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Mon, 4 May 2020 11:05:08 +0200
+Subject: [PATCH] add optional buffer size to QEMUFile
+
+So we can use a 4M buffer for savevm-async which should
+increase performance storing the state onto ceph.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ migration/qemu-file.c | 33 +
+ migration/qemu-file.h |  1 +
+ savevm-async.c|  4 ++--
+ 3 files changed, 24 insertions(+), 14 deletions(-)
+
+diff --git a/migration/qemu-file.c b/migration/qemu-file.c
+index 1c3a358a14..b595d0ba34 100644
+--- a/migration/qemu-file.c
 b/migration/qemu-file.c
+@@ -30,7 +30,7 @@
+ #include "trace.h"
+ #include "qapi/error.h"
+
+-#define IO_BUF_SIZE 32768
++#define DEFAULT_IO_BUF_SIZE 32768
+ #define MAX_IOV_SIZE MIN(IOV_MAX, 64)
+
+ struct QEMUFile {
+@@ -45,7 +45,8 @@ struct QEMUFile {
+ when reading */
+ int buf_index;
+ int buf_size; /* 0 when writing */
+-uint8_t buf[IO_BUF_SIZE];
++size_t buf_allocated_size;
++uint8_t *buf;
+
+ DECLARE_BITMAP(may_free, MAX_IOV_SIZE);
+ struct iovec iov[MAX_IOV_SIZE];
+@@ -101,7 +102,7 @@ bool qemu_file_mode_is_not_valid(const char *mode)
+ return false;
+ }
+
+-QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++QEMUFile *qemu_fopen_ops_sized(void *opaque, const QEMUFileOps *ops, size_t 
buffer_size)
+ {
+ QEMUFile *f;
+
+@@ -109,9 +110,17 @@ QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps 
*ops)
+
+ f->opaque = opaque;
+ f->ops = ops;
++f->buf_allocated_size = buffer_size;
++f->buf = malloc(buffer_size);


Does this not require an explicit 'free' somewhere? E.g. qemu_fclose?


++
+ return f;
+ }
+
++QEMUFile *qemu_fopen_ops(void *opaque, const QEMUFileOps *ops)
++{
++return qemu_fopen_ops_sized(opaque, ops, DEFAULT_IO_BUF_SIZE);
++}
++
+
+ void qemu_file_set_hooks(QEMUFile *f, const QEMUFileHooks *hooks)
+ {
+@@ -346,7 +355,7 @@ static ssize_t qemu_fill_buffer(QEMUFile *f)
+ }
+
+ len = f->ops->get_buffer(f->opaque, f->buf + pending, f->pos,
+- IO_BUF_SIZE - pending, _error);
++ f->buf_allocated_size - pending, _error);
+ if (len > 0) {
+ f->buf_size += len;
+ f->pos += len;
+@@ -435,7 +444,7 @@ static void add_buf_to_iovec(QEMUFile *f, size_t len)
+ {
+ if (!add_to_iovec(f, f->buf + f->buf_index, len, false)) {
+ f->buf_index += len;
+-if (f->buf_index == IO_BUF_SIZE) {
++if (f->buf_index == f->buf_allocated_size) {
+ qemu_fflush(f);
+ }
+ }
+@@ -461,7 +470,7 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, 
size_t size)
+ }
+
+ while (size > 0) {
+-l = IO_BUF_SIZE - f->buf_index;
++l = f->buf_allocated_size - f->buf_index;
+ if (l > size) {
+ l = size;
+ }
+@@ -508,8 +517,8 @@ size_t qemu_peek_buffer(QEMUFile *f, uint8_t **buf, size_t 
size, size_t offset)
+ size_t index;
+
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
+-assert(size <= IO_BUF_SIZE - offset);
++assert(offset < f->buf_allocated_size);
++assert(size <= f->buf_allocated_size - offset);
+
+ /* The 1st byte to read from */
+ index = f->buf_index + offset;
+@@ -559,7 +568,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+ size_t res;
+ uint8_t *src;
+
+-res = qemu_peek_buffer(f, , MIN(pending, IO_BUF_SIZE), 0);
++res = qemu_peek_buffer(f, , MIN(pending, f->buf_allocated_size), 
0);
+ if (res == 0) {
+ return done;
+ }
+@@ -593,7 +602,7 @@ size_t qemu_get_buffer(QEMUFile *f, uint8_t *buf, size_t 
size)
+  */
+ size_t qemu_get_buffer_in_place(QEMUFile *f, uint8_t **buf, size_t size)
+ {
+-if (size < IO_BUF_SIZE) {
++if (size < f->buf_allocated_size) {
+ size_t res;
+ uint8_t *src;
+
+@@ -618,7 +627,7 @@ int qemu_peek_byte(QEMUFile *f, int offset)
+ int index = f->buf_index + offset;
+
+ assert(!qemu_file_is_writable(f));
+-assert(offset < IO_BUF_SIZE);
++assert(offset < f->buf_allocated_size);
+
+ if (index >= f->buf_size) {
+ qemu_fill_buffer(f);

Re: [pve-devel] [PATCH qemu 1/2] experimentally move savevm-async back into a coroutine

2020-05-04 Thread Stefan Reiter

Fixes the SIGSEGV issues on Ceph with snapshot and rollback for me, so:

Tested-by: Stefan Reiter 

Just for reference: I also bisected the bug this fixes to upstream
commit 8c6b0356b53 ("util/async: make bh_aio_poll() O(1)"), i.e. it only
breaks after that commit. It might be an upstream bug somewhere as well,
but I don't see an issue with doing this in a coroutine either.


See also inline.

On 5/4/20 12:02 PM, Wolfgang Bumiller wrote:

Move qemu_savevm_state_{header,setup} into the main loop and
the rest of the iteration into a coroutine. The former need
to lock the iothread (and we can't unlock it in the
coroutine), and the latter can't deal with being in a
separate thread, so a coroutine it must be.

Signed-off-by: Wolfgang Bumiller 
---
  ...e-savevm-async-back-into-a-coroutine.patch | 111 ++
  debian/patches/series |   1 +
  2 files changed, 112 insertions(+)
  create mode 100644 
debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch

diff --git 
a/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch 
b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
new file mode 100644
index 000..f4945db
--- /dev/null
+++ b/debian/patches/pve/0043-move-savevm-async-back-into-a-coroutine.patch
@@ -0,0 +1,111 @@
+From  Mon Sep 17 00:00:00 2001
+From: Wolfgang Bumiller 
+Date: Thu, 30 Apr 2020 15:55:37 +0200
+Subject: [PATCH] move savevm-async back into a coroutine
+
+Move qemu_savevm_state_{header,setup} into the main loop and
+the rest of the iteration into a coroutine. The former need
+to lock the iothread (and we can't unlock it in the
+coroutine), and the latter can't deal with being in a
+separate thread, so a coroutine it must be.
+
+Signed-off-by: Wolfgang Bumiller 
+---
+ savevm-async.c | 28 +---
+ 1 file changed, 9 insertions(+), 19 deletions(-)
+
+diff --git a/savevm-async.c b/savevm-async.c
+index a38b15d652..af865b9a0a 100644
+--- a/savevm-async.c
 b/savevm-async.c
+@@ -51,7 +51,7 @@ static struct SnapshotState {
+ QEMUFile *file;
+ int64_t total_time;
+ QEMUBH *cleanup_bh;
+-QemuThread thread;
++Coroutine *co;
+ } snap_state;
+
+ SaveVMInfo *qmp_query_savevm(Error **errp)
+@@ -201,11 +201,9 @@ static void process_savevm_cleanup(void *opaque)
+ int ret;
+ qemu_bh_delete(snap_state.cleanup_bh);
+ snap_state.cleanup_bh = NULL;
++snap_state.co = NULL;
+ qemu_savevm_state_cleanup();
+
+-qemu_mutex_unlock_iothread();
+-qemu_thread_join(_state.thread);
+-qemu_mutex_lock_iothread();
+ ret = save_snapshot_cleanup();
+ if (ret < 0) {
+ save_snapshot_error("save_snapshot_cleanup error %d", ret);
+@@ -221,18 +219,13 @@ static void process_savevm_cleanup(void *opaque)
+ }
+ }
+
+-static void *process_savevm_thread(void *opaque)
++static void process_savevm_coro(void *opaque)
+ {
+ int ret;
+ int64_t maxlen;
+ MigrationState *ms = migrate_get_current();
+
+-rcu_register_thread();
+-
+-qemu_savevm_state_header(snap_state.file);
+-qemu_savevm_state_setup(snap_state.file);
+ ret = qemu_file_get_error(snap_state.file);
+-
+ if (ret < 0) {
+ save_snapshot_error("qemu_savevm_state_setup failed");
+ goto out;
+@@ -247,16 +240,13 @@ static void *process_savevm_thread(void *opaque)
+ maxlen = blk_getlength(snap_state.target) - 30*1024*1024;
+
+ if (pending_size > 40 && snap_state.bs_pos + pending_size < 
maxlen) {
+-qemu_mutex_lock_iothread();
+ ret = qemu_savevm_state_iterate(snap_state.file, false);
+ if (ret < 0) {
+ save_snapshot_error("qemu_savevm_state_iterate error %d", 
ret);
+ break;
+ }
+-qemu_mutex_unlock_iothread();
+ DPRINTF("savevm inerate pending size %lu ret %d\n", pending_size, 
ret);
+ } else {
+-qemu_mutex_lock_iothread();
+ qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER, NULL);
+ ret = global_state_store();
+ if (ret) {
+@@ -285,16 +275,12 @@ static void *process_savevm_thread(void *opaque)
+ }
+
+ qemu_bh_schedule(snap_state.cleanup_bh);
+-qemu_mutex_unlock_iothread();
+
+ out:
+ /* set migration state accordingly and clear soon-to-be stale file */
+ migrate_set_state(>state, MIGRATION_STATUS_SETUP,
+   ret ? MIGRATION_STATUS_FAILED : 
MIGRATION_STATUS_COMPLETED);
+ ms->to_dst_file = NULL;
+-
+-rcu_unregister_thread();
+-return NULL;
+ }
+
+ void qmp_savevm_start(bool has_statefile, const char *statefile, Error **errp)
+@@ -373,8 +359,12 @@ void qmp_savevm_start(bool has_statefile, const char 
*statefile, Error **errp)
+
+ snap_state.state = SAVE_STATE_ACTIVE;
+ snap_state.cleanup_bh = qemu_bh_new(process

[pve-devel] [PATCH v2 manager] HDEdit: warn when IO threads are enabled with incompatible controller

2020-04-27 Thread Stefan Reiter
The only warning displayed before was in the "VM Start" task log, where
it was easily missed. In the wizard we already auto-select the correct
controller, but not when modifying a disk on an existing VM.

Don't break existing behaviour (users can still disable IO threads for
VMs that have the flag set with an incompatible controller), but do warn
them that the setting will be ignored.
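The visibility rule for the new hint can be sketched as follows (illustrative JavaScript mirroring the condition in `updateIOThreadHint` below):

```javascript
// The hint is shown only when all of these hold: not inside the wizard,
// IO threads enabled, a SCSI disk, and a controller model other than
// 'virtio-scsi-single' (the one SCSI controller that supports IO threads).
function showIOThreadHint(insideWizard, iothread, diskInterface, scsihw) {
    return !insideWizard && !!iothread
        && diskInterface === 'scsi'
        && scsihw !== 'virtio-scsi-single';
}
```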

Signed-off-by: Stefan Reiter 
---

v2:
* do all checks in updateIOThreadHint
* check insideWizard as part of 'let visible' to avoid extra 'if'
* view.drive is not set when adding a new disk to a VM, so get the controller
  from 'controller' field directly then

 www/manager6/qemu/HDEdit.js | 25 -
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js
index fd890600..4dc10dc6 100644
--- a/www/manager6/qemu/HDEdit.js
+++ b/www/manager6/qemu/HDEdit.js
@@ -35,6 +35,20 @@ Ext.define('PVE.qemu.HDInputPanel', {
this.lookup('scsiController').setVisible(value.match(/^scsi/));
},
 
+   updateIOThreadHint: function(iothread) {
+   let me = this;
+   let view = me.getView();
+   let hint = me.lookupReference('iothreadHint');
+
+   let interface = view.drive.interface ||
+   view.down('field[name=controller]').getValue();
+   let visible = !view.insideWizard && iothread &&
+   interface === 'scsi' &&
+   view.vmconfig.scsihw !== 'virtio-scsi-single';
+
+   hint.setVisible(visible);
+   },
+
control: {
'field[name=controller]': {
change: 'onControllerChange',
@@ -42,6 +56,8 @@ Ext.define('PVE.qemu.HDInputPanel', {
},
'field[name=iothread]' : {
change: function(f, value) {
+   this.updateIOThreadHint(value);
+
if (!this.getView().insideWizard) {
return;
}
@@ -285,7 +301,14 @@ Ext.define('PVE.qemu.HDInputPanel', {
fieldLabel: gettext('Write limit') + ' (ops/s)',
labelWidth: labelWidth,
emptyText: gettext('unlimited')
-   }
+   },
+   {
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   reference: 'iothreadHint',
+   value: gettext("IO threads will only be used with VirtIO Block 
disks or when using the 'VirtIO SCSI single' controller model!"),
+   hidden: true,
+   },
);
 
me.advancedColumn2.push(
-- 
2.26.2




[pve-devel] [PATCH qemu-server 1/6] api: check Sys.Audit permissions when setting a custom CPU model

2020-04-22 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm | 25 +
 1 file changed, 25 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ec4c18c..fe8c06d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -21,6 +21,7 @@ use PVE::GuestHelpers;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Drive;
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuMigrate;
 use PVE::RPCEnvironment;
@@ -236,6 +237,26 @@ my $create_disks = sub {
 return $vollist;
 };
 
+my $check_cpu_model_access = sub {
+my ($rpcenv, $authuser, $new, $existing) = @_;
+
+return if !defined($new->{cpu});
+
+my $cpu = PVE::JSONSchema::check_format('pve-vm-cpu-conf', $new->{cpu});
+return if !$cpu || !$cpu->{cputype}; # always allow default
+my $cputype = $cpu->{cputype};
+
+if ($existing && $existing->{cpu}) {
+   # changing only other settings doesn't require permissions for CPU model
+   my $existingCpu = PVE::JSONSchema::check_format('pve-vm-cpu-conf', 
$existing->{cpu});
+   return if $existingCpu->{cputype} eq $cputype;
+}
+
+if (PVE::QemuServer::CPUConfig::is_custom_model($cputype)) {
+   $rpcenv->check($authuser, "/nodes", ['Sys.Audit']);
+}
+};
+
 my $cpuoptions = {
 'cores' => 1,
 'cpu' => 1,
@@ -543,6 +564,8 @@ __PACKAGE__->register_method({
 
&$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ 
keys %$param]);
 
+   &$check_cpu_model_access($rpcenv, $authuser, $param);
+
foreach my $opt (keys %$param) {
if (PVE::QemuServer::is_valid_drivename($opt)) {
my $drive = PVE::QemuServer::parse_drive($opt, 
$param->{$opt});
@@ -1122,6 +1145,8 @@ my $update_vm_api  = sub {
die "checksum missmatch (file change by other user?)\n"
if $digest && $digest ne $conf->{digest};
 
+   &$check_cpu_model_access($rpcenv, $authuser, $param, $conf);
+
# FIXME: 'suspended' lock should probabyl be a state or "weak" lock?!
if (scalar(@delete) && grep { $_ eq 'vmstate'} @delete) {
if (defined($conf->{lock}) && $conf->{lock} eq 'suspended') {
-- 
2.26.2




[pve-devel] [PATCH manager 5/6] ui: CPUModelSelector: use API call for store

2020-04-22 Thread Stefan Reiter
CPU models are retrieved from the new /nodes/X/cpu call and ordered by
vendor to approximate the previous sort order (less change for accustomed
users).

With this, custom CPU models are now selectable via the GUI.
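The vendor ordering is data-driven via the `cpu_vendor_order` map added to Utils.js in this series; a sketch of the resulting comparator (illustrative — the tie-break by model name within a vendor is an assumption, not necessarily what the store does):

```javascript
// Vendor ranks as added to PVE.Utils; custom/unknown vendors sort last
// via the '_default_' rank. Sort-by-name within a vendor is assumed.
const cpuVendorOrder = { AMD: 1, Intel: 2, QEMU: 3, Host: 4, _default_: 5 };

function compareModels(a, b) {
    const ra = cpuVendorOrder[a.vendor] || cpuVendorOrder._default_;
    const rb = cpuVendorOrder[b.vendor] || cpuVendorOrder._default_;
    return (ra - rb) || a.name.localeCompare(b.name);
}
```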

Signed-off-by: Stefan Reiter 
---
 www/manager6/form/CPUModelSelector.js | 221 ++
 1 file changed, 81 insertions(+), 140 deletions(-)

diff --git a/www/manager6/form/CPUModelSelector.js 
b/www/manager6/form/CPUModelSelector.js
index 1d28ee88..cdb7ddcb 100644
--- a/www/manager6/form/CPUModelSelector.js
+++ b/www/manager6/form/CPUModelSelector.js
@@ -1,9 +1,19 @@
+Ext.define('PVE.data.CPUModel', {
+extend: 'Ext.data.Model',
+fields: [
+   {name: 'name'},
+   {name: 'vendor'},
+   {name: 'custom'},
+   {name: 'displayname'}
+]
+});
+
 Ext.define('PVE.form.CPUModelSelector', {
 extend: 'Proxmox.form.ComboGrid',
 alias: ['widget.CPUModelSelector'],
 
-valueField: 'value',
-displayField: 'value',
+valueField: 'name',
+displayField: 'displayname',
 
 emptyText: Proxmox.Utils.defaultText + ' (kvm64)',
 allowBlank: true,
@@ -19,157 +29,88 @@ Ext.define('PVE.form.CPUModelSelector', {
columns: [
{
header: gettext('Model'),
-   dataIndex: 'value',
+   dataIndex: 'displayname',
hideable: false,
sortable: true,
-   flex: 2
+   flex: 3
},
{
header: gettext('Vendor'),
dataIndex: 'vendor',
hideable: false,
sortable: true,
-   flex: 1
+   flex: 2
}
],
-   width: 320
+   width: 360
 },
 
 store: {
-   fields: [ 'value', 'vendor' ],
-   data: [
-   {
-   value: 'athlon',
-   vendor: 'AMD'
-   },
-   {
-   value: 'phenom',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G1',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G2',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G3',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G4',
-   vendor: 'AMD'
-   },
-   {
-   value: 'Opteron_G5',
-   vendor: 'AMD'
-   },
-   {
-   value: 'EPYC',
-   vendor: 'AMD'
-   },
-   {
-   value: '486',
-   vendor: 'Intel'
-   },
-   {
-   value: 'core2duo',
-   vendor: 'Intel'
-   },
-   {
-   value: 'coreduo',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium2',
-   vendor: 'Intel'
-   },
-   {
-   value: 'pentium3',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Conroe',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Penryn',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Nehalem',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Westmere',
-   vendor: 'Intel'
-   },
-   {
-   value: 'SandyBridge',
-   vendor: 'Intel'
-   },
-   {
-   value: 'IvyBridge',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Haswell',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Haswell-noTSX',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Broadwell',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Broadwell-noTSX',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Skylake-Client',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Skylake-Server',
-   vendor: 'Intel'
-   },
-   {
-   value: 'Cascadelake-Server',
-   vendor: 'Intel'
-   },
-   {
-   value: 'KnightsMill',
-   vendor: 'Intel'
-   },
-   {
-   value: 'kvm32',
-   vendor: 'QEMU'
-   },
-   {
-   value: 'kvm64',
-   vendor: 'QEMU'
-   },
-   {
-   value: 'qemu32',
-   vendor: 'QEMU'
-   },
-   {
-   value: 'qemu64',
-   vendor: 'QEMU'
-   },
-   {
-   value: 'host',
-   vendor: 'Host'
+   autoLoad: true

[pve-devel] [PATCH 0/6] Custom CPU models API/GUI basics

2020-04-22 Thread Stefan Reiter
Permission handling, the beginnings of the API and getting the GUI to play nice
with custom models (no editor yet, but it'll behave as expected if a determined
user creates a custom model by editing the config).

First 3 patches are API stuff, 4 is an independent UI fix/cleanup, rest are new
GUI stuff.


qemu-server: Stefan Reiter (2):
  api: check Sys.Audit permissions when setting a custom CPU model
  api: allow listing custom and default CPU models

 PVE/API2/Qemu.pm| 25 +++
 PVE/API2/Qemu/CPU.pm| 61 +
 PVE/API2/Qemu/Makefile  |  2 +-
 PVE/QemuServer/CPUConfig.pm | 30 ++
 4 files changed, 117 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/CPU.pm

manager: Stefan Reiter (4):
  api: register /nodes/X/cpu call for CPU models
  ui: ProcessorEdit: fix total core calculation and use view model
  ui: CPUModelSelector: use API call for store
  ui: ProcessorEdit: allow modifications with inaccessible CPU model

 PVE/API2/Nodes.pm |   7 +
 www/manager6/form/CPUModelSelector.js | 221 ++
 www/manager6/qemu/ProcessorEdit.js| 108 ++---
 3 files changed, 170 insertions(+), 166 deletions(-)

-- 
2.26.2

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server 2/6] api: allow listing custom and default CPU models

2020-04-22 Thread Stefan Reiter
More API calls will follow for this path; for now, add the 'index' call to
list all custom and default CPU models.

Any user can list the default CPU models, as these are public anyway, but
custom models are restricted to users with Sys.Audit on /nodes.
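The listing logic described above can be sketched outside Perl. A minimal JavaScript analogue of `get_cpu_models` (the model and vendor tables here are illustrative stand-ins, not the real PVE data):

```javascript
// Sketch: default models are always returned, custom models only when the
// caller has Sys.Audit on /nodes (passed in as includeCustom).
const defaultModels = { kvm64: 'QEMU', EPYC: 'AMD', 'Skylake-Server': 'Intel' };
const customModels = { mycpu: { 'reported-model': 'EPYC' } };

function getCpuModels(includeCustom) {
  const models = Object.entries(defaultModels).map(([name, vendor]) => (
    { name, custom: 0, vendor }
  ));
  if (!includeCustom) {
    return models;
  }
  for (const [name, conf] of Object.entries(customModels)) {
    // custom models are prefixed so they cannot clash with built-ins
    models.push({
      name: `custom-${name}`,
      custom: 1,
      vendor: defaultModels[conf['reported-model']],
    });
  }
  return models;
}
```

An unprivileged caller only ever sees the built-in list; a privileged one additionally gets the `custom-` prefixed entries with the vendor resolved via `reported-model`.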

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu/CPU.pm| 61 +
 PVE/API2/Qemu/Makefile  |  2 +-
 PVE/QemuServer/CPUConfig.pm | 30 ++
 3 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Qemu/CPU.pm

diff --git a/PVE/API2/Qemu/CPU.pm b/PVE/API2/Qemu/CPU.pm
new file mode 100644
index 000..b0bb32d
--- /dev/null
+++ b/PVE/API2/Qemu/CPU.pm
@@ -0,0 +1,61 @@
+package PVE::API2::Qemu::CPU;
+
+use strict;
+use warnings;
+
+use PVE::RESTHandler;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::QemuServer::CPUConfig;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method({
+name => 'index',
+path => '',
+method => 'GET',
+description => 'List all custom and default CPU models.',
+permissions => {
+   user => 'all',
+   description => 'Only returns custom models when the current user has'
+. ' Sys.Audit on /nodes.',
+},
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   },
+},
+returns => {
+   type => 'array',
+   items => {
+   type => 'object',
+   properties => {
+   name => {
+   type => 'string',
+   description => "Name of the CPU model. Identifies it for"
+. " subsequent API calls. Prefixed with"
+. " 'custom-' for custom models.",
+   },
+   custom => {
+   type => 'boolean',
+   description => "True if this is a custom CPU model.",
+   },
+   vendor => {
+   type => 'string',
+   description => "CPU vendor visible to the guest when this"
+. " model is selected. Vendor of"
+. " 'reported-model' in case of custom models.",
+   },
+   },
+   },
+   links => [ { rel => 'child', href => '{name}' } ],
+},
+code => sub {
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+   my $include_custom = $rpcenv->check($authuser, "/nodes", ['Sys.Audit'], 1);
+
+   return PVE::QemuServer::CPUConfig::get_cpu_models($include_custom);
+}});
+
+1;
diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index 20c2a6c..f4b7be6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm
+SOURCES=Agent.pm CPU.pm
 
 .PHONY: install
 install:
diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 61744dc..b884498 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -293,6 +293,36 @@ sub write_config {
 $class->SUPER::write_config($filename, $cfg);
 }
 
+sub get_cpu_models {
+my ($include_custom) = @_;
+
+my $models = [];
+
+for my $default_model (keys %{$cpu_vendor_list}) {
+   push @$models, {
+   name => $default_model,
+   custom => 0,
+   vendor => $cpu_vendor_list->{$default_model},
+   };
+}
+
+return $models if !$include_custom;
+
+my $conf = load_custom_model_conf();
+for my $custom_model (keys %{$conf->{ids}}) {
+   my $reported_model = $conf->{ids}->{$custom_model}->{'reported-model'};
+   $reported_model //= $cpu_fmt->{'reported-model'}->{default};
+   my $vendor = $cpu_vendor_list->{$reported_model};
+   push @$models, {
+   name => "custom-$custom_model",
+   custom => 1,
+   vendor => $vendor,
+   };
+}
+
+return $models;
+}
+
 sub is_custom_model {
 my ($cputype) = @_;
 return $cputype =~ m/^custom-/;
-- 
2.26.2




[pve-devel] [PATCH manager 6/6] ui: ProcessorEdit: allow modifications with inaccessible CPU model

2020-04-22 Thread Stefan Reiter
An administrator can set a custom CPU model for a VM even when a general user
does not have permission to use that particular model. Prior to this change,
the ProcessorEdit component would break in that situation, since the store of
the CPU type selector did not contain the configured CPU model.

Add it in manually if this situation occurs (with 'Unknown' vendor, since
we cannot retrieve it from the API), but warn the user that changing it
would be an irreversible action.
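The fallback can be shown as a small JavaScript sketch (a simplified stand-in for the store handling in the patch below, with illustrative record shapes):

```javascript
// Sketch: if the configured type is missing from the records loaded via the
// API, append a placeholder entry so the rest of the processor options stay
// editable. Vendor is 'Unknown' because it is not retrievable without
// Sys.Audit on /nodes.
function ensureModelPresent(records, configuredType) {
  if (!configuredType || records.some(r => r.name === configuredType)) {
    return records;
  }
  return records.concat({
    name: configuredType,
    displayname: configuredType.replace(/^custom-/, ''),
    custom: 1,
    vendor: 'Unknown',
  });
}
```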

Signed-off-by: Stefan Reiter 
---
 www/manager6/qemu/ProcessorEdit.js | 43 ++
 1 file changed, 43 insertions(+)

diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index d555b2d8..0eba746c 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -9,6 +9,7 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
data: {
socketCount: 1,
coreCount: 1,
+   showCustomModelPermWarning: false,
},
formulas: {
totalCoreCount: get => get('socketCount') * get('coreCount'),
@@ -73,6 +74,27 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
let totalCoreDisplay = me.lookupReference('totalcores');
totalCoreDisplay.originalValue = values.cores * values.sockets;
 
+   let type = values.cputype;
+   let typeSelector = me.lookupReference('cputype');
+   let typeStore = typeSelector.getStore();
+   typeStore.on('load', (store, records, success) => {
+   if (!success || !type || records.some(x => x.data.name === type)) {
+   return;
+   }
+
+   // if we get here, a custom CPU model is selected for the VM but we
+   // don't have permission to configure it - it will not be in the
+   // list retrieved from the API, so add it manually to allow changing
+   // other processor options
+   typeStore.add({
+   name: type,
+   displayname: type.replace(/custom-/, ''),
+   custom: 1,
+   vendor: gettext("Unknown"),
+   });
+   typeSelector.select(type);
+   });
+
me.callParent([values]);
 },
 
@@ -109,6 +131,7 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
{
xtype: 'CPUModelSelector',
name: 'cputype',
+   reference: 'cputype',
fieldLabel: gettext('Type')
},
{
@@ -122,6 +145,18 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
},
 ],
 
+columnB: [
+   {
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   value: gettext('WARNING: You do not have permission to configure custom CPU types, if you change the type you will not be able to go back!'),
+   hidden: true,
+   bind: {
+   hidden: '{!showCustomModelPermWarning}',
+   },
+   },
+],
+
 advancedColumn1: [
{
xtype: 'proxmoxintegerfield',
@@ -209,6 +244,14 @@ Ext.define('PVE.qemu.ProcessorEdit', {
if (cpu.flags) {
data.flags = cpu.flags;
}
+
+   let caps = Ext.state.Manager.get('GuiCap');
+   if (data.cputype.indexOf('custom-') === 0 &&
+   !caps.nodes['Sys.Audit'])
+   {
+   let vm = ipanel.getViewModel();
+   vm.set("showCustomModelPermWarning", true);
+   }
}
me.setValues(data);
}
-- 
2.26.2




[pve-devel] [PATCH manager 4/6] ui: ProcessorEdit: fix total core calculation and use view model

2020-04-22 Thread Stefan Reiter
Clean up the code in ProcessorEdit with a view model and fix a bug while at
it - previously, pressing the 'Reset' button on the form would always set
the value of the total core count field to 1.

Signed-off-by: Stefan Reiter 
---

The fix is technically only the setValues part, but I started off thinking that
using a view model would also fix it - it did not, but I still think it looks
nicer than before.

 www/manager6/qemu/ProcessorEdit.js | 65 ++
 1 file changed, 39 insertions(+), 26 deletions(-)

diff --git a/www/manager6/qemu/ProcessorEdit.js 
b/www/manager6/qemu/ProcessorEdit.js
index bc17e152..d555b2d8 100644
--- a/www/manager6/qemu/ProcessorEdit.js
+++ b/www/manager6/qemu/ProcessorEdit.js
@@ -5,28 +5,18 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
 
 insideWizard: false,
 
-controller: {
-   xclass: 'Ext.app.ViewController',
-
-   updateCores: function() {
-   var me = this.getView();
-   var sockets = me.down('field[name=sockets]').getValue();
-   var cores = me.down('field[name=cores]').getValue();
-   me.down('field[name=totalcores]').setValue(sockets*cores);
-   var vcpus = me.down('field[name=vcpus]');
-   vcpus.setMaxValue(sockets*cores);
-   vcpus.setEmptyText(sockets*cores);
-   vcpus.validate();
+viewModel: {
+   data: {
+   socketCount: 1,
+   coreCount: 1,
},
+   formulas: {
+   totalCoreCount: get => get('socketCount') * get('coreCount'),
+   },
+},
 
-   control: {
-   'field[name=sockets]': {
-   change: 'updateCores'
-   },
-   'field[name=cores]': {
-   change: 'updateCores'
-   }
-   }
+controller: {
+   xclass: 'Ext.app.ViewController',
 },
 
 onGetValues: function(values) {
@@ -76,6 +66,16 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
return values;
 },
 
+setValues: function(values) {
+   let me = this;
+
+   // set the original value here, else 'reset' clears the field
+   let totalCoreDisplay = me.lookupReference('totalcores');
+   totalCoreDisplay.originalValue = values.cores * values.sockets;
+
+   me.callParent([values]);
+},
+
 cpu: {},
 
 column1: [
@@ -86,7 +86,10 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
maxValue: 4,
value: '1',
fieldLabel: gettext('Sockets'),
-   allowBlank: false
+   allowBlank: false,
+   bind: {
+   value: '{socketCount}',
+   },
},
{
xtype: 'proxmoxintegerfield',
@@ -95,8 +98,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
maxValue: 128,
value: '1',
fieldLabel: gettext('Cores'),
-   allowBlank: false
-   }
+   allowBlank: false,
+   bind: {
+   value: '{coreCount}',
+   },
+   },
 ],
 
 column2: [
@@ -109,8 +115,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
xtype: 'displayfield',
fieldLabel: gettext('Total cores'),
name: 'totalcores',
-   value: '1'
-   }
+   reference: 'totalcores',
+   bind: {
+   value: '{totalCoreCount}',
+   },
+   },
 ],
 
 advancedColumn1: [
@@ -123,7 +132,11 @@ Ext.define('PVE.qemu.ProcessorInputPanel', {
fieldLabel: gettext('VCPUs'),
deleteEmpty: true,
allowBlank: true,
-   emptyText: '1'
+   emptyText: '1',
+   bind: {
+   emptyText: '{totalCoreCount}',
+   maxValue: '{totalCoreCount}',
+   },
},
{
xtype: 'numberfield',
-- 
2.26.2




[pve-devel] [PATCH manager 3/6] api: register /nodes/X/cpu call for CPU models

2020-04-22 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Depends on updated qemu-server.

 PVE/API2/Nodes.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 58497b2b..2ac838ea 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -34,6 +34,7 @@ use PVE::API2::Tasks;
 use PVE::API2::Scan;
 use PVE::API2::Storage::Status;
 use PVE::API2::Qemu;
+use PVE::API2::Qemu::CPU;
 use PVE::API2::LXC;
 use PVE::API2::LXC::Status;
 use PVE::API2::VZDump;
@@ -65,6 +66,11 @@ __PACKAGE__->register_method ({
 path => 'qemu',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Qemu::CPU",
+path => 'cpu',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::LXC",
 path => 'lxc',
@@ -241,6 +247,7 @@ __PACKAGE__->register_method ({
{ name => 'certificates' },
{ name => 'config' },
{ name => 'hosts' },
+   { name => 'cpu' },
];
 
push @$result, { name => 'sdn' } if $have_sdn;
-- 
2.26.2




Re: [pve-devel] [PATCH qemu-server] fix #2697: map netdev_add options to correct json types

2020-04-21 Thread Stefan Reiter

On 21/04/2020 16:43, Dominik Csapak wrote:

On 4/21/20 4:38 PM, Stefan Reiter wrote:

On 21/04/2020 16:01, Dominik Csapak wrote:

netdev_add is now a proper qmp command, which means that it verifies
the parameter types properly



The patch looks reasonable, but I'm confused how that worked before? 
'git blame' tells me the typed QMP schema for netdev_add has been in 
QEMU since 2017..




yes but it changed recently (17. March 2020)
its commit:

db2a380c84574d8c76d7193b8af8535234fe5156
net: Complete qapi-fication of netdev_add

it changes netdev_add from

{ 'command': 'netdev_add',
   'data': {'type': 'str', 'id': 'str'},
   'gen': false }    # so we can get the additional arguments

to
{ 'command': 'netdev_add', 'data': 'Netdev', 'boxed': true }



ahh must have missed that - makes more sense now.
Then I suppose:

Reviewed-by: Stefan Reiter 



Re: [pve-devel] [PATCH qemu-server] fix #2697: map netdev_add options to correct json types

2020-04-21 Thread Stefan Reiter

On 21/04/2020 16:01, Dominik Csapak wrote:

netdev_add is now a proper qmp command, which means that it verifies
the parameter types properly



The patch looks reasonable, but I'm confused how that worked before? 
'git blame' tells me the typed QMP schema for netdev_add has been in 
QEMU since 2017..



instead of sending strings, we now have to choose the correct
types for the parameters

bool for vhost
and uint64 for queues

Signed-off-by: Dominik Csapak 
---
i checked and i found no other parameter or qmp call that needs
changing, but maybe someone else can also check this, just to be sure

  PVE/QemuServer.pm | 8 
  1 file changed, 8 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 37c7320..030e04b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4047,6 +4047,14 @@ sub qemu_netdevadd {
  my $netdev = print_netdev_full($vmid, $conf, $arch, $device, $deviceid, 1);
  my %options =  split(/[=,]/, $netdev);
  
+if (defined(my $vhost = $options{vhost})) {
+   $options{vhost} = JSON::boolean(PVE::JSONSchema::parse_boolean($vhost));
+}
+
+if (defined(my $queues = $options{queues})) {
+   $options{queues} = $queues + 0;
+}
+
  mon_cmd($vmid, "netdev_add",  %options);
  return 1;
  }





Re: [pve-devel] [PATCH v3 0/3] Support all 8 corosync3 links in GUI

2020-04-21 Thread Stefan Reiter

ping, I think we'd want this in 6.2?

On 23/03/2020 13:41, Stefan Reiter wrote:

v2 -> v3:
* add patch 1 (localization fix)
* implement changes from Dominik's review:
   * use 'let' in new code
   * use references for element lookup
   * some code style nits
   * fix formatting (simpler in general with hbox, and also should work for all
 languages now)
   * fix IPv6 address selection

Note: I didn't include any changes to onInputTypeChange as proposed by Dominik
(using bindings instead of set-function calls). I couldn't quite wrap my head
around the ExtJS side of that, so usually I'd like to talk this through in
person, but since Dominik's on vacation and talking face-to-face right now is...
well, not recommended in general, I left it out for now.

This could easily be done in a followup though and wouldn't change the
interface for the user, so I hope that's okay.

RFC -> v2:
* rebased on master
* slight rewording


manager: Stefan Reiter (3):
   gui/cluster: fix translation for cluster join button
   gui/cluster: add CorosyncLinkEdit component to support up to 8 links
   gui/cluster: add structured peerLinks to join info

  www/manager6/Makefile   |   1 +
  www/manager6/dc/Cluster.js  |  13 +-
  www/manager6/dc/ClusterEdit.js  | 194 ++---
  www/manager6/dc/CorosyncLinkEdit.js | 425 
  4 files changed, 534 insertions(+), 99 deletions(-)
  create mode 100644 www/manager6/dc/CorosyncLinkEdit.js





[pve-devel] [PATCH manager] HDEdit: warn when IO threads are enabled with incompatible controller

2020-04-16 Thread Stefan Reiter
The only warning displayed before was in the "VM Start" task log, rather
hidden. In the wizard we already auto-selected the correct controller, but
not when modifying a disk on an existing VM.

Don't break existing behaviour (to allow users to disable IO threads for
VMs that currently have it set but an incompatible controller), but do warn
them that the setting will be ignored.
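The visibility rule the patch implements boils down to one predicate; a minimal JavaScript sketch (function name is illustrative):

```javascript
// Sketch of the hint's visibility rule: IO threads only take effect for
// VirtIO Block disks or with the 'VirtIO SCSI single' controller, so warn
// when the option is set on a SCSI disk with any other controller model.
function showIOThreadHint(iothread, driveInterface, scsihw) {
  return Boolean(iothread) &&
    driveInterface === 'scsi' &&
    scsihw !== 'virtio-scsi-single';
}
```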

Signed-off-by: Stefan Reiter 
---
 www/manager6/qemu/HDEdit.js | 28 +++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js
index fd890600..d50fe077 100644
--- a/www/manager6/qemu/HDEdit.js
+++ b/www/manager6/qemu/HDEdit.js
@@ -35,6 +35,19 @@ Ext.define('PVE.qemu.HDInputPanel', {
this.lookup('scsiController').setVisible(value.match(/^scsi/));
},
 
+   updateIOThreadHint: function() {
+   let me = this;
+   let view = me.getView();
+   if (view.insideWizard) {
+   // we autoselect a compatible controller in the wizard
+   return;
+   }
+
+   let visible = view.drive.interface === 'scsi' &&
+   view.vmconfig.scsihw !== 'virtio-scsi-single';
+   me.lookupReference('iothreadHint').setVisible(visible);
+   },
+
control: {
'field[name=controller]': {
change: 'onControllerChange',
@@ -42,6 +55,12 @@ Ext.define('PVE.qemu.HDInputPanel', {
},
'field[name=iothread]' : {
change: function(f, value) {
+   if (value) {
+   this.updateIOThreadHint();
+   } else {
+   this.lookupReference('iothreadHint').setVisible(false);
+   }
+
if (!this.getView().insideWizard) {
return;
}
@@ -285,7 +304,14 @@ Ext.define('PVE.qemu.HDInputPanel', {
fieldLabel: gettext('Write limit') + ' (ops/s)',
labelWidth: labelWidth,
emptyText: gettext('unlimited')
-   }
+   },
+   {
+   xtype: 'displayfield',
+   userCls: 'pmx-hint',
+   reference: 'iothreadHint',
+   value: gettext("IO threads will only be used with VirtIO Block disks or when using the 'VirtIO SCSI single' controller model!"),
+   hidden: true,
+   },
);
 
me.advancedColumn2.push(
-- 
2.26.1




[pve-devel] [PATCH qemu-server 2/2] fix #2671: include CPU format in man page again

2020-04-16 Thread Stefan Reiter
Use the new register_format(3) call to use a validator (instead of a
parser) for 'pve-vm-cpu-conf'. This way the $cpu_fmt hash can be used for
generating the documentation, while still applying the same verification
rules as before.

'pve-cpu-conf' is reduced to a simple hash-based format, which currently is
inconsequential as it is not used anywhere, and in the future simply means
that the !$cpu->{cputype} check has to be performed in the API calls which
will use it.

Since the function no longer parses but only verifies, the parsing in
print_cpu_device/get_cpu_options has to go via JSONSchema directly.

With those changes it became visible that the 'phys-bits' format was
actually invalid, so add a format_description to fix it.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 60 -
 1 file changed, 19 insertions(+), 41 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 61744dc..85f6ee4 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -153,6 +153,7 @@ my $cpu_fmt = {
 'phys-bits' => {
type => 'string',
format => 'pve-phys-bits',
+   format_description => '8-64|host',
description => "The physical memory address bits that are reported to"
 . " the guest OS. Should be smaller or equal to the host's."
 . " Set to 'host' to use value from host CPU, but note that"
@@ -183,56 +184,33 @@ sub parse_phys_bits {
 # $cpu_fmt describes both the CPU config passed as part of a VM config, as well
 # as the definition of a custom CPU model. There are some slight differences
 # though, which we catch in the custom verification function below.
-PVE::JSONSchema::register_format('pve-cpu-conf', \&parse_cpu_conf_basic);
-sub parse_cpu_conf_basic {
-my ($cpu_str, $noerr) = @_;
-
-my $cpu = eval { PVE::JSONSchema::parse_property_string($cpu_fmt, $cpu_str) };
-if ($@) {
-die $@ if !$noerr;
-return undef;
-}
-
-# required, but can't be forced in schema since it's encoded in section
-# header for custom models
-if (!$cpu->{cputype}) {
-   die "CPU is missing cputype\n" if !$noerr;
-   return undef;
-}
-
-return $cpu;
-}
+PVE::JSONSchema::register_format('pve-cpu-conf', $cpu_fmt);
+PVE::JSONSchema::register_format('pve-vm-cpu-conf', $cpu_fmt, \&validate_vm_cpu_conf);
+sub validate_vm_cpu_conf {
+my ($cpu) = @_;
 
-PVE::JSONSchema::register_format('pve-vm-cpu-conf', \&parse_vm_cpu_conf);
-sub parse_vm_cpu_conf {
-my ($cpu_str, $noerr) = @_;
-
-my $cpu = parse_cpu_conf_basic($cpu_str, $noerr);
-return undef if !$cpu;
+die "internal error: cannot validate CPU model after parsing error\n"
+   if !$cpu;
 
 my $cputype = $cpu->{cputype};
 
+# required, but can't be forced in schema since it's encoded in section
+# header for custom models
+die "CPU is missing cputype\n" if !$cputype;
+
 # a VM-specific config is only valid if the cputype exists
 if (is_custom_model($cputype)) {
-   eval { get_custom_model($cputype); };
-   if ($@) {
-   die $@ if !$noerr;
-   return undef;
-   }
+   # dies on unknown model
+   get_custom_model($cputype);
 } else {
-   if (!defined($cpu_vendor_list->{$cputype})) {
-   die "Built-in cputype '$cputype' is not defined (missing 'custom-' prefix?)\n" if !$noerr;
-   return undef;
-   }
+   die "Built-in cputype '$cputype' is not defined (missing 'custom-' prefix?)\n"
+   if !defined($cpu_vendor_list->{$cputype});
 }
 
 # in a VM-specific config, certain properties are limited/forbidden
 
-if ($cpu->{flags} && $cpu->{flags} !~ m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/) {
-   die "VM-specific CPU flags must be a subset of: @{[join(', ', @supported_cpu_flags)]}\n"
-   if !$noerr;
-   return undef;
-}
+die "VM-specific CPU flags must be a subset of: @{[join(', ', @supported_cpu_flags)]}\n"
+   if ($cpu->{flags} && $cpu->{flags} !~ m/$cpu_flag_supported_re(;$cpu_flag_supported_re)*/);
 
 die "Property 'reported-model' not allowed in VM-specific CPU config.\n"
if defined($cpu->{'reported-model'});
@@ -329,7 +307,7 @@ sub print_cpu_device {
 my $kvm = $conf->{kvm} // 1;
 my $cpu = $kvm ? "kvm64" : "qemu64";
 if (my $cputype = $conf->{cpu}) {
-   my $cpuconf = parse_cpu_conf_basic($cputype)
+   my $cpuconf = PVE::JSONSchema::check_format('pve-vm-cpu-conf', $cputype)
or die "Cannot parse cpu description: $cputype\n";
$cpu = $cpuconf->{cputype};
 
@@ -441,7 +419,7 @@ sub get_cpu_options {
 my $custom_cpu;
 my $hv_vendor_id;
 if (my $cpu_prop_st

[pve-devel] [PATCH common 1/2] JSONSchema: add format validator support and cleanup check_format

2020-04-16 Thread Stefan Reiter
Adds a third, optional parameter to register_format that allows specifying
a function that will be called after parsing and can validate the parsed
data. A validator should die on failed validation, and can also change the
parsed object by returning a modified version of it.

This is useful so one can register a format with its hash, thus allowing
documentation to be generated automatically, while still enforcing certain
validation rules.

And since I found it impossible to extend the current check_format code,
clean that function up as well (which pretty much amounts to a rewrite).
Also use get_format consistently to avoid future breakages.

No existing functionality is intentionally changed.

Signed-off-by: Stefan Reiter 
---

@Fabian G.: Since we discussed this off-list, is this good? At least there
shouldn't be a possibility for a parser/hash format drift the way I've
implemented it.
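The parse-then-validate contract the patch introduces can be sketched in a few lines of JavaScript (a simplified analogue of register_format/check_format, not the Perl API; the "k=v" parsing stands in for parse_property_string):

```javascript
// Simplified analogue of the new contract: a hash-based format is parsed
// first, then an optional validator may reject the result (by throwing)
// or return a modified version of it.
const formats = {};
const validators = {};

function registerFormat(name, format, validator) {
  if (formats[name]) throw new Error(`format '${name}' already registered`);
  if (validator && typeof format !== 'object') {
    throw new Error('a validator can only be specified for hash-based formats');
  }
  formats[name] = format;
  if (validator) validators[name] = validator;
}

function checkFormat(name, value) {
  const format = formats[name];
  if (!format) throw new Error(`undefined format '${name}'`);
  // stand-in for parse_property_string: split "k=v" pairs into an object
  const parsed = Object.fromEntries(value.split(',').map(p => p.split('=')));
  const validator = validators[name];
  return validator ? validator(parsed) : parsed;
}
```

This is what lets the hash drive the documentation while the validator still enforces rules (like the mandatory `cputype`) that the schema alone cannot express.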

 src/PVE/JSONSchema.pm | 60 +++
 1 file changed, 38 insertions(+), 22 deletions(-)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 01a3cce..106a812 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -120,19 +120,26 @@ register_standard_option('pve-snapshot-name', {
 });
 
 my $format_list = {};
+my $format_validators = {};
 
 sub register_format {
-my ($format, $code) = @_;
+my ($name, $format, $validator) = @_;
 
-die "JSON schema format '$format' already registered\n"
-   if $format_list->{$format};
+die "JSON schema format '$name' already registered\n"
+   if $format_list->{$name};
 
-$format_list->{$format} = $code;
+if ($validator) {
+   die "A \$validator function can only be specified for hash-based formats\n"
+   if ref($format) ne 'HASH';
+   $format_validators->{$name} = $validator;
+}
+
+$format_list->{$name} = $format;
 }
 
 sub get_format {
-my ($format) = @_;
-return $format_list->{$format};
+my ($name) = @_;
+return $format_list->{$name};
 }
 
 my $renderer_hash = {};
@@ -646,13 +653,16 @@ sub pve_verify_tfa_secret {
 sub check_format {
 my ($format, $value, $path) = @_;
 
-return parse_property_string($format, $value, $path) if ref($format) eq 'HASH';
 return if $format eq 'regex';
 
-if ($format =~ m/^(.*)-a?list$/) {
+my $parsed;
 
-   my $code = $format_list->{$1};
+if (ref($format) eq 'HASH') {
+   # in case it's a hash we can't have a validator registered, so return
+   return parse_property_string($format, $value, $path);
 
+} elsif ($format =~ m/^(.*)-a?list$/) {
+   my $code = get_format($1);
die "undefined format '$format'\n" if !$code;
 
# Note: we allow empty lists
@@ -660,25 +670,31 @@ sub check_format {
&$code($v);
}
 
-} elsif ($format =~ m/^(.*)-opt$/) {
-
-   my $code = $format_list->{$1};
+   # since the list might contain multiple values, we don't allow running
+   # validator functions either
+   return undef;
 
+} elsif ($format =~ m/^(.*)-opt$/) {
+   my $code = get_format($1);
die "undefined format '$format'\n" if !$code;
 
-   return if !$value; # allow empty string
-
-   &$code($value);
+   # empty string is allowed
+   $parsed = $code->($value) if $value;
 
} else {
+   my $registered_format = get_format($format);
+   die "undefined format '$format'\n" if !$registered_format;
 
-   my $code = $format_list->{$format};
-
-   die "undefined format '$format'\n" if !$code;
-
-   return parse_property_string($code, $value, $path) if ref($code) eq 'HASH';
-   &$code($value);
+   if (ref($registered_format) eq 'HASH') {
+   $parsed = parse_property_string($registered_format, $value, $path);
+   } else {
+   $parsed = $registered_format->($value);
+   }
 }
+
+my $validator = $format_validators->{$format};
+$parsed = $validator->($parsed) if $validator;
+return $parsed;
 }
 
 sub parse_size {
@@ -735,7 +751,7 @@ sub parse_property_string {
 
 # Support named formats here, too:
 if (!ref($format)) {
-   if (my $desc = $format_list->{$format}) {
+   if (my $desc = get_format($format)) {
$format = $desc;
} else {
die "unknown format: $format\n";
-- 
2.26.1




[pve-devel] [PATCH qemu-server] Ignore version checks when using QEMU -rc releases

2020-04-08 Thread Stefan Reiter
Upstream marks these as having a micro-version of >= 90. Unfortunately, the
machine versions are bumped earlier, so testing them is made unnecessarily
difficult, since the version checking code would abort on migrations etc.
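The detection boils down to treating a micro version of 90 or above as a release candidate; a minimal sketch of that check (function name is illustrative):

```javascript
// QEMU marks release candidates with a micro version >= 90 (e.g. a 4.2.90
// binary is an RC for the upcoming release), so version checks can be
// skipped for those builds.
function isReleaseCandidate(kvmver) {
  const m = kvmver.match(/^\d+\.\d+\.(\d+)/);
  return m !== null && Number(m[1]) >= 90;
}
```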

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e9eb421..4d05db3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2973,13 +2973,18 @@ sub config_to_command {
 
 $machine_version =~ m/(\d+)\.(\d+)/;
 my ($machine_major, $machine_minor) = ($1, $2);
-die "Installed QEMU version '$kvmver' is too old to run machine type '$machine_type', please upgrade node '$nodename'\n"
-   if !PVE::QemuServer::min_version($kvmver, $machine_major, $machine_minor);
 
-if (!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, $kvmver)) {
-   my $max_pve_version = PVE::QemuServer::Machine::get_pve_version($machine_version);
-   die "Installed qemu-server (max feature level for $machine_major.$machine_minor is pve$max_pve_version)"
- . " is too old to run machine type '$machine_type', please upgrade node '$nodename'\n";
+if ($kvmver =~ m/^\d+\.\d+\.(\d+)/ && $1 >= 90) {
+   warn "warning: Installed QEMU version ($kvmver) is a release candidate, ignoring version checks\n";
+} else {
+   die "Installed QEMU version '$kvmver' is too old to run machine type '$machine_type', please upgrade node '$nodename'\n"
+   if !PVE::QemuServer::min_version($kvmver, $machine_major, $machine_minor);
+
+   if (!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, $kvmver)) {
+   my $max_pve_version = PVE::QemuServer::Machine::get_pve_version($machine_version);
+   die "Installed qemu-server (max feature level for $machine_major.$machine_minor is pve$max_pve_version)"
+   . " is too old to run machine type '$machine_type', please upgrade node '$nodename'\n";
+   }
 }
 
 # if a specific +pve version is required for a feature, use $version_guard
-- 
2.26.0




[pve-devel] [PATCH v10 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-04-07 Thread Stefan Reiter
Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.

No functional change with non-custom CPU types intended.
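The runningcpu round trip described above can be sketched as a pair of plain functions (a simplified JavaScript illustration of the flow, not the Perl code; names are illustrative):

```javascript
// Sketch: on snapshot/suspend, record the live -cpu argument in the config;
// on rollback, move it out of the config and into a start-time override,
// mirroring how runningmachine becomes forcemachine.
function snapshotSave(conf, runningcpu) {
  return { ...conf, runningcpu };
}

function rollbackHook(conf) {
  const { runningcpu, ...rest } = conf;
  const startParams = runningcpu !== undefined ? { forcecpu: runningcpu } : {};
  return { conf: rest, startParams };
}
```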

Signed-off-by: Stefan Reiter 
---

Changes in v10:
* Use PVE::QemuServer::CPUConfig::get_cpu_from_running_vm helper

 PVE/API2/Qemu.pm  |  1 +
 PVE/QemuConfig.pm | 26 +++---
 PVE/QemuServer.pm | 31 ++-
 3 files changed, 42 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 9e453d2..ff61e4d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1129,6 +1129,7 @@ my $update_vm_api  = sub {
push @delete, 'lock'; # this is the real deal to write it out
}
push @delete, 'runningmachine' if $conf->{runningmachine};
+   push @delete, 'runningcpu' if $conf->{runningcpu};
}
 
PVE::QemuConfig->check_lock($conf) if !$skiplock;
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index f32618e..d29b88b 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::AbstractConfig;
 use PVE::INotify;
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -186,15 +187,20 @@ sub __snapshot_save_vmstate {
 my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 'raw', $name, $size*1024);
 my $runningmachine = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
-if ($suspend) {
-   $conf->{vmstate} = $statefile;
-   $conf->{runningmachine} = $runningmachine;
-} else {
-   my $snap = $conf->{snapshots}->{$snapname};
-   $snap->{vmstate} = $statefile;
-   $snap->{runningmachine} = $runningmachine;
+# get current QEMU -cpu argument to ensure consistency of custom CPU models
+my $runningcpu;
+if (my $pid = PVE::QemuServer::check_running($vmid)) {
+   $runningcpu = PVE::QemuServer::CPUConfig::get_cpu_from_running_vm($pid);
+}
+
+if (!$suspend) {
+   $conf = $conf->{snapshots}->{$snapname};
 }
 
+$conf->{vmstate} = $statefile;
+$conf->{runningmachine} = $runningmachine;
+$conf->{runningcpu} = $runningcpu;
+
 return $statefile;
 }
 
@@ -340,6 +346,11 @@ sub __snapshot_rollback_hook {
if (defined($conf->{runningmachine})) {
$data->{forcemachine} = $conf->{runningmachine};
delete $conf->{runningmachine};
+
+   # runningcpu is newer than runningmachine, so assume it only exists
+   # here, if at all
+   $data->{forcecpu} = delete $conf->{runningcpu}
+   if defined($conf->{runningcpu});
} else {
# Note: old code did not store 'machine', so we try to be smart
# and guess the snapshot was generated with kvm 1.4 (pc-i440fx-1.4).
@@ -395,6 +406,7 @@ sub __snapshot_rollback_vm_start {
 my $params = {
statefile => $vmstate,
forcemachine => $data->{forcemachine},
+   forcecpu => $data->{forcecpu},
 };
 PVE::QemuServer::vm_start($storecfg, $vmid, $params);
 }
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a0f8429..2d5ae7d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -567,8 +567,15 @@ EODESCR
optional => 1,
 }),
 runningmachine => get_standard_option('pve-qemu-machine', {
-   description => "Specifies the Qemu machine type of the running vm. This is used internally for snapshots.",
+   description => "Specifies the QEMU machine type of the running vm. This is used internally for snapshots.",
 }),
+runningcpu => {
+   description => "Specifies the QEMU '-cpu' parameter of the running vm. This is used internally for snapshots.",
+   optional => 1,
+   type => 'string',
+   pattern => $PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re,
+   format_description => 'QEMU -cpu parameter'
+},
 machine => get_standard_option('pve-qemu-machine'),
 arch => {
description => "Virtual processor architecture. Defaults to the host.",
@@ -1948,7 +1955,8 @@ sub json_config_properties {
 my $prop = shift;
 
 foreach my $opt (keys %$confdesc) {
-   next if $opt eq 'parent' || $opt eq 'snaptime' || $opt eq 'vmstate' || $opt eq 'runningmachine';
+   next if $opt eq 'parent' || $opt eq 'snaptime' || $opt eq 'vmstate' ||
+   $opt eq 'runningmachine' || $opt eq 'runningcpu';
$prop->{$opt} = $confdesc->{$opt};
 }
 
@@ -4845,14 +4853,16 @@ sub vm_start_nolock {
 PVE::Guest

[pve-devel] [PATCH v10 qemu-server 4/6] Rework get_cpu_options and allow custom CPU models

2020-04-07 Thread Stefan Reiter
If a cputype is custom (check via prefix), try to load options from the
custom CPU model config, and set values accordingly.

While at it, extract the currently hardcoded values into a separate sub and add
reasoning.

Since the new flag resolving outputs flags in sorted order for
consistency, adapt the test cases to not break. Only the order is
changed, not which flags are present.

Signed-off-by: Stefan Reiter 
Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 
---
 PVE/QemuServer/CPUConfig.pm   | 191 +-
 test/cfg2cmd/efi-raw-old.conf.cmd |   2 +-
 test/cfg2cmd/efi-raw.conf.cmd |   2 +-
 test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
 test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
 test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
 test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/simple1.conf.cmd |   2 +-
 test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
 test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
 test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
 test/cfg2cmd/spice-win.conf.cmd   |   2 +-
 14 files changed, 154 insertions(+), 63 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 9fa6af9..af31b2b 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -384,99 +384,190 @@ sub print_cpuflag_hash {
 return $formatted;
 }
 
+sub parse_cpuflag_list {
+my ($re, $reason, $flaglist) = @_;
+
+my $res = {};
+return $res if !$flaglist;
+
+foreach my $flag (split(";", $flaglist)) {
+   if ($flag =~ $re) {
+   $res->{$2} = { op => $1, reason => $reason };
+   }
+}
+
+return $res;
+}
+
 # Calculate QEMU's '-cpu' argument from a given VM configuration
 sub get_cpu_options {
 my ($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough) = @_;
 
-my $cpuFlags = [];
-my $ostype = $conf->{ostype};
-
-my $cpu = $kvm ? "kvm64" : "qemu64";
+my $cputype = $kvm ? "kvm64" : "qemu64";
 if ($arch eq 'aarch64') {
-   $cpu = 'cortex-a57';
-}
-my $hv_vendor_id;
-if (my $cputype = $conf->{cpu}) {
-   my $cpuconf = PVE::JSONSchema::parse_property_string($cpu_fmt, $cputype)
-   or die "Cannot parse cpu description: $cputype\n";
-   $cpu = $cpuconf->{cputype};
-   $kvm_off = 1 if $cpuconf->{hidden};
-   $hv_vendor_id = $cpuconf->{'hv-vendor-id'};
-
-   if (defined(my $flags = $cpuconf->{flags})) {
-   push @$cpuFlags, split(";", $flags);
-   }
+   $cputype = 'cortex-a57';
 }
 
-push @$cpuFlags , '+lahf_lm' if $cpu eq 'kvm64' && $arch eq 'x86_64';
+my $cpu = {};
+my $custom_cpu;
+my $hv_vendor_id;
+if (my $cpu_prop_str = $conf->{cpu}) {
+   $cpu = parse_vm_cpu_conf($cpu_prop_str)
+   or die "Cannot parse cpu description: $cpu_prop_str\n";
 
-push @$cpuFlags , '-x2apic' if $ostype && $ostype eq 'solaris';
+   $cputype = $cpu->{cputype};
 
-push @$cpuFlags, '+sep' if $cpu eq 'kvm64' || $cpu eq 'kvm32';
+   if (is_custom_model($cputype)) {
+   $custom_cpu = get_custom_model($cputype);
 
-push @$cpuFlags, '-rdtscp' if $cpu =~ m/^Opteron/;
+   $cputype = $custom_cpu->{'reported-model'} //
+   $cpu_fmt->{'reported-model'}->{default};
+   $kvm_off = $custom_cpu->{hidden}
+   if defined($custom_cpu->{hidden});
+   $hv_vendor_id = $custom_cpu->{'hv-vendor-id'};
+   }
 
-if (min_version($machine_version, 2, 3) && $arch eq 'x86_64') {
+   # VM-specific settings override custom CPU config
+   $kvm_off = $cpu->{hidden}
+   if defined($cpu->{hidden});
+   $hv_vendor_id = $cpu->{'hv-vendor-id'}
+   if defined($cpu->{'hv-vendor-id'});
+}
 
-   push @$cpuFlags , '+kvm_pv_unhalt' if $kvm;
-   push @$cpuFlags , '+kvm_pv_eoi' if $kvm;
+my $pve_flags = get_pve_cpu_flags($conf, $kvm, $cputype, $arch,
+ $machine_version);
+
+my $hv_flags = get_hyperv_enlightenments($winversion, $machine_version,
+   $conf->{bios}, $gpu_passthrough, $hv_vendor_id) if $kvm;
+
+my $custom_cputype_flags = parse_cpuflag_list($cpu_flag_any_re,
+   "set by custom CPU model", $custom_cpu->{flags});
+
+my $vm_flags = parse_cpuflag_list($cpu_flag_supported_re,
+   "manually set for VM", $cpu->{flags});
+
+my $pve_forced_flags = {};
+$pve_forced_flags->{'enforce'} = {
+   reason => "error if requested CPU settings not available",
+} if $cputype ne 'host' && $kvm &&a

[pve-devel] [PATCH v10 qemu-server 5/6] fix #2318: allow phys-bits CPU setting

2020-04-07 Thread Stefan Reiter
Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).

QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.
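For illustration only (not part of the patch): the 1 TB limit follows from QEMU's default of 40 physical address bits on x86, since each additional bit doubles the addressable range. A minimal sketch of that arithmetic, with the 40-bit default as an assumption:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each physical address bit doubles the addressable range, so a guest
# with N phys-bits can address 2^(N-40) TiB (2^40 bytes = 1 TiB).
sub max_addressable_tib {
    my ($phys_bits) = @_;
    return 2 ** ($phys_bits - 40);
}

printf "%d bits => %d TiB\n", $_, max_addressable_tib($_) for (40, 42, 46);
```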

Signed-off-by: Stefan Reiter 
---

Changes in v10:
* Change phys-bits format to 'host'/8-64 instead of separate 'host-phys-bits'
  flag

 PVE/QemuServer/CPUConfig.pm | 41 +
 1 file changed, 41 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index af31b2b..b08588e 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -149,8 +149,36 @@ my $cpu_fmt = {
pattern => qr/$cpu_flag_any_re(;$cpu_flag_any_re)*/,
optional => 1,
 },
+'phys-bits' => {
+   type => 'string',
+   format => 'pve-phys-bits',
+   description => "The physical memory address bits that are reported to"
+. " the guest OS. Should be smaller or equal to the host's."
+. " Set to 'host' to use value from host CPU, but note that"
+. " doing so will break live migration to CPUs with other values.",
+   optional => 1,
+},
 };
 
+PVE::JSONSchema::register_format('pve-phys-bits', \&parse_phys_bits);
+sub parse_phys_bits {
+my ($str, $noerr) = @_;
+
+my $err_msg = "value must be an integer between 8 and 64 or 'host'\n";
+
+if ($str !~ m/^(host|\d{1,2})$/) {
+   die $err_msg if !$noerr;
+   return undef;
+}
+
+if ($str =~ m/^\d+$/ && (int($str) < 8 || int($str) > 64)) {
+   die $err_msg if !$noerr;
+   return undef;
+}
+
+return $str;
+}
+
 # $cpu_fmt describes both the CPU config passed as part of a VM config, as well
 # as the definition of a custom CPU model. There are some slight differences
 # though, which we catch in the custom verification function below.
@@ -472,6 +500,19 @@ sub get_cpu_options {
 $cpu_str .= resolve_cpu_flags($pve_flags, $hv_flags, $custom_cputype_flags,
  $vm_flags, $pve_forced_flags);
 
+my $phys_bits = '';
+foreach my $conf ($custom_cpu, $cpu) {
+   next if !defined($conf);
+   my $conf_val = $conf->{'phys-bits'};
+   next if !$conf_val;
+   if ($conf_val eq 'host') {
+   $phys_bits = ",host-phys-bits=true";
+   } else {
+   $phys_bits = ",phys-bits=$conf_val";
+   }
+}
+$cpu_str .= $phys_bits;
+
 return ('-cpu', $cpu_str);
 }
 
-- 
2.26.0
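A standalone round-trip of the 'pve-phys-bits' validation semantics from the patch above — the literal 'host' or an integer between 8 and 64 passes, everything else dies (or returns undef in "no error" mode). This mirrors the patched parse_phys_bits rather than importing qemu-server:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Mirrors the 'pve-phys-bits' format validator introduced in the patch:
# accepts 'host' or an integer 8..64, rejects everything else.
sub parse_phys_bits {
    my ($str, $noerr) = @_;

    my $err_msg = "value must be an integer between 8 and 64 or 'host'\n";

    if ($str !~ m/^(host|\d{1,2})$/) {
        die $err_msg if !$noerr;
        return undef;
    }
    if ($str =~ m/^\d+$/ && (int($str) < 8 || int($str) > 64)) {
        die $err_msg if !$noerr;
        return undef;
    }
    return $str;
}

print parse_phys_bits('host'), "\n";                           # host
print parse_phys_bits('40'), "\n";                             # 40
print defined(parse_phys_bits('65', 1)) ? 'ok' : 'rejected', "\n";  # rejected
```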




[pve-devel] [PATCH v10 qemu-server 3/6] Add helpers to better structure CPU option handling

2020-04-07 Thread Stefan Reiter
To avoid hardcoding even more CPU-flag related things for custom CPU
models, introduce a dynamic approach to resolving flags.

resolve_cpu_flags takes a list of hashes (as documented in the
comment) and resolves them to a valid "-cpu" argument without
duplicates. This also helps by providing a reason why specific CPU flags
have been added, and thus allows for useful warning messages should a
flag be overwritten by another.

Signed-off-by: Stefan Reiter 
Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 
---
 PVE/QemuServer/CPUConfig.pm | 64 +
 1 file changed, 64 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 70b96be..9fa6af9 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -320,6 +320,70 @@ sub print_cpu_device {
 return "$cpu-x86_64-cpu,id=cpu$id,socket-id=$current_socket,core-id=$current_core,thread-id=0";
 }
 
+# Resolves multiple arrays of hashes representing CPU flags with metadata to a
+# single string in QEMU "-cpu" compatible format. Later arrays have higher
+# priority.
+#
+# Hashes take the following format:
+# {
+# aes => {
+# op => "+", # defaults to "" if undefined
+# reason => "to support AES acceleration", # for override warnings
+# value => "" # needed for kvm=off (value: off) etc...
+# },
+# ...
+# }
+sub resolve_cpu_flags {
+my $flags = {};
+
+for my $hash (@_) {
+   for my $flag_name (keys %$hash) {
+   my $flag = $hash->{$flag_name};
+   my $old_flag = $flags->{$flag_name};
+
+   $flag->{op} //= "";
+   $flag->{reason} //= "unknown origin";
+
+   if ($old_flag) {
+   my $value_changed = (defined($flag->{value}) != defined($old_flag->{value})) ||
+   (defined($flag->{value}) && $flag->{value} ne $old_flag->{value});
+
+   if ($old_flag->{op} eq $flag->{op} && !$value_changed) {
+   $flags->{$flag_name}->{reason} .= " & $flag->{reason}";
+   next;
+   }
+
+   my $old = print_cpuflag_hash($flag_name, $flags->{$flag_name});
+   my $new = print_cpuflag_hash($flag_name, $flag);
+   warn "warning: CPU flag/setting $new overwrites $old\n";
+   }
+
+   $flags->{$flag_name} = $flag;
+   }
+}
+
+my $flag_str = '';
+# sort for command line stability
+for my $flag_name (sort keys %$flags) {
+   $flag_str .= ',';
+   $flag_str .= $flags->{$flag_name}->{op};
+   $flag_str .= $flag_name;
+   $flag_str .= "=$flags->{$flag_name}->{value}"
+   if $flags->{$flag_name}->{value};
+}
+
+return $flag_str;
+}
+
+sub print_cpuflag_hash {
+my ($flag_name, $flag) = @_;
+my $formatted = "'$flag->{op}$flag_name";
+$formatted .= "=$flag->{value}" if defined($flag->{value});
+$formatted .= "'";
+$formatted .= " ($flag->{reason})" if defined($flag->{reason});
+return $formatted;
+}
+
 # Calculate QEMU's '-cpu' argument from a given VM configuration
 sub get_cpu_options {
 my ($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough) = @_;
-- 
2.26.0
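A minimal standalone sketch of the priority semantics documented in resolve_cpu_flags above: later flag hashes win over earlier ones, and the result is sorted for command-line stability. This is a simplified stand-in, not the qemu-server implementation — it omits the override warnings and reason merging:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Later hashes override earlier ones; output is a sorted "-cpu"-style
# flag string (",<op><name>[=<value>]" per flag).
sub merge_flags {
    my %flags;
    for my $hash (@_) {
        $flags{$_} = $hash->{$_} for keys %$hash;
    }

    my $str = '';
    for my $name (sort keys %flags) {   # sort for command line stability
        $str .= ',' . ($flags{$name}{op} // '') . $name;
        $str .= "=$flags{$name}{value}" if $flags{$name}{value};
    }
    return $str;
}

my $custom = { aes => { op => '+' }, pcid => { op => '+' } };
my $vm     = { aes => { op => '-' } };   # VM settings have higher priority
print merge_flags($custom, $vm), "\n";   # ,-aes,+pcid
```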




[pve-devel] [PATCH v10 qemu-server 6/6] cfg2cmd: add test cases for custom CPU models

2020-04-07 Thread Stefan Reiter
Requires a mock CPU-model config, which is given as a raw string to also
test parsing capabilities. Also tests defaulting behaviour.

Signed-off-by: Stefan Reiter 
Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 
---

Changes in v10:
* Added test for new phys-bits format

(Since it's just a copy of the previous custom model test and nothing else
changed I left the R-by/T-by tags on)

 test/cfg2cmd/custom-cpu-model-defaults.conf   |  8 ++
 .../custom-cpu-model-defaults.conf.cmd| 24 +
 .../custom-cpu-model-host-phys-bits.conf  |  8 ++
 .../custom-cpu-model-host-phys-bits.conf.cmd  | 27 +++
 test/cfg2cmd/custom-cpu-model.conf|  8 ++
 test/cfg2cmd/custom-cpu-model.conf.cmd| 27 +++
 test/run_config2command_tests.pl  | 23 
 7 files changed, 125 insertions(+)
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd

diff --git a/test/cfg2cmd/custom-cpu-model-defaults.conf b/test/cfg2cmd/custom-cpu-model-defaults.conf
new file mode 100644
index 000..cdef285
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-defaults.conf
@@ -0,0 +1,8 @@
+# TEST: Check if custom CPU models are resolving defaults correctly
+cores: 3
+cpu: custom-alldefault
+name: customcpu-defaults
+numa: 0
+ostype: l26
+smbios1: uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fce
+sockets: 1
diff --git a/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd b/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
new file mode 100644
index 000..ca8fcb0
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
@@ -0,0 +1,24 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name customcpu-defaults \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server,nowait' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fce' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
+  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc unix:/var/run/qemu-server/8006.vnc,password \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 512 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+  -machine 'type=pc+pve0'
diff --git a/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
new file mode 100644
index 000..a770d93
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
@@ -0,0 +1,8 @@
+# TEST: Check if custom CPU models are resolved correctly
+cores: 3
+cpu: custom-qemu64,phys-bits=host
+name: customcpu
+numa: 0
+ostype: win10
+smbios1: uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fcf
+sockets: 1
diff --git a/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
new file mode 100644
index 000..d8fa254
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
@@ -0,0 +1,27 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name customcpu \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server,nowait' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fcf' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
+  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+  -vnc unix:/var/run/qemu-server/8006.vnc,password \
+  -no-hpet \
+  -cpu 'athlon,+aes,+avx,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=testvend,hv_vpindex,+kvm_pv_eoi,-kvm_pv_unhalt,vendor=AuthenticAMD,host-phys-bits=true' \
+  -m 512 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus

[pve-devel] [PATCH v10 qemu-server 1/6] Include "-cpu" parameter with live-migration

2020-04-07 Thread Stefan Reiter
This is required to support custom CPU models, since the
"cpu-models.conf" file is not versioned, and can be changed while a VM
using a custom model is running. Changing the file in such a state can
lead to a different "-cpu" argument on the receiving side.

This patch fixes this by passing the entire "-cpu" option (extracted
from /proc/.../cmdline) as a "qm start" parameter. Note that this is
only done if the VM to migrate is using a custom model (which we can
check just fine, since the .conf *is* versioned with pending
changes), thus not breaking any live-migration directionality.
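As a sketch of what "extracted from /proc/.../cmdline" amounts to: /proc/&lt;pid&gt;/cmdline is a NUL-separated argv, and the entry following a literal '-cpu' is the exact argument QEMU was started with. The stand-in below scans an argv list instead of reading /proc, and is only an illustration of the idea behind the get_cpu_from_running_vm helper, not its actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Return the value following a literal '-cpu' entry in an argv list,
# or undef if no such entry exists. In the real helper the argv would
# come from split(/\0/, ...) on /proc/<pid>/cmdline.
sub cpu_arg_from_argv {
    my (@argv) = @_;
    for my $i (0 .. $#argv - 1) {
        return $argv[$i + 1] if $argv[$i] eq '-cpu';
    }
    return undef;
}

my @argv = ('/usr/bin/kvm', '-id', '101', '-cpu', 'kvm64,enforce,+sep');
print cpu_arg_from_argv(@argv), "\n";   # kvm64,enforce,+sep
```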

Signed-off-by: Stefan Reiter 
---

Changes in v10:
* Introduce and use PVE::QemuServer::CPUConfig::get_cpu_from_running_vm helper

 PVE/API2/Qemu.pm|  7 +++
 PVE/QemuMigrate.pm  | 15 +++
 PVE/QemuServer.pm   | 13 ++---
 PVE/QemuServer/CPUConfig.pm | 14 ++
 4 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6fad972..9e453d2 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2024,6 +2024,11 @@ __PACKAGE__->register_method({
optional => 1,
},
machine => get_standard_option('pve-qemu-machine'),
+   'force-cpu' => {
+   description => "Override QEMU's -cpu argument with the given string.",
+   type => 'string',
+   optional => 1,
+   },
targetstorage => {
description => "Target storage for the migration. (Can be '1' to use the same storage id as on the source node.)",
type => 'string',
@@ -2052,6 +2057,7 @@ __PACKAGE__->register_method({
my $timeout = extract_param($param, 'timeout');
 
my $machine = extract_param($param, 'machine');
+   my $force_cpu = extract_param($param, 'force-cpu');
 
my $get_root_param = sub {
my $value = extract_param($param, $_[0]);
@@ -2129,6 +2135,7 @@ __PACKAGE__->register_method({
skiplock => $skiplock,
forcemachine => $machine,
timeout => $timeout,
+   forcecpu => $force_cpu,
};
 
PVE::QemuServer::vm_start($storecfg, $vmid, $params, $migrate_opts);
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e954eca..c12b2b5 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -17,6 +17,7 @@ use PVE::ReplicationState;
 use PVE::Storage;
 use PVE::Tools;
 
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers qw(min_version);
 use PVE::QemuServer::Machine;
@@ -227,7 +228,17 @@ sub prepare {
 
$self->{forcemachine} = PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
 
+   # To support custom CPU types, we keep QEMU's "-cpu" parameter intact.
+   # Since the parameter itself contains no reference to a custom model,
+   # this makes migration independent of changes to "cpu-models.conf".
+   if ($conf->{cpu}) {
+   my $cpuconf = PVE::QemuServer::CPUConfig::parse_cpu_conf_basic($conf->{cpu});
+   if ($cpuconf && PVE::QemuServer::CPUConfig::is_custom_model($cpuconf->{cputype})) {
+   $self->{forcecpu} = PVE::QemuServer::CPUConfig::get_cpu_from_running_vm($pid);
+   }
+   }
 }
+
 my $loc_res = PVE::QemuServer::check_local_resources($conf, 1);
 if (scalar @$loc_res) {
if ($self->{running} || !$self->{opts}->{force}) {
@@ -664,6 +675,10 @@ sub phase2 {
push @$cmd, '--machine', $self->{forcemachine};
 }
 
+if ($self->{forcecpu}) {
+   push @$cmd, '--force-cpu', $self->{forcecpu};
+}
+
 if ($self->{online_local_volumes}) {
push @$cmd, '--targetstorage', ($self->{opts}->{targetstorage} // '1');
 }
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 510a995..a0f8429 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2911,7 +2911,7 @@ sub query_understood_cpu_flags {
 }
 
 sub config_to_command {
-my ($storecfg, $vmid, $conf, $defaults, $forcemachine) = @_;
+my ($storecfg, $vmid, $conf, $defaults, $forcemachine, $forcecpu) = @_;
 
 my $cmd = [];
 my $globalFlags = [];
@@ -3317,7 +3317,12 @@ sub config_to_command {
push @$rtcFlags, 'base=localtime';
 }
 
-push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
+if ($forcecpu) {
+   push @$cmd, '-cpu', $forcecpu;
+} else {
+   push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off,
+   $machine_version, $winversion, $gpu_passthrough);
+}
 
 PVE::QemuServer::Memory::config($conf, $vmid, $sockets, $cores, $defaults, $hotplug_features, $cmd);
 
@@ -4800,6 +4805,7 @@ sub vm_start {

[pve-devel] [PATCH v10 0/6] Add basics for custom CPU models

2020-04-07 Thread Stefan Reiter
Based on the RFC and following on- and off-list discussion about custom CPU
models [0].

In essence, this revised patch allows a user to specify custom CPU models in
/etc/pve/cpu-models.conf (section-config style [1]), where VMs using that CPU
model inherit details from the definition. This removes any fragile
"auto-magical" CPU flag detection, while still giving the user a way to create
VMs with the best possible subset of CPU features maintaining live-migration
compatibility.

Includes the infrastructure for broadcasting supported CPU flags for each
cluster-node via the key-value store - this is not necessary for the
custom-cpu feature in particular, but I think could prove useful for
implementing the GUI part (e.g. show the user which flags are supported on which
nodes).

v9 -> v10:
* rebase on master
* add helper for running QEMU CPU flag reading
* improved 'phys-bits' format and according test case

v8 -> v9:
* rebase remaining patches on master

v7 -> v8:
* rebase on master, fix tests now using +pve0
* fixed nits noted by Thomas
* moved live-migration/snapshot patches forward to avoid temporary breakage
* fix CPU hotplug with custom CPUs
* guard mkdir with check_cfs_is_mounted and also run before write
* fix rebase-error in cfg2cmd tests (getaddrinfo_all)
* dropped applied patches


< see [2] for older history >


[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-July/038268.html
[1]: e.g.:
cpu-model: custom-cpu-name
host-phys-bits 1
flags +aes;+avx;+avx2
reported-model kvm64
[2] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041278.html


qemu-server: Stefan Reiter (6):
  Include "-cpu" parameter with live-migration
  Include "-cpu" parameter with snapshots/suspend
  Add helpers to better structure CPU option handling
  Rework get_cpu_options and allow custom CPU models
  fix #2318: allow phys-bits CPU setting
  cfg2cmd: add test cases for custom CPU models

 PVE/API2/Qemu.pm  |   8 +
 PVE/QemuConfig.pm |  26 +-
 PVE/QemuMigrate.pm|  15 +
 PVE/QemuServer.pm |  42 ++-
 PVE/QemuServer/CPUConfig.pm   | 310 +++---
 test/cfg2cmd/custom-cpu-model-defaults.conf   |   8 +
 .../custom-cpu-model-defaults.conf.cmd|  24 ++
 .../custom-cpu-model-host-phys-bits.conf  |   8 +
 .../custom-cpu-model-host-phys-bits.conf.cmd  |  27 ++
 test/cfg2cmd/custom-cpu-model.conf|   8 +
 test/cfg2cmd/custom-cpu-model.conf.cmd|  27 ++
 test/cfg2cmd/efi-raw-old.conf.cmd |   2 +-
 test/cfg2cmd/efi-raw.conf.cmd |   2 +-
 test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
 test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
 test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
 test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/simple1.conf.cmd |   2 +-
 test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
 test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
 test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
 test/cfg2cmd/spice-win.conf.cmd   |   2 +-
 test/run_config2command_tests.pl  |  23 ++
 25 files changed, 471 insertions(+), 81 deletions(-)
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-host-phys-bits.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd

-- 
2.26.0



Re: [pve-devel] [PATCH v9 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-04-07 Thread Stefan Reiter

On 07/04/2020 09:04, Fabian Ebner wrote:

On 06.04.20 15:01, Stefan Reiter wrote:

On 06/04/2020 14:31, Fabian Ebner wrote:

On 26.03.20 16:13, Stefan Reiter wrote:

Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.

No functional change with non-custom CPU types intended.



The changes apply to all CPU types, but that's not a bad thing. If 
one takes a snapshot, restarts the VM with a different CPU type and 
does a rollback, it'll use the CPU at the time of the snapshot even 
if no custom models are involved.




Was that a thing before? It should have worked that way anyhow, since 
the 'cpu' setting was stored in the snapshot's section in the config.




Ah, you're right. As long as the defaults for the standard models don't 
change, vm_start will re-generate the same -cpu option when no custom 
model is involved.




The default models can also be versioned with 'runningmachine', so for 
non-custom cpus this was always safe.




Re: [pve-devel] [PATCH v9 qemu-server 5/6] fix #2318: allow phys-bits and host-phys-bits CPU settings

2020-04-06 Thread Stefan Reiter

On 06/04/2020 14:34, Fabian Ebner wrote:

On 26.03.20 16:13, Stefan Reiter wrote:

Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).

QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.

Signed-off-by: Stefan Reiter 
---
  PVE/QemuServer/CPUConfig.pm | 24 
  1 file changed, 24 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index fa09f4b..2b2529d 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -149,6 +149,19 @@ my $cpu_fmt = {
  pattern => qr/$cpu_flag_any_re(;$cpu_flag_any_re)*/,
  optional => 1,
  },
+    'phys-bits' => {
+    type => 'integer',
+    minimum => 8,
+    maximum => 64,
+    description => "The physical memory address bits that are reported to the guest OS. Should be smaller or equal to the host's.",
+    optional => 1,
+    },
+    'host-phys-bits' => {
+    type => 'boolean',
+    default => 0,
+    description => "Whether to report the host's physical memory address bits. Overrides 'phys-bits' when set.",


Is it better to die when both are set, so that a user gets informed 
about the clashing options?




Actually, how do you feel about making this a single 'string' option 
'phys-bits' and allowing /^(host|\d{1,2})$/ ? Would mean that min/max(?) 
int values had to be checked manually, but I think it would make it 
clearer for the user.



+    optional => 1,
+    },
  };
  # $cpu_fmt describes both the CPU config passed as part of a VM config, as well

@@ -472,6 +485,17 @@ sub get_cpu_options {
  $cpu_str .= resolve_cpu_flags($pve_flags, $hv_flags, $custom_cputype_flags,
    $vm_flags, $pve_forced_flags);
+    my $phys_bits = '';
+    foreach my $conf ($custom_cpu, $cpu) {
+    next if !defined($conf);
+    if ($conf->{'host-phys-bits'}) {
+    $phys_bits = ",host-phys-bits=true";
+    } elsif ($conf->{'phys-bits'}) {
+    $phys_bits = ",phys-bits=$conf->{'phys-bits'}";
+    }
+    }
+    $cpu_str .= $phys_bits;
+
  return ('-cpu', $cpu_str);
  }





Re: [pve-devel] [PATCH v9 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-04-06 Thread Stefan Reiter

On 06/04/2020 14:31, Fabian Ebner wrote:

On 26.03.20 16:13, Stefan Reiter wrote:

Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.

No functional change with non-custom CPU types intended.



The changes apply to all CPU types, but that's not a bad thing. If one 
takes a snapshot, restarts the VM with a different CPU type and does a 
rollback, it'll use the CPU at the time of the snapshot even if no 
custom models are involved.




Was that a thing before? It should have worked that way anyhow, since 
the 'cpu' setting was stored in the snapshot's section in the config.



Signed-off-by: Stefan Reiter 
---
  PVE/API2/Qemu.pm  |  1 +
  PVE/QemuConfig.pm | 34 ++
  PVE/QemuServer.pm | 28 
  3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 80fd7ea..2c7e998 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1129,6 +1129,7 @@ my $update_vm_api  = sub {
  push @delete, 'lock'; # this is the real deal to write it out
  }
  push @delete, 'runningmachine' if $conf->{runningmachine};
+    push @delete, 'runningcpu' if $conf->{runningcpu};
  }
  PVE::QemuConfig->check_lock($conf) if !$skiplock;
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 8d03774..386223d 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
  use PVE::AbstractConfig;
  use PVE::INotify;
+use PVE::QemuServer::CPUConfig;
  use PVE::QemuServer::Drive;
  use PVE::QemuServer::Helpers;
  use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -155,15 +156,26 @@ sub __snapshot_save_vmstate {
  my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, 
$vmid, 'raw', $name, $size*1024);
  my $runningmachine = 
PVE::QemuServer::Machine::get_current_qemu_machine($vmid);

-    if ($suspend) {
-    $conf->{vmstate} = $statefile;
-    $conf->{runningmachine} = $runningmachine;
-    } else {
-    my $snap = $conf->{snapshots}->{$snapname};
-    $snap->{vmstate} = $statefile;
-    $snap->{runningmachine} = $runningmachine;
+    # get current QEMU -cpu argument
+    my $runningcpu;
+    if (my $pid = PVE::QemuServer::check_running($vmid)) {
+    my $cmdline = PVE::QemuServer::Helpers::parse_cmdline($pid);
+    die "could not read commandline of running machine\n"
+    if !$cmdline->{cpu}->{value};
+
+    # sanitize and untaint value
+    $cmdline->{cpu}->{value} =~ $PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re;
+    $runningcpu = $1;
+    }
+
+    if (!$suspend) {
+    $conf = $conf->{snapshots}->{$snapname};
  }
+    $conf->{vmstate} = $statefile;
+    $conf->{runningmachine} = $runningmachine;
+    $conf->{runningcpu} = $runningcpu;
+
  return $statefile;
  }
@@ -309,6 +321,11 @@ sub __snapshot_rollback_hook {
  if (defined($conf->{runningmachine})) {
  $data->{forcemachine} = $conf->{runningmachine};
  delete $conf->{runningmachine};
+
+    # runningcpu is newer than runningmachine, so assume it only exists
+    # here, if at all
+    $data->{forcecpu} = delete $conf->{runningcpu}
+    if defined($conf->{runningcpu});
  } else {
  # Note: old code did not store 'machine', so we try to be smart
  # and guess the snapshot was generated with kvm 1.4 
(pc-i440fx-1.4).

@@ -361,7 +378,8 @@ sub __snapshot_rollback_vm_start {
  my ($class, $vmid, $vmstate, $data) = @_;
  my $storecfg = PVE::Storage::config();
-    PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, 
undef, undef, $data->{forcemachine});
+    PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, 
undef, undef,
+    $data->{forcemachine}, undef, undef, undef, undef, undef, undef, 
$data->{forcecpu});

  }
  sub __snapshot_rollback_get_unused {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 98c2a9a..70a5234 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -567,8 +567,15 @@ EODESCR
  optional => 1,
  }),
  runningmachine => get_standard_option('pve-qemu-machine', {
-    description => "Specifies the Qemu machine type of the running 
vm. This is used internally for snapshots.",
+    description => "Specifies the QEMU machine type of the running 
vm. This is used internally for snapshots.",

  }),
+    runningcpu => {
+    description => "Specifies the QEMU '-cpu' parameter of the 
running vm. This is used internally for snapshots.",

+    optional => 1,
+    type => 'string',
+    pattern => $PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re,

Re: [pve-devel] [PATCH v9 0/6] Add basics for custom CPU models

2020-04-06 Thread Stefan Reiter

On 06/04/2020 14:42, Fabian Ebner wrote:

Hi,

reviewed this series today. It's my first time looking at all the CPU 
related code, so I'm not sure I understood all the details.


The patches work as described (patch #2 even does more ;) and I do have 
2 minor comments/suggestions on the individual patches. And needs to be 
rebased on current master again. With that:


Reviewed-By: Fabian Ebner 
Tested-By: Fabian Ebner 



Thanks for the review! I'll rebase it (I assume the main thing will be 
the recent vm_start refactoring) and send a v10.


@Thomas: In cases like this, should I add the R-by/T-by tags to the v10 
commits myself (for those I don't majorly change anyhow) or will you 
still do that when applying?



On 26.03.20 16:13, Stefan Reiter wrote:
Based on the RFC and following on- and off-list discussion about custom CPU
models [0].

In essence, this revised patch allows a user to specify custom CPU models in
/etc/pve/cpu-models.conf (section-config style [1]), where VMs using that CPU
model inherit details from the definition. This removes any fragile
"auto-magical" CPU flag detection, while still giving the user a way to create
VMs with the best possible subset of CPU features maintaining live-migration
compatibility.

Includes the infrastructure for broadcasting supported CPU flags for each
cluster-node via the key-value store - this is not necessary for the
custom-cpu feature in particular, but I think could prove useful for
implementing the GUI part (e.g. show the user which flags are supported on
which nodes).

v8 -> v9:
* rebase remaining patches on master

v7 -> v8:
* rebase on master, fix tests now using +pve0
* fixed nits noted by Thomas
* moved live-migration/snapshot patches forward to avoid temporary breakage
* fix CPU hotplug with custom CPUs
* guard mkdir with check_cfs_is_mounted and also run before write
* fix rebase-error in cfg2cmd tests (getaddrinfo_all)
* dropped applied patches


< see [2] for older history >


[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-July/038268.html
[1]: e.g.:
cpu-model: custom-cpu-name
 host-phys-bits 1
 flags +aes;+avx;+avx2
 reported-model kvm64
[2] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041278.html


qemu-server: Stefan Reiter (6):
   Include "-cpu" parameter with live-migration
   Include "-cpu" parameter with snapshots/suspend
   Add helpers to better structure CPU option handling
   Rework get_cpu_options and allow custom CPU models
   fix #2318: allow phys-bits and host-phys-bits CPU settings
   cfg2cmd: add test cases for custom CPU models

  PVE/API2/Qemu.pm  |   9 +-
  PVE/QemuConfig.pm |  34 ++-
  PVE/QemuMigrate.pm    |  21 ++
  PVE/QemuServer.pm |  41 ++-
  PVE/QemuServer/CPUConfig.pm   | 281 ++
  test/cfg2cmd/custom-cpu-model-defaults.conf   |   8 +
  .../custom-cpu-model-defaults.conf.cmd    |  24 ++
  test/cfg2cmd/custom-cpu-model.conf    |   8 +
  test/cfg2cmd/custom-cpu-model.conf.cmd    |  27 ++
  test/cfg2cmd/i440fx-win10-hostpci.conf.cmd    |   2 +-
  test/cfg2cmd/minimal-defaults.conf.cmd    |   2 +-
  test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
  .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
  test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
  test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
  test/cfg2cmd/simple1.conf.cmd |   2 +-
  test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
  test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
  test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
  test/cfg2cmd/spice-win.conf.cmd   |   2 +-
  test/run_config2command_tests.pl  |  23 ++
  21 files changed, 416 insertions(+), 82 deletions(-)
  create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
  create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
  create mode 100644 test/cfg2cmd/custom-cpu-model.conf
  create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd





[pve-devel] [PATCH qemu-server] migration: fix downtime limit auto-increase

2020-04-02 Thread Stefan Reiter
485449e37 ("qmp: use migrate-set-parameters in favor of deprecated options")
changed the initial "migrate_set_downtime" QMP call to the more recent
"migrate-set-parameters", but forgot to do so for the auto-increase code
further below.

Since the units of the two calls don't match, this would have caused the
auto-increase to increase the limit to absurd levels as soon as it kicked
in (ms treated as s).

Update the second call to the new version as well, and while at it remove
the unnecessary "defined()" check for $migrate_downtime, which is always
initialized from the defaults anyway.
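
For scale, a quick sketch of the unit mix-up (illustrative numbers, not taken
from the patch itself):

```python
# 'downtime-limit' (migrate-set-parameters) is in milliseconds, while the
# deprecated migrate_set_downtime command took seconds.
migrate_downtime = 0.1 * 1000  # 100 ms, after the initial ms conversion
migrate_downtime *= 2          # one auto-increase step doubles it: 200

# The buggy auto-increase path sent this ms-based value through the old
# migrate_set_downtime command, where it was read as *seconds* - so an
# intended 200 ms downtime limit became a 200 s one.
assert migrate_downtime == 200
```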

Signed-off-by: Stefan Reiter 
---

I am not sure when this code is used at all... I tried slowing the network down
to 56k-modem levels, and migration still completed without triggering it (both
with default migrate_downtime and migrate_downtime set to 1ms).

 PVE/QemuMigrate.pm | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 579be0e..e954eca 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -860,12 +860,10 @@ sub phase2 {
 
 my $migrate_downtime = $defaults->{migrate_downtime};
 $migrate_downtime = $conf->{migrate_downtime} if 
defined($conf->{migrate_downtime});
-if (defined($migrate_downtime)) {
-   # migrate-set-parameters expects limit in ms
-   $migrate_downtime *= 1000;
-   $self->log('info', "migration downtime limit: $migrate_downtime ms");
-   $qemu_migrate_params->{'downtime-limit'} = int($migrate_downtime);
-}
+# migrate-set-parameters expects limit in ms
+$migrate_downtime *= 1000;
+$self->log('info', "migration downtime limit: $migrate_downtime ms");
+$qemu_migrate_params->{'downtime-limit'} = int($migrate_downtime);
 
 # set cachesize to 10% of the total memory
 my $memory =  $conf->{memory} || $defaults->{memory};
@@ -988,11 +986,13 @@ sub phase2 {
if ($downtimecounter > 5) {
$downtimecounter = 0;
$migrate_downtime *= 2;
-   $self->log('info', "migrate_set_downtime: 
$migrate_downtime");
+   $self->log('info', "auto-increased downtime to continue 
migration: $migrate_downtime ms");
eval {
-   mon_cmd($vmid, "migrate_set_downtime", value => 
int($migrate_downtime*100)/100);
+   # migrate-set-parameters does not touch values not
+   # specified, so this only changes downtime-limit
+   mon_cmd($vmid, "migrate-set-parameters", 
'downtime-limit' => int($migrate_downtime));
};
-   $self->log('info', "migrate_set_downtime error: $@") if $@;
+   $self->log('info', "migrate-set-parameters error: $@") if 
$@;
}
 
}
-- 
2.26.0




[pve-devel] [PATCH v9 qemu-server 4/6] Rework get_cpu_options and allow custom CPU models

2020-03-26 Thread Stefan Reiter
If a cputype is custom (check via prefix), try to load options from the
custom CPU model config, and set values accordingly.

While at it, extract currently hardcoded values into a separate sub and add
reasoning.

Since the new flag resolving outputs flags in sorted order for
consistency, adapt the test cases to not break. Only the order is
changed, not which flags are present.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm   | 191 +-
 test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
 test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
 test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
 test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/simple1.conf.cmd |   2 +-
 test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
 test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
 test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
 test/cfg2cmd/spice-win.conf.cmd   |   2 +-
 12 files changed, 152 insertions(+), 61 deletions(-)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 10375bc..fa09f4b 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -384,99 +384,190 @@ sub print_cpuflag_hash {
 return $formatted;
 }
 
+sub parse_cpuflag_list {
+my ($re, $reason, $flaglist) = @_;
+
+my $res = {};
+return $res if !$flaglist;
+
+foreach my $flag (split(";", $flaglist)) {
+   if ($flag =~ $re) {
+   $res->{$2} = { op => $1, reason => $reason };
+   }
+}
+
+return $res;
+}
+
 # Calculate QEMU's '-cpu' argument from a given VM configuration
 sub get_cpu_options {
 my ($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, 
$gpu_passthrough) = @_;
 
-my $cpuFlags = [];
-my $ostype = $conf->{ostype};
-
-my $cpu = $kvm ? "kvm64" : "qemu64";
+my $cputype = $kvm ? "kvm64" : "qemu64";
 if ($arch eq 'aarch64') {
-   $cpu = 'cortex-a57';
-}
-my $hv_vendor_id;
-if (my $cputype = $conf->{cpu}) {
-   my $cpuconf = PVE::JSONSchema::parse_property_string($cpu_fmt, $cputype)
-   or die "Cannot parse cpu description: $cputype\n";
-   $cpu = $cpuconf->{cputype};
-   $kvm_off = 1 if $cpuconf->{hidden};
-   $hv_vendor_id = $cpuconf->{'hv-vendor-id'};
-
-   if (defined(my $flags = $cpuconf->{flags})) {
-   push @$cpuFlags, split(";", $flags);
-   }
+   $cputype = 'cortex-a57';
 }
 
-push @$cpuFlags , '+lahf_lm' if $cpu eq 'kvm64' && $arch eq 'x86_64';
+my $cpu = {};
+my $custom_cpu;
+my $hv_vendor_id;
+if (my $cpu_prop_str = $conf->{cpu}) {
+   $cpu = parse_vm_cpu_conf($cpu_prop_str)
+   or die "Cannot parse cpu description: $cpu_prop_str\n";
 
-push @$cpuFlags , '-x2apic' if $ostype && $ostype eq 'solaris';
+   $cputype = $cpu->{cputype};
 
-push @$cpuFlags, '+sep' if $cpu eq 'kvm64' || $cpu eq 'kvm32';
+   if (is_custom_model($cputype)) {
+   $custom_cpu = get_custom_model($cputype);
 
-push @$cpuFlags, '-rdtscp' if $cpu =~ m/^Opteron/;
+   $cputype = $custom_cpu->{'reported-model'} //
+   $cpu_fmt->{'reported-model'}->{default};
+   $kvm_off = $custom_cpu->{hidden}
+   if defined($custom_cpu->{hidden});
+   $hv_vendor_id = $custom_cpu->{'hv-vendor-id'};
+   }
 
-if (min_version($machine_version, 2, 3) && $arch eq 'x86_64') {
+   # VM-specific settings override custom CPU config
+   $kvm_off = $cpu->{hidden}
+   if defined($cpu->{hidden});
+   $hv_vendor_id = $cpu->{'hv-vendor-id'}
+   if defined($cpu->{'hv-vendor-id'});
+}
 
-   push @$cpuFlags , '+kvm_pv_unhalt' if $kvm;
-   push @$cpuFlags , '+kvm_pv_eoi' if $kvm;
+my $pve_flags = get_pve_cpu_flags($conf, $kvm, $cputype, $arch,
+ $machine_version);
+
+my $hv_flags = get_hyperv_enlightenments($winversion, $machine_version,
+   $conf->{bios}, $gpu_passthrough, $hv_vendor_id) if $kvm;
+
+my $custom_cputype_flags = parse_cpuflag_list($cpu_flag_any_re,
+   "set by custom CPU model", $custom_cpu->{flags});
+
+my $vm_flags = parse_cpuflag_list($cpu_flag_supported_re,
+   "manually set for VM", $cpu->{flags});
+
+my $pve_forced_flags = {};
+$pve_forced_flags->{'enforce'} = {
+   reason => "error if requested CPU settings not available",
+} if $cputype ne 'host' && $kvm && $arch eq 'x86_64';
+$pve_forced_flags->{'kvm'} = {
+   value => "off",
+   reason => "hide KVM virtualization from guest",

[pve-devel] [PATCH v9 qemu-server 5/6] fix #2318: allow phys-bits and host-phys-bits CPU settings

2020-03-26 Thread Stefan Reiter
Can be specified for a particular VM or via a custom CPU model (VM takes
precedence).

QEMU's default limit only allows up to 1TB of RAM per VM. Increasing the
physical address bits available to a VM can fix this.
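
The 1 TB ceiling follows from a default of 40 guest physical address bits
(treat 40 as an assumption here - the exact default depends on the QEMU
version and CPU model):

```python
# 40 physical address bits can address 2^40 bytes = 1 TiB of guest RAM.
default_phys_bits = 40
assert 2 ** default_phys_bits == 1 << 40            # exactly 1 TiB
print((2 ** default_phys_bits) // 2 ** 30, "GiB")   # -> 1024 GiB

# Bumping phys-bits to 46 (a common host value) raises the ceiling 64-fold:
assert 2 ** 46 // 2 ** 40 == 64                     # 64 TiB addressable
```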

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 24 
 1 file changed, 24 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index fa09f4b..2b2529d 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -149,6 +149,19 @@ my $cpu_fmt = {
pattern => qr/$cpu_flag_any_re(;$cpu_flag_any_re)*/,
optional => 1,
 },
+'phys-bits' => {
+   type => 'integer',
+   minimum => 8,
+   maximum => 64,
+   description => "The physical memory address bits that are reported to 
the guest OS. Should be smaller or equal to the host's.",
+   optional => 1,
+},
+'host-phys-bits' => {
+   type => 'boolean',
+   default => 0,
+   description => "Whether to report the host's physical memory address 
bits. Overrides 'phys-bits' when set.",
+   optional => 1,
+},
 };
 
 # $cpu_fmt describes both the CPU config passed as part of a VM config, as well
@@ -472,6 +485,17 @@ sub get_cpu_options {
 $cpu_str .= resolve_cpu_flags($pve_flags, $hv_flags, $custom_cputype_flags,
  $vm_flags, $pve_forced_flags);
 
+my $phys_bits = '';
+foreach my $conf ($custom_cpu, $cpu) {
+   next if !defined($conf);
+   if ($conf->{'host-phys-bits'}) {
+   $phys_bits = ",host-phys-bits=true";
+   } elsif ($conf->{'phys-bits'}) {
+   $phys_bits = ",phys-bits=$conf->{'phys-bits'}";
+   }
+}
+$cpu_str .= $phys_bits;
+
 return ('-cpu', $cpu_str);
 }
 
-- 
2.26.0




[pve-devel] [PATCH v9 qemu-server 2/6] Include "-cpu" parameter with snapshots/suspend

2020-03-26 Thread Stefan Reiter
Just like with live-migration, custom CPU models might change after a
snapshot has been taken (or a VM suspended), which would lead to a
different QEMU invocation on rollback/resume.

Save the "-cpu" argument as a new "runningcpu" option into the VM conf
akin to "runningmachine" and use as override during rollback/resume.

No functional change with non-custom CPU types intended.

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm  |  1 +
 PVE/QemuConfig.pm | 34 ++
 PVE/QemuServer.pm | 28 
 3 files changed, 47 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 80fd7ea..2c7e998 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1129,6 +1129,7 @@ my $update_vm_api  = sub {
push @delete, 'lock'; # this is the real deal to write it out
}
push @delete, 'runningmachine' if $conf->{runningmachine};
+   push @delete, 'runningcpu' if $conf->{runningcpu};
}
 
PVE::QemuConfig->check_lock($conf) if !$skiplock;
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 8d03774..386223d 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::AbstractConfig;
 use PVE::INotify;
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -155,15 +156,26 @@ sub __snapshot_save_vmstate {
 my $statefile = PVE::Storage::vdisk_alloc($storecfg, $target, $vmid, 
'raw', $name, $size*1024);
 my $runningmachine = 
PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
 
-if ($suspend) {
-   $conf->{vmstate} = $statefile;
-   $conf->{runningmachine} = $runningmachine;
-} else {
-   my $snap = $conf->{snapshots}->{$snapname};
-   $snap->{vmstate} = $statefile;
-   $snap->{runningmachine} = $runningmachine;
+# get current QEMU -cpu argument
+my $runningcpu;
+if (my $pid = PVE::QemuServer::check_running($vmid)) {
+   my $cmdline = PVE::QemuServer::Helpers::parse_cmdline($pid);
+   die "could not read commandline of running machine\n"
+   if !$cmdline->{cpu}->{value};
+
+   # sanitize and untaint value
+   $cmdline->{cpu}->{value} =~ 
$PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re;
+   $runningcpu = $1;
+}
+
+if (!$suspend) {
+   $conf = $conf->{snapshots}->{$snapname};
 }
 
+$conf->{vmstate} = $statefile;
+$conf->{runningmachine} = $runningmachine;
+$conf->{runningcpu} = $runningcpu;
+
 return $statefile;
 }
 
@@ -309,6 +321,11 @@ sub __snapshot_rollback_hook {
if (defined($conf->{runningmachine})) {
$data->{forcemachine} = $conf->{runningmachine};
delete $conf->{runningmachine};
+
+   # runningcpu is newer than runningmachine, so assume it only exists
+   # here, if at all
+   $data->{forcecpu} = delete $conf->{runningcpu}
+   if defined($conf->{runningcpu});
} else {
# Note: old code did not store 'machine', so we try to be smart
# and guess the snapshot was generated with kvm 1.4 (pc-i440fx-1.4).
@@ -361,7 +378,8 @@ sub __snapshot_rollback_vm_start {
 my ($class, $vmid, $vmstate, $data) = @_;
 
 my $storecfg = PVE::Storage::config();
-PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef, 
$data->{forcemachine});
+PVE::QemuServer::vm_start($storecfg, $vmid, $vmstate, undef, undef, undef,
+   $data->{forcemachine}, undef, undef, undef, undef, undef, undef, 
$data->{forcecpu});
 }
 
 sub __snapshot_rollback_get_unused {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 98c2a9a..70a5234 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -567,8 +567,15 @@ EODESCR
optional => 1,
 }),
 runningmachine => get_standard_option('pve-qemu-machine', {
-   description => "Specifies the Qemu machine type of the running vm. This 
is used internally for snapshots.",
+   description => "Specifies the QEMU machine type of the running vm. This 
is used internally for snapshots.",
 }),
+runningcpu => {
+   description => "Specifies the QEMU '-cpu' parameter of the running vm. 
This is used internally for snapshots.",
+   optional => 1,
+   type => 'string',
+   pattern => $PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re,
+   format_description => 'QEMU -cpu parameter'
+},
 machine => get_standard_option('pve-qemu-machine'),
 arch => {
description => "Virtual processor architecture. Defaults to the host.",
@@ -1948,7 +1955,8 @@ sub json_config_properties {
 my $prop = shift;
 
 foreach my $opt (keys %$confdesc) {
-   next if $

[pve-devel] [PATCH v9 0/6] Add basics for custom CPU models

2020-03-26 Thread Stefan Reiter
Based on the RFC and following on- and off-list discussion about custom CPU
models [0].

In essence, this revised patch allows a user to specify custom CPU models in
/etc/pve/cpu-models.conf (section-config style [1]), where VMs using that CPU
model inherit details from the definition. This removes any fragile
"auto-magical" CPU flag detection, while still giving the user a way to create
VMs with the best possible subset of CPU features maintaining live-migration
compatibility.

Includes the infrastructure for broadcasting supported CPU flags for each
cluster-node via the key-value store - this is not necessary for the
custom-cpu feature in particular, but I think could prove useful for
implementing the GUI part (e.g. show the user which flags are supported on which
nodes).

v8 -> v9:
* rebase remaining patches on master

v7 -> v8:
* rebase on master, fix tests now using +pve0
* fixed nits noted by Thomas
* moved live-migration/snapshot patches forward to avoid temporary breakage
* fix CPU hotplug with custom CPUs
* guard mkdir with check_cfs_is_mounted and also run before write
* fix rebase-error in cfg2cmd tests (getaddrinfo_all)
* dropped applied patches


< see [2] for older history >


[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-July/038268.html
[1]: e.g.:
cpu-model: custom-cpu-name
host-phys-bits 1
flags +aes;+avx;+avx2
reported-model kvm64
[2] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041278.html


qemu-server: Stefan Reiter (6):
  Include "-cpu" parameter with live-migration
  Include "-cpu" parameter with snapshots/suspend
  Add helpers to better structure CPU option handling
  Rework get_cpu_options and allow custom CPU models
  fix #2318: allow phys-bits and host-phys-bits CPU settings
  cfg2cmd: add test cases for custom CPU models

 PVE/API2/Qemu.pm  |   9 +-
 PVE/QemuConfig.pm |  34 ++-
 PVE/QemuMigrate.pm|  21 ++
 PVE/QemuServer.pm |  41 ++-
 PVE/QemuServer/CPUConfig.pm   | 281 ++
 test/cfg2cmd/custom-cpu-model-defaults.conf   |   8 +
 .../custom-cpu-model-defaults.conf.cmd|  24 ++
 test/cfg2cmd/custom-cpu-model.conf|   8 +
 test/cfg2cmd/custom-cpu-model.conf.cmd|  27 ++
 test/cfg2cmd/i440fx-win10-hostpci.conf.cmd|   2 +-
 test/cfg2cmd/minimal-defaults.conf.cmd|   2 +-
 test/cfg2cmd/pinned-version.conf.cmd  |   2 +-
 .../q35-linux-hostpci-multifunction.conf.cmd  |   2 +-
 test/cfg2cmd/q35-linux-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/q35-win10-hostpci.conf.cmd   |   2 +-
 test/cfg2cmd/simple1.conf.cmd |   2 +-
 test/cfg2cmd/spice-enhancments.conf.cmd   |   2 +-
 test/cfg2cmd/spice-linux-4.1.conf.cmd |   2 +-
 test/cfg2cmd/spice-usb3.conf.cmd  |   2 +-
 test/cfg2cmd/spice-win.conf.cmd   |   2 +-
 test/run_config2command_tests.pl  |  23 ++
 21 files changed, 416 insertions(+), 82 deletions(-)
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd

-- 
2.26.0



[pve-devel] [PATCH v9 qemu-server 1/6] Include "-cpu" parameter with live-migration

2020-03-26 Thread Stefan Reiter
This is required to support custom CPU models, since the
"cpu-models.conf" file is not versioned, and can be changed while a VM
using a custom model is running. Changing the file in such a state can
lead to a different "-cpu" argument on the receiving side.

This patch fixes this by passing the entire "-cpu" option (extracted
from /proc/.../cmdline) as a "qm start" parameter. Note that this is
only done if the VM to migrate is using a custom model (which we can
check just fine, since the .conf *is* versioned with pending
changes), thus not breaking any live-migration directionality.
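
The extraction step can be sketched like this (a naive illustration of
reading a NUL-separated /proc/<pid>/cmdline, not the actual
PVE::QemuServer::Helpers::parse_cmdline implementation):

```python
# /proc/<pid>/cmdline is NUL-separated; pair each "-flag" with the argument
# that follows it (naive: assumes option values never start with "-").
def parse_cmdline_bytes(raw: bytes):
    argv = [a for a in raw.decode().split("\0") if a]
    args = {}
    i = 1  # skip argv[0], the binary itself
    while i < len(argv):
        if argv[i].startswith("-"):
            key = argv[i].lstrip("-")
            value = None
            if i + 1 < len(argv) and not argv[i + 1].startswith("-"):
                value = argv[i + 1]
                i += 1
            args[key] = {"value": value}
        i += 1
    return args

raw = b"\0".join([b"kvm", b"-id", b"8006", b"-cpu", b"kvm64,+sep", b"-daemonize"]) + b"\0"
print(parse_cmdline_bytes(raw)["cpu"]["value"])  # -> kvm64,+sep
```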

Signed-off-by: Stefan Reiter 
---
 PVE/API2/Qemu.pm|  8 +++-
 PVE/QemuMigrate.pm  | 21 +
 PVE/QemuServer.pm   | 13 +
 PVE/QemuServer/CPUConfig.pm |  2 ++
 4 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ef8a7c3..80fd7ea 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2024,6 +2024,11 @@ __PACKAGE__->register_method({
optional => 1,
},
machine => get_standard_option('pve-qemu-machine'),
+   'force-cpu' => {
+   description => "Override QEMU's -cpu argument with the given 
string.",
+   type => 'string',
+   optional => 1,
+   },
targetstorage => {
description => "Target storage for the migration. (Can be '1' 
to use the same storage id as on the source node.)",
type => 'string',
@@ -2052,6 +2057,7 @@ __PACKAGE__->register_method({
my $timeout = extract_param($param, 'timeout');
 
my $machine = extract_param($param, 'machine');
+   my $force_cpu = extract_param($param, 'force-cpu');
 
my $get_root_param = sub {
my $value = extract_param($param, $_[0]);
@@ -2113,7 +2119,7 @@ __PACKAGE__->register_method({
 
PVE::QemuServer::vm_start($storecfg, $vmid, $stateuri, 
$skiplock, $migratedfrom, undef, $machine,
  $spice_ticket, $migration_network, 
$migration_type, $targetstorage, $timeout,
- $nbd_protocol_version);
+ $nbd_protocol_version, $force_cpu);
return;
};
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 9cff64d..0604e33 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -17,6 +17,7 @@ use PVE::ReplicationState;
 use PVE::Storage;
 use PVE::Tools;
 
+use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers qw(min_version);
 use PVE::QemuServer::Machine;
@@ -227,7 +228,23 @@ sub prepare {
 
$self->{forcemachine} = 
PVE::QemuServer::Machine::qemu_machine_pxe($vmid, $conf);
 
+   # To support custom CPU types, we keep QEMU's "-cpu" parameter intact.
+   # Since the parameter itself contains no reference to a custom model,
+   # this makes migration independent of changes to "cpu-models.conf".
+   if ($conf->{cpu}) {
+   my $cpuconf = 
PVE::QemuServer::CPUConfig::parse_cpu_conf_basic($conf->{cpu});
+   if ($cpuconf && 
PVE::QemuServer::CPUConfig::is_custom_model($cpuconf->{cputype})) {
+   my $cmdline = PVE::QemuServer::Helpers::parse_cmdline($pid);
+   die "could not read commandline of running machine\n"
+   if !$cmdline->{cpu}->{value};
+
+   # sanitize and untaint value
+   $cmdline->{cpu}->{value} =~ 
$PVE::QemuServer::CPUConfig::qemu_cmdline_cpu_re;
+   $self->{forcecpu} = $1;
+   }
+   }
 }
+
 my $loc_res = PVE::QemuServer::check_local_resources($conf, 1);
 if (scalar @$loc_res) {
if ($self->{running} || !$self->{opts}->{force}) {
@@ -645,6 +662,10 @@ sub phase2 {
push @$cmd, '--machine', $self->{forcemachine};
 }
 
+if ($self->{forcecpu}) {
+   push @$cmd, '--force-cpu', $self->{forcecpu};
+}
+
 if ($self->{online_local_volumes}) {
push @$cmd, '--targetstorage', ($self->{opts}->{targetstorage} // '1');
 }
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a3e3269..98c2a9a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2911,7 +2911,7 @@ sub query_understood_cpu_flags {
 }
 
 sub config_to_command {
-my ($storecfg, $vmid, $conf, $defaults, $forcemachine) = @_;
+my ($storecfg, $vmid, $conf, $defaults, $forcemachine, $force_cpu) = @_;
 
 my $cmd = [];
 my $globalFlags = [];
@@ -3311,7 +3311,12 @@ sub config_to_command {
push @$rtcFlags, 'base=localtime';
 }
 
-push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, 
$machine_version, $winversion, $gpu_passthrough);
+if ($forc

[pve-devel] [PATCH v9 qemu-server 6/6] cfg2cmd: add test cases for custom CPU models

2020-03-26 Thread Stefan Reiter
Requires a mock CPU-model config, which is given as a raw string to also
test parsing capabilities. Also tests defaulting behaviour.

Signed-off-by: Stefan Reiter 
---
 test/cfg2cmd/custom-cpu-model-defaults.conf   |  8 ++
 .../custom-cpu-model-defaults.conf.cmd| 24 +
 test/cfg2cmd/custom-cpu-model.conf|  8 ++
 test/cfg2cmd/custom-cpu-model.conf.cmd| 27 +++
 test/run_config2command_tests.pl  | 23 
 5 files changed, 90 insertions(+)
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf
 create mode 100644 test/cfg2cmd/custom-cpu-model.conf.cmd

diff --git a/test/cfg2cmd/custom-cpu-model-defaults.conf 
b/test/cfg2cmd/custom-cpu-model-defaults.conf
new file mode 100644
index 000..cdef285
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-defaults.conf
@@ -0,0 +1,8 @@
+# TEST: Check if custom CPU models are resolving defaults correctly
+cores: 3
+cpu: custom-alldefault
+name: customcpu-defaults
+numa: 0
+ostype: l26
+smbios1: uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fce
+sockets: 1
diff --git a/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd 
b/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
new file mode 100644
index 000..ca8fcb0
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model-defaults.conf.cmd
@@ -0,0 +1,24 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name customcpu-defaults \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server,nowait' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fce' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
+  -boot 
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
 \
+  -vnc unix:/var/run/qemu-server/8006.vnc,password \
+  -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+  -m 512 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+  -machine 'type=pc+pve0'
diff --git a/test/cfg2cmd/custom-cpu-model.conf 
b/test/cfg2cmd/custom-cpu-model.conf
new file mode 100644
index 000..cc7c60e
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model.conf
@@ -0,0 +1,8 @@
+# TEST: Check if custom CPU models are resolved correctly
+cores: 3
+cpu: custom-qemu64,flags=+virt-ssbd,host-phys-bits=true
+name: customcpu
+numa: 0
+ostype: win10
+smbios1: uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fcf
+sockets: 1
diff --git a/test/cfg2cmd/custom-cpu-model.conf.cmd 
b/test/cfg2cmd/custom-cpu-model.conf.cmd
new file mode 100644
index 000..60b4584
--- /dev/null
+++ b/test/cfg2cmd/custom-cpu-model.conf.cmd
@@ -0,0 +1,27 @@
+/usr/bin/kvm \
+  -id 8006 \
+  -name customcpu \
+  -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server,nowait' \
+  -mon 'chardev=qmp,mode=control' \
+  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
+  -mon 'chardev=qmp-event,mode=control' \
+  -pidfile /var/run/qemu-server/8006.pid \
+  -daemonize \
+  -smbios 'type=1,uuid=2ea3f676-dfa5-11e9-ae82-c721e12f3fcf' \
+  -smp '3,sockets=1,cores=3,maxcpus=3' \
+  -nodefaults \
+  -boot 
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
 \
+  -vnc unix:/var/run/qemu-server/8006.vnc,password \
+  -no-hpet \
+  -cpu 'athlon,+aes,+avx,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vendor_id=testvend,hv_vpindex,+kvm_pv_eoi,-kvm_pv_unhalt,vendor=AuthenticAMD,+virt-ssbd,host-phys-bits=true' \
+  -m 512 \
+  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+  -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
+  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+  -rtc 'driftfix=slew,base=localtime' \
+  -machine 'type=pc+pve0' \
+  -global 'kvm-pit.lost_tick_policy=discard'
diff --git a/test/run_config2command_tests.pl b/test/run_config2command_tests.pl
index c3e5092..3168176 100755
--- a/test/run_config2command_tests.pl
+++ b/test/run_config2command_tests.pl
@@ -17,6 +17,7 @@ use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Monitor;
 use PVE

[pve-devel] [PATCH v9 qemu-server 3/6] Add helpers to better structure CPU option handling

2020-03-26 Thread Stefan Reiter
To avoid hardcoding even more CPU-flag related things for custom CPU
models, introduce a dynamic approach to resolving flags.

resolve_cpu_flags takes a list of hashes (as documented in the
comment) and resolves them to a valid "-cpu" argument without
duplicates. This also helps by providing a reason why specific CPU flags
have been added, and thus allows for useful warning messages should a
flag be overwritten by another.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer/CPUConfig.pm | 64 +
 1 file changed, 64 insertions(+)

diff --git a/PVE/QemuServer/CPUConfig.pm b/PVE/QemuServer/CPUConfig.pm
index 4dcc6dc..10375bc 100644
--- a/PVE/QemuServer/CPUConfig.pm
+++ b/PVE/QemuServer/CPUConfig.pm
@@ -320,6 +320,70 @@ sub print_cpu_device {
    return "$cpu-x86_64-cpu,id=cpu$id,socket-id=$current_socket,core-id=$current_core,thread-id=0";
 }
 
+# Resolves multiple arrays of hashes representing CPU flags with metadata to a
+# single string in QEMU "-cpu" compatible format. Later arrays have higher
+# priority.
+#
+# Hashes take the following format:
+# {
+# aes => {
+# op => "+", # defaults to "" if undefined
+# reason => "to support AES acceleration", # for override warnings
+# value => "" # needed for kvm=off (value: off) etc...
+# },
+# ...
+# }
+sub resolve_cpu_flags {
+my $flags = {};
+
+for my $hash (@_) {
+   for my $flag_name (keys %$hash) {
+   my $flag = $hash->{$flag_name};
+   my $old_flag = $flags->{$flag_name};
+
+   $flag->{op} //= "";
+   $flag->{reason} //= "unknown origin";
+
+   if ($old_flag) {
+	   my $value_changed = (defined($flag->{value}) != defined($old_flag->{value})) ||
+		   (defined($flag->{value}) && $flag->{value} ne $old_flag->{value});
+
+   if ($old_flag->{op} eq $flag->{op} && !$value_changed) {
+   $flags->{$flag_name}->{reason} .= " & $flag->{reason}";
+   next;
+   }
+
+   my $old = print_cpuflag_hash($flag_name, $flags->{$flag_name});
+   my $new = print_cpuflag_hash($flag_name, $flag);
+   warn "warning: CPU flag/setting $new overwrites $old\n";
+   }
+
+   $flags->{$flag_name} = $flag;
+   }
+}
+
+my $flag_str = '';
+# sort for command line stability
+for my $flag_name (sort keys %$flags) {
+   $flag_str .= ',';
+   $flag_str .= $flags->{$flag_name}->{op};
+   $flag_str .= $flag_name;
+   $flag_str .= "=$flags->{$flag_name}->{value}"
+   if $flags->{$flag_name}->{value};
+}
+
+return $flag_str;
+}
+
+sub print_cpuflag_hash {
+my ($flag_name, $flag) = @_;
+my $formatted = "'$flag->{op}$flag_name";
+$formatted .= "=$flag->{value}" if defined($flag->{value});
+$formatted .= "'";
+$formatted .= " ($flag->{reason})" if defined($flag->{reason});
+return $formatted;
+}
+
 # Calculate QEMU's '-cpu' argument from a given VM configuration
 sub get_cpu_options {
my ($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough) = @_;
-- 
2.26.0
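For illustration only (not part of the patch), the resolution rules implemented by resolve_cpu_flags above can be sketched in Python. The names and defaults mirror the Perl code; the overwrite warning is omitted for brevity, and, as in the Perl version, the returned string keeps its leading comma so it can be appended directly to a "-cpu" model name:

```python
def resolve_cpu_flags(*flag_sets):
    """Merge CPU-flag hashes; later sets win, identical entries merge reasons."""
    flags = {}
    for flag_set in flag_sets:
        for name, flag in flag_set.items():
            # apply the same defaults as the Perl code
            flag = {"op": "", "reason": "unknown origin", **flag}
            old = flags.get(name)
            if old and old["op"] == flag["op"] and old.get("value") == flag.get("value"):
                # same flag, same setting: just record the additional reason
                old["reason"] += f" & {flag['reason']}"
                continue
            # differing op/value: later set overwrites (warning omitted here)
            flags[name] = flag
    parts = []
    for name in sorted(flags):  # sort for command line stability
        f = flags[name]
        parts.append(f",{f['op']}{name}" + (f"={f['value']}" if f.get("value") else ""))
    return "".join(parts)

base = {"aes": {"op": "+", "reason": "to support AES acceleration"}}
override = {"kvm": {"value": "off", "reason": "kvm=off requested"},
            "aes": {"op": "-", "reason": "disabled by user"}}
print(resolve_cpu_flags(base, override))  # -> ,-aes,kvm=off
```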


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v3 manager 1/3] gui/cluster: fix translation for cluster join button

2020-03-23 Thread Stefan Reiter
The new version including the cluster name didn't work in some languages,
e.g. German: "Beitreten 'cluster'" vs. the correct "'cluster' beitreten".

Signed-off-by: Stefan Reiter 
---
 www/manager6/dc/ClusterEdit.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index e0fa5d9d..7542104a 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -166,7 +166,7 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
submittxt: function(get) {
let cn = get('info.clusterName');
if (cn) {
-   return `${gettext('Join')} '${cn}'`;
+   return Ext.String.format(gettext('Join {0}'), `'${cn}'`);
}
return gettext('Join');
},
-- 
2.25.2




[pve-devel] [PATCH v3 0/3] Support all 8 corosync3 links in GUI

2020-03-23 Thread Stefan Reiter
v2 -> v3:
* add patch 1 (localization fix)
* implement changes from Dominik's review:
  * use 'let' in new code
  * use references for element lookup
  * some code style nits
  * fix formatting (simpler in general with hbox, and also should work for all
languages now)
  * fix IPv6 address selection

Note: I didn't include any changes to onInputTypeChange as proposed by Dominik
(using bindings instead of set-function calls). I couldn't quite wrap my head
around the ExtJS side of that, so usually I'd like to talk this through in
person, but since Dominik's on vacation and talking face-to-face right now is...
well, not recommended in general, I left it out for now.

This could easily be done in a followup though and wouldn't change the
interface for the user, so I hope that's okay.

RFC -> v2:
* rebased on master
* slight rewording


manager: Stefan Reiter (3):
  gui/cluster: fix translation for cluster join button
  gui/cluster: add CorosyncLinkEdit component to support up to 8 links
  gui/cluster: add structured peerLinks to join info

 www/manager6/Makefile   |   1 +
 www/manager6/dc/Cluster.js  |  13 +-
 www/manager6/dc/ClusterEdit.js  | 194 ++---
 www/manager6/dc/CorosyncLinkEdit.js | 425 
 4 files changed, 534 insertions(+), 99 deletions(-)
 create mode 100644 www/manager6/dc/CorosyncLinkEdit.js

-- 
2.25.2



[pve-devel] [PATCH v3 manager 2/3] gui/cluster: add CorosyncLinkEdit component to support up to 8 links

2020-03-23 Thread Stefan Reiter
CorosyncLinkEdit is a Panel that contains between one and 8
CorosyncLinkSelectors. These can be added or removed with according
buttons.

Values submitted to the API are calculated by each
ProxmoxNetworkSelector itself. This works because ExtJS searches
recursively through all child components for ones with a value to be
submitted, i.e. the CorosyncLinkEdit and CorosyncLinkSelector components
are not part of data submission at all.

Change ClusterEdit.js to use the new component for cluster join and
create. To make space in layout, move 'password' field to the side
(where the network-selector previously was) and use 'hbox' panel for
horizontal layouting to avoid spacing issues with languages where the
fieldLabel doesn't fit on one line.

Signed-off-by: Stefan Reiter 
---
 www/manager6/Makefile   |   1 +
 www/manager6/dc/ClusterEdit.js  | 187 +++--
 www/manager6/dc/CorosyncLinkEdit.js | 411 
 3 files changed, 505 insertions(+), 94 deletions(-)
 create mode 100644 www/manager6/dc/CorosyncLinkEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 41615430..0f2224af 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -224,6 +224,7 @@ JSSRC=  
\
dc/Cluster.js   \
dc/ClusterEdit.js   \
dc/PermissionView.js\
+   dc/CorosyncLinkEdit.js  \
Workspace.js
 
 lint: ${JSSRC}
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index 7542104a..c18b546b 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -25,24 +25,24 @@ Ext.define('PVE.ClusterCreateWindow', {
name: 'clustername'
},
{
-   xtype: 'proxmoxNetworkSelector',
-   fieldLabel: Ext.String.format(gettext('Link {0}'), 0),
-	emptyText: gettext("Optional, defaults to IP resolved by node's hostname"),
-   name: 'link0',
-   autoSelect: false,
-   valueField: 'address',
-   displayField: 'address',
-   skipEmptyText: true
-   }],
-   advancedItems: [{
-   xtype: 'proxmoxNetworkSelector',
-   fieldLabel: Ext.String.format(gettext('Link {0}'), 1),
-   emptyText: gettext("Optional second link for redundancy"),
-   name: 'link1',
-   autoSelect: false,
-   valueField: 'address',
-   displayField: 'address',
-   skipEmptyText: true
+   xtype: 'fieldcontainer',
+   fieldLabel: gettext("Cluster Links"),
+   style: {
+   'padding-top': '5px',
+   },
+   items: [
+   {
+   xtype: 'pveCorosyncLinkEditor',
+   style: {
+   'padding-bottom': '5px',
+   },
+   name: 'links'
+   },
+   {
+   xtype: 'label',
+	   text: gettext("Multiple links are used as failover, lower numbers have higher priority.")
+   }
+   ]
}]
 }
 });
@@ -149,20 +149,10 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
info: {
fp: '',
ip: '',
-   clusterName: '',
-   ring0Needed: false,
-   ring1Possible: false,
-   ring1Needed: false
+   clusterName: ''
}
},
formulas: {
-   ring0EmptyText: function(get) {
-   if (get('info.ring0Needed')) {
-   return gettext("Cannot use default address safely");
-   } else {
-   return gettext("Default: IP resolved by node's hostname");
-   }
-   },
submittxt: function(get) {
let cn = get('info.clusterName');
if (cn) {
@@ -188,9 +178,6 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
change: 'recomputeSerializedInfo',
enable: 'resetField'
},
-   'proxmoxtextfield[name=ring1_addr]': {
-   enable: 'ring1Needed'
-   },
'textfield': {
disable: 'resetField'
}
@@ -198,47 +185,67 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
resetField: function(field) {
field.reset();
},
-   ring1Needed: function(f) {
-   var vm = this.getViewModel();
-   f.allowBlank = !vm.get('info.ring1Needed');
-   },
onInputTypeChange: function(field, assistedInput) {
-   var vm = this.getViewModel();
+   let linkEditor = this.lookup('linkEditor');
+
+   // this also clears all links
+   linkEditor.setAllowNumberEdit(!assistedInput);
+
if (!assist

[pve-devel] [PATCH v3 manager 3/3] gui/cluster: add structured peerLinks to join info

2020-03-23 Thread Stefan Reiter
Instead of the old 'ring_addr' property (which is kept for
compatibility), we also encode the link numbers into the new peerLinks
structure. This allows us to display which IP is assigned to which link
on the cluster in the join dialog, helping a user identify which link
should receive which interface on the new node.

Signed-off-by: Stefan Reiter 
---
 www/manager6/dc/Cluster.js  | 13 +
 www/manager6/dc/ClusterEdit.js  |  9 +++--
 www/manager6/dc/CorosyncLinkEdit.js | 18 --
 3 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index 963da098..3dc1b30c 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -85,14 +85,18 @@ Ext.define('PVE.ClusterAdministration', {
return el.name === data.preferred_node;
});
 
-   var links = [];
-   PVE.Utils.forEachCorosyncLink(nodeinfo,
-   (num, link) => links.push(link));
+   let links = {};
+   let ring_addr = [];
+   PVE.Utils.forEachCorosyncLink(nodeinfo, (num, link) => {
+   links[num] = link;
+   ring_addr.push(link);
+   });
 
vm.set('preferred_node', {
name: data.preferred_node,
addr: nodeinfo.pve_addr,
-   ring_addr: links,
+   peerLinks: links,
+   ring_addr: ring_addr,
fp: nodeinfo.pve_fp
});
},
@@ -116,6 +120,7 @@ Ext.define('PVE.ClusterAdministration', {
joinInfo: {
ipAddress: vm.get('preferred_node.addr'),
fingerprint: vm.get('preferred_node.fp'),
+   peerLinks: vm.get('preferred_node.peerLinks'),
ring_addr: vm.get('preferred_node.ring_addr'),
totem: vm.get('totem')
}
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index c18b546b..7bcb3880 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -223,10 +223,15 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
} else {
let interfaces = joinInfo.totem.interface;
let links = Object.values(interfaces).map(iface => {
+   let linkNumber = iface.linknumber;
+   let peerLink;
+   if (joinInfo.peerLinks) {
+   peerLink = joinInfo.peerLinks[linkNumber];
+   }
return {
-   number: iface.linknumber,
+   number: linkNumber,
value: '',
-   text: '',
+   text: peerLink ? Ext.String.format(gettext("peer's link address: {0}"), peerLink) : '',
allowBlank: false
};
});
diff --git a/www/manager6/dc/CorosyncLinkEdit.js b/www/manager6/dc/CorosyncLinkEdit.js
index e14ee85f..6dc3a143 100644
--- a/www/manager6/dc/CorosyncLinkEdit.js
+++ b/www/manager6/dc/CorosyncLinkEdit.js
@@ -14,7 +14,7 @@ Ext.define('PVE.form.CorosyncLinkEditorController', {
this.addLink();
 },
 
-addLink: function(number, value, allowBlank) {
+addLink: function(number, value, allowBlank, text) {
let me = this;
let view = me.getView();
let vm = view.getViewModel();
@@ -37,6 +37,7 @@ Ext.define('PVE.form.CorosyncLinkEditorController', {
allowBlankNetwork: allowBlank,
initNumber: number,
initNetwork: value,
+   text: text,
 
// needs to be set here, because we need to update the viewmodel
removeBtnHandler: function() {
@@ -130,6 +131,7 @@ Ext.define('PVE.form.CorosyncLinkSelector', {
 // values
 initNumber: 0,
 initNetwork: '',
+text: '',
 
 layout: 'hbox',
 bodyPadding: 5,
@@ -190,6 +192,17 @@ Ext.define('PVE.form.CorosyncLinkSelector', {
parent.removeBtnHandler();
}
}
+   },
+   {
+   xtype: 'label',
+   margin: '-1px 0 0 5px',
+
+   // for muted effect
+   cls: 'x-form-item-label-default',
+
+   cbind: {
+   text: '{text}'
+   }
}
 ],
 
@@ -327,7 +340,8 @@ Ext.define('PVE.form.CorosyncLinkEditor', {
vm.set('linkCount', 0);
 
Ext.Array.each(links, link => {
-   controller.addLink(link['number'], link['value'], 
link['allowBlank']);
+   controller.addLink(link['number'], link['value'],
+   link['allowBlank'], link['text']);
});
 },
 
-- 
2.25.2



Re: [pve-devel] [PATCH v2 manager 1/2] gui/cluster: add CorosyncLinkEdit component to support up to 8 links

2020-03-23 Thread Stefan Reiter

On 19/03/2020 15:06, Dominik Csapak wrote:

overall looks good, and most works, comments inline

the only 'real' issue I found is with IPv6 networks
(they do not get auto-added to the links)



Thanks for the review, I'll send a fixed v3 soon!


On 3/17/20 12:11 PM, Stefan Reiter wrote

-    ring1Possible: false,
  ip: '',
  clusterName: ''
  };
-    var totem = {};
  if (!(joinInfo && joinInfo.totem)) {
  field.valid = false;
+    linkEditor.setLinks([]);
+    linkEditor.setInfoText();
  } else {
-    var ring0Needed = false;
-    if (joinInfo.ring_addr !== undefined) {
-    ring0Needed = joinInfo.ring_addr[0] !== joinInfo.ipAddress;
+    var links = [];
+    var interfaces = joinInfo.totem['interface'];


nit: i'd rather use 'let' whenever possible for new code



I used it here to not mix-and-match var and let in existing functions. 
Is this the intent, or should I also change other 'var's over to let then?



  columnB: [
  {
  xtype: 'textfield',
  fieldLabel: gettext('Fingerprint'),
+    style: 'margin-top: -10px',


same as for 'padding-top'
also, why '-10px' ?

also, the whole layout seems kinda wonky (at least with
the language set to German), the password field
is higher up than the address field...



Hadn't tested in other languages... Probably should have ;)

The reason for the '-10px' was to avoid exactly that issue in the 
English layout, where without it some text fields would mysteriously be 
misaligned. I'll try and figure out how to fix this correctly and for 
all localizations.




Re: [pve-devel] [PATCH pve-qemu] vma_writer: Display more error information

2020-03-19 Thread Stefan Reiter
Realizing that vmastat.errmsg already contains 
"vma_writer_register_stream" and we're not losing any info here:


Reviewed-by: Stefan Reiter 

On 19/03/2020 11:47, Dominic Jäger wrote:

Also print the reason why the function vma_writer_register_stream failed to
help debug errors like in [0].

[0] 
https://forum.proxmox.com/threads/backup-error-vma_writer_register_stream-drive-scsi0-failed-pve-6-1-7.65925/

Signed-off-by: Dominic Jäger 
---
  .../0029-PVE-Backup-add-vma-backup-format-code.patch  | 11 ++-
  1 file changed, 6 insertions(+), 5 deletions(-)

diff --git 
a/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch 
b/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
index c7a4275..0861a3f 100644
--- a/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
+++ b/debian/patches/pve/0029-PVE-Backup-add-vma-backup-format-code.patch
@@ -8,9 +8,9 @@ Subject: [PATCH 29/32] PVE-Backup: add vma backup format code
   Makefile.objs |   1 +
   vma-reader.c  | 857 ++
   vma-writer.c  | 771 +
- vma.c | 837 
+ vma.c | 838 
   vma.h | 150 +
- 6 files changed, 2618 insertions(+), 1 deletion(-)
+ 6 files changed, 2619 insertions(+), 1 deletion(-)
   create mode 100644 vma-reader.c
   create mode 100644 vma-writer.c
   create mode 100644 vma.c
@@ -1694,7 +1694,7 @@ new file mode 100644
  index 00..a82752448a
  --- /dev/null
  +++ b/vma.c
-@@ -0,0 +1,837 @@
+@@ -0,0 +1,838 @@
  +/*
  + * VMA: Virtual Machine Archive
  + *
@@ -2330,6 +2330,7 @@ index 00..a82752448a
  +}
  +
  +int devcount = 0;
++VmaStatus vmastat;
  +while (optind < argc) {
  +const char *path = argv[optind++];
  +char *devname = NULL;
@@ -2347,7 +2348,8 @@ index 00..a82752448a
  +int dev_id = vma_writer_register_stream(vmaw, devname, size);
  +if (dev_id <= 0) {
  +unlink(archivename);
-+g_error("vma_writer_register_stream '%s' failed", devname);
++vma_writer_get_status(vmaw, );
++g_error("error for device '%s': %s", devname, vmastat.errmsg);
  +}
  +
  +BackupJob *job = g_new0(BackupJob, 1);
@@ -2360,7 +2362,6 @@ index 00..a82752448a
  +qemu_coroutine_enter(co);
  +}
  +
-+VmaStatus vmastat;
  +int percent = 0;
  +int last_percent = -1;
  +





Re: [pve-devel] [PATCH v2 qemu/qemu-server 0/4] live-migration with replicated disks

2020-03-19 Thread Stefan Reiter

Seems to work as advertised :) Thus, for the series:

Tested-by: Stefan Reiter 

On 17/03/2020 08:55, Fabian Grünbichler wrote:

I recently picked up and finished some work-in-progress patches for
adding bitmap support to drive-mirror (it got added to backup block jobs
in 4.0, with plenty of fixes in 4.1 and 4.2) and submitted them
upstream.

IMHO this is in a shape now where we can include it, but I'd also be
fine with hiding it behind an extra flag for now that we later switch to
default/always on.

qemu:

Fabian Grünbichler (1):
   add bitmap drive-mirror patches

  ...d-support-for-sync-bitmap-mode-never.patch |  443 +++
  ...-support-for-conditional-and-always-.patch |   83 +
  ...check-for-bitmap-mode-without-bitmap.patch |   33 +
  ...-to-bdrv_dirty_bitmap_merge_internal.patch |   45 +
  ...8-iotests-add-test-for-bitmap-mirror.patch | 3447 +
  .../0039-mirror-move-some-checks-to-qmp.patch |  275 ++
  debian/patches/series |6 +
  7 files changed, 4332 insertions(+)
  create mode 100644 
debian/patches/pve/0034-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
  create mode 100644 
debian/patches/pve/0035-drive-mirror-add-support-for-conditional-and-always-.patch
  create mode 100644 
debian/patches/pve/0036-mirror-add-check-for-bitmap-mode-without-bitmap.patch
  create mode 100644 
debian/patches/pve/0037-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
  create mode 100644 
debian/patches/pve/0038-iotests-add-test-for-bitmap-mirror.patch
  create mode 100644 
debian/patches/pve/0039-mirror-move-some-checks-to-qmp.patch

qemu-server:

Fabian Grünbichler (3):
   drive-mirror: add support for incremental sync
   migrate: add replication info to disk overview
   migrate: add live-migration of replicated disks

  PVE/QemuMigrate.pm | 63 +-
  PVE/QemuServer.pm  | 15 ++-
  2 files changed, 71 insertions(+), 7 deletions(-)





[pve-devel] [PATCH qemu-server] version_guard: early out when major/minor version is high enough

2020-03-19 Thread Stefan Reiter
E.g.: If a feature requires 4.1+pveN and we're using machine version 4.2
we don't need to increase the pve version to N (4.2+pve0 is enough).

We check this by doing a min_version call against a non-existent higher
pve-version for the major/minor tuple we want to test for, which can
only work if the major/minor alone is high enough.

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e022141..a5b000e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2962,6 +2962,8 @@ sub config_to_command {
 my $version_guard = sub {
my ($major, $minor, $pve) = @_;
return 0 if !min_version($machine_version, $major, $minor, $pve);
+   my $max_pve = PVE::QemuServer::Machine::get_pve_version("$major.$minor");
+   return 1 if min_version($machine_version, $major, $minor, $max_pve+1);
$required_pve_version = $pve if $pve && $pve > $required_pve_version;
return 1;
 };
-- 
2.25.1
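A rough Python sketch of the early-out logic above. min_version() mimics PVE's tuple comparison, and the get_pve_version lookup is stubbed with assumed values; neither is the real implementation:

```python
def min_version(machine, major, minor, pve=0):
    # machine is a (major, minor, pve) tuple; plain lexicographic comparison
    return machine >= (major, minor, pve)

# assumed highest existing +pveN revision per major.minor (for illustration)
MAX_PVE = {(4, 1): 2, (4, 2): 0}

def version_guard(machine_version, required_pve, major, minor, pve=0):
    """Return (feature_supported, new_required_pve)."""
    if not min_version(machine_version, major, minor, pve):
        return False, required_pve
    # Probe a non-existent pve revision of (major, minor): if the machine is
    # still at least that new, its plain major.minor already suffices and no
    # pve-version bump is needed.
    max_pve = MAX_PVE[(major, minor)]
    if min_version(machine_version, major, minor, max_pve + 1):
        return True, required_pve
    return True, max(required_pve, pve)

# machine 4.2+pve0, feature needs 4.1+pve2: supported, no bump to pve2
print(version_guard((4, 2, 0), 0, 4, 1, 2))  # -> (True, 0)
# machine 4.1+pve2, same feature: supported, required pve bumped to 2
print(version_guard((4, 1, 2), 0, 4, 1, 2))  # -> (True, 2)
```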




[pve-devel] [PATCH qemu-server 1/2] Disable memory hotplugging for custom NUMA topologies

2020-03-18 Thread Stefan Reiter
This cannot work, since we adjust the 'memory' property of the VM config
on hotplugging, but then the user-defined NUMA topology won't match for
the next start attempt.

Check needs to happen here, since it otherwise fails early with "total
memory for NUMA nodes must be equal to vm static memory".

With this change the error message reflects what is actually happening
and doesn't allow VMs with exactly 1GB of RAM either.

Signed-off-by: Stefan Reiter 
---

Came up after investigating:
https://forum.proxmox.com/threads/task-error-total-memory-for-numa-nodes-must-be-equal-to-vm-static-memory.67251/

Spent way too much time 'fixing' it before realizing that it can never work
like this anyway...

 PVE/QemuServer/Memory.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index d500b3b..ae9598b 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -225,6 +225,12 @@ sub config {
 if ($hotplug_features->{memory}) {
die "NUMA needs to be enabled for memory hotplug\n" if !$conf->{numa};
die "Total memory is bigger than ${MAX_MEM}MB\n" if $memory > $MAX_MEM;
+
+   for (my $i = 0; $i < $MAX_NUMA; $i++) {
+   die "cannot enable memory hotplugging with custom NUMA topology\n"
+   if $conf->{"numa$i"};
+   }
+
my $sockets = 1;
$sockets = $conf->{sockets} if $conf->{sockets};
 
-- 
2.25.1
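A minimal Python sketch of the check added above, with hypothetical key names standing in for the real config hash: once memory hotplug is enabled, any user-defined numaN entry is rejected, because hotplug rewrites 'memory' and would no longer match the custom topology on the next start:

```python
MAX_NUMA = 8  # assumed constant, mirroring $MAX_NUMA

def check_memory_hotplug(conf):
    """Validate a (simplified) VM config dict before start."""
    if not conf.get("hotplug_memory"):
        return
    if not conf.get("numa"):
        raise ValueError("NUMA needs to be enabled for memory hotplug")
    for i in range(MAX_NUMA):
        if f"numa{i}" in conf:
            raise ValueError(
                "cannot enable memory hotplugging with custom NUMA topology")

check_memory_hotplug({"hotplug_memory": True, "numa": 1})  # ok: no custom topology
try:
    check_memory_hotplug({"hotplug_memory": True, "numa": 1, "numa0": "cpus=0-2"})
except ValueError as e:
    print(e)  # cannot enable memory hotplugging with custom NUMA topology
```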




[pve-devel] [PATCH qemu-server 2/2] Die on misaligned memory for hotplugging

2020-03-18 Thread Stefan Reiter
...instead of booting with an invalid config once and then silently
changing the memory size for consequent VM starts.

Signed-off-by: Stefan Reiter 
---

This confused me for a bit; I don't think that's very nice behaviour as it
stands.

 PVE/QemuServer/Memory.pm | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index ae9598b..b7cf5d5 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -321,11 +321,8 @@ sub config {
push @$cmd, "-object" , $mem_object;
push @$cmd, "-device", 
"pc-dimm,id=$name,memdev=mem-$name,node=$numanode";
 
-   #if dimm_memory is not aligned to dimm map
-   if($current_size > $memory) {
-$conf->{memory} = $current_size;
-PVE::QemuConfig->write_config($vmid, $conf);
-   }
+	die "memory size ($memory) must be aligned to $dimm_size for hotplugging\n"
+   if $current_size > $memory;
});
 }
 }
-- 
2.25.1
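The patch above turns a silent config rewrite into an error. A simplified sketch of the idea (the real dimm sizing lives in foreach_dimm and is more involved; a fixed dimm_size and static base are assumed here purely for illustration):

```python
def check_dimm_alignment(memory, static_memory=1024, dimm_size=512):
    """Raise if `memory` (MB) cannot be reached exactly by adding dimms."""
    current_size = static_memory
    while current_size < memory:
        current_size += dimm_size
    if current_size > memory:
        # previously the config was silently rewritten to current_size;
        # the patch makes this a hard error instead
        raise ValueError(
            f"memory size ({memory}) must be aligned to {dimm_size} for hotplugging")

check_dimm_alignment(2048)  # ok: 1024 + 2 * 512
try:
    check_dimm_alignment(2300)  # not reachable in 512 MB steps
except ValueError as e:
    print(e)
```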




Re: [pve-devel] [PATCH v2 qemu-server 4/4] add unix socket support for NBD storage migration

2020-03-18 Thread Stefan Reiter

On 17/03/2020 20:56, Mira Limbeck wrote:

The reuse of the tunnel, which we're opening to communicate with the target
node and to forward the unix socket for the state migration, for the NBD unix
socket requires adding support for an array of sockets to forward, not just a
single one. We also have to change the $sock_addr variable to an array
for the cleanup of the socket file as SSH does not remove the file.

To communicate to the target node the support of unix sockets for NBD
storage migration, we're specifying an nbd_protocol_version which is set
to 1. This version is then passed to the target node via STDIN. Because
we don't want to be dependent on the order of arguments being passed
via STDIN, we also prefix the spice ticket with 'spice_ticket: '. The
target side handles both the spice ticket and the nbd protocol version
with a fallback for old source nodes passing the spice ticket without a
prefix.
All arguments are line based and require a newline in between.

When the NBD server on the target node is started with a unix socket, we
get a different line containing all the information required to start
the drive-mirror. This contains the unix socket path used on the target node
which we require for forwarding and cleanup.
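The line-based STDIN protocol described above could be parsed on the target side roughly as follows. This is an illustrative Python sketch, not the actual qm mtunnel code; the fallback branch covers old source nodes that still send the bare spice ticket without the 'spice_ticket: ' prefix:

```python
def parse_migrate_stdin(text):
    """Parse line-based migration parameters from STDIN text."""
    params = {"nbd_protocol_version": 0, "spice_ticket": None}
    for line in text.splitlines():
        if line.startswith("nbd_protocol_version: "):
            params["nbd_protocol_version"] = int(line.split(": ", 1)[1])
        elif line.startswith("spice_ticket: "):
            params["spice_ticket"] = line.split(": ", 1)[1]
        elif line:
            # fallback: old source node passing the spice ticket unprefixed
            params["spice_ticket"] = line
    return params

print(parse_migrate_stdin("nbd_protocol_version: 1\nspice_ticket: s3cr3t\n"))
# -> {'nbd_protocol_version': 1, 'spice_ticket': 's3cr3t'}
print(parse_migrate_stdin("s3cr3t\n"))
# -> {'nbd_protocol_version': 0, 'spice_ticket': 's3cr3t'}
```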

Signed-off-by: Mira Limbeck 
---
v2:
  - added 'spice_ticket: (...)' to input if $spice_ticket is defined
  - added waiting for all sockets that are used in the tunnel

  PVE/QemuMigrate.pm | 52 +-
  1 file changed, 42 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 10c0ff2..50ebd77 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -142,7 +142,10 @@ sub write_tunnel {
  sub fork_tunnel {
  my ($self, $tunnel_addr) = @_;
  
-my @localtunnelinfo = defined($tunnel_addr) ? ('-L' , $tunnel_addr ) : ();

+my @localtunnelinfo = ();
+foreach my $addr (@$tunnel_addr) {
+   push @localtunnelinfo, '-L', $addr;
+}
  
  my $cmd = [@{$self->{rem_ssh}}, '-o ExitOnForwardFailure=yes', @localtunnelinfo, '/usr/sbin/qm', 'mtunnel' ];
  
@@ -184,7 +187,7 @@ sub finish_tunnel {
  
  if ($tunnel->{sock_addr}) {

# ssh does not clean up on local host
-   my $cmd = ['rm', '-f', $tunnel->{sock_addr}]; #
+   my $cmd = ['rm', '-f', @{$tunnel->{sock_addr}}]; #
PVE::Tools::run_command($cmd);
  
  	# .. and just to be sure check on remote side

@@ -594,10 +597,16 @@ sub phase2 {
  }
  
  my $spice_port;

+my $tunnel_addr = [];
+my $sock_addr = [];
+# version > 0 for unix socket support
+my $nbd_protocol_version = 1;
+my $input = "nbd_protocol_version: $nbd_protocol_version\n";
+$input .= "spice_ticket: $spice_ticket\n" if $spice_ticket;


I know it's already been applied, but this breaks new->old migration 
(with SPICE) for no reason, doesn't it? I know we don't always support 
it, but I don't see why we need the break here..


I.e. if we just send the spice-ticket w/o a prefix an old node will be 
perfectly happy and just send us back the TCP migration URI (and with 
the fallback in place a new node will also be happy), but if we do 
prefix it the remote will pick up the entire "spice_ticket: xxx" as the 
ticket.


  
  # Note: We try to keep $spice_ticket secret (do not pass via command line parameter)

  # instead we pipe it through STDIN
-my $exitcode = PVE::Tools::run_command($cmd, input => $spice_ticket, outfunc 
=> sub {
+my $exitcode = PVE::Tools::run_command($cmd, input => $input, outfunc => 
sub {
my $line = shift;
  
  	if ($line =~ m/^migration listens on tcp:(localhost|[\d\.]+|\[[\d\.:a-fA-F]+\]):(\d+)$/) {

@@ -626,7 +635,18 @@ sub phase2 {
  
  	$self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;

$self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
+	} elsif ($line =~ m!^storage migration listens on nbd:unix:(/run/qemu-server/(\d+)_nbd\.migrate):exportname=(\S+) volume:(\S+)$!) {
+   my $drivestr = $4;
+	    die "Destination UNIX socket's VMID does not match source VMID" if $vmid ne $2;
+   my $nbd_unix_addr = $1;
+   my $nbd_uri = "nbd:unix:$nbd_unix_addr:exportname=$3";
+   my $targetdrive = $3;
+   $targetdrive =~ s/drive-//g;
  
+	$self->{target_drive}->{$targetdrive}->{drivestr} = $drivestr;

+   $self->{target_drive}->{$targetdrive}->{nbd_uri} = $nbd_uri;
+   push @$tunnel_addr, "$nbd_unix_addr:$nbd_unix_addr";
+   push @$sock_addr, $nbd_unix_addr;
} elsif ($line =~ m/^QEMU: (.*)$/) {
$self->log('info', "[$self->{node}] $1\n");
}
@@ -645,20 +665,31 @@ sub phase2 {
  
  	if ($ruri =~ /^unix:/) {

unlink $raddr;
-   $self->{tunnel} = $self->fork_tunnel("$raddr:$raddr");
-   $self->{tunnel}->{sock_addr} = $raddr;
+   push @$tunnel_addr, "$raddr:$raddr";
+   $self->{tunnel} = 

Re: [pve-devel] [PATCH v2 qemu-server 2/4] add NBD server unix socket support in vm_start

2020-03-18 Thread Stefan Reiter

On 17/03/2020 20:56, Mira Limbeck wrote:

As the NBD server spawned by qemu can only listen on a single socket,
we're dependent on a version being passed to vm_start that indicates
which protocol can be used, TCP or Unix, by the source node.

The change in socket type (TCP to Unix) comes with a different URI. For
unix sockets it has the form 'nbd:unix:<socket path>:exportname=<drive>'.

Signed-off-by: Mira Limbeck 
---
v2:
  - added fallback to 0 if nbd_protocol_version is undefined
  - some cleanup regarding variables only used in one case

  PVE/QemuServer.pm | 34 --
  1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 429ec05..2ef8bff 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4707,7 +4707,8 @@ sub vmconfig_update_disk {
 
 sub vm_start {
     my ($storecfg, $vmid, $statefile, $skiplock, $migratedfrom, $paused,
-	$forcemachine, $spice_ticket, $migration_network, $migration_type, $targetstorage, $timeout) = @_;
+	$forcemachine, $spice_ticket, $migration_network, $migration_type,
+	$targetstorage, $timeout, $nbd_protocol_version) = @_;
 
     PVE::QemuConfig->lock_config($vmid, sub {
 	my $conf = PVE::QemuConfig->load_config($vmid, $migratedfrom);
@@ -4985,20 +4986,33 @@ sub vm_start {
 
 	#start nbd server for storage migration
 	if ($targetstorage) {
-	    my $nodename = nodename();
-	    my $localip = $get_migration_ip->($migration_network, $nodename);
-	    my $pfamily = PVE::Tools::get_host_address_family($nodename);
-	    my $storage_migrate_port = PVE::Tools::next_migrate_port($pfamily);
-
-	    mon_cmd($vmid, "nbd-server-start", addr => { type => 'inet', data => { host => "${localip}", port => "${storage_migrate_port}" } } );
+	    $nbd_protocol_version //= 0;
+
+	    my $migrate_storage_uri;
+	    # nbd_protocol_version > 0 for unix socket support
+	    if ($nbd_protocol_version > 0 && $migration_type eq 'secure') {
+		my $socket_path = "/run/qemu-server/$vmid\_nbd.migrate";
+		mon_cmd($vmid, "nbd-server-start", addr => { type => 'unix', data => { path => $socket_path } } );
+		$migrate_storage_uri = "nbd:unix:$socket_path";
+	    } else {
+		my $nodename = nodename();
+		my $localip = $get_migration_ip->($migration_network, $nodename);
+		my $pfamily = PVE::Tools::get_host_address_family($nodename);
+		my $storage_migrate_port = PVE::Tools::next_migrate_port($pfamily);
 
-	    $localip = "[$localip]" if Net::IP::ip_is_ipv6($localip);
+		mon_cmd($vmid, "nbd-server-start", addr => { type => 'inet', data => { host => "${localip}", port => "${storage_migrate_port}" } } );
+		$localip = "[$localip]" if Net::IP::ip_is_ipv6($localip);
+		$migrate_storage_uri = "nbd:${localip}:${storage_migrate_port}";
+	    }
 
 	    foreach my $opt (sort keys %$local_volumes) {
 		my $drivestr = $local_volumes->{$opt};
 		mon_cmd($vmid, "nbd-server-add", device => "drive-$opt", writable => JSON::true );
-		my $migrate_storage_uri = "nbd:${localip}:${storage_migrate_port}:exportname=drive-$opt";
-		print "storage migration listens on $migrate_storage_uri volume:$drivestr\n";
+		if ($nbd_protocol_version > 0 && $migration_type eq 'secure') {
+		    print "storage migration listens on $migrate_storage_uri:exportname=drive-$opt volume:$drivestr\n";
+		} else {
+		    print "storage migration listens on $migrate_storage_uri:exportname=drive-$opt volume:$drivestr\n";
+		}


What's the idea of this if?


}
}
  



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
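
As a quick illustration of the two NBD target URI forms the patch above constructs, here is a hedged sketch (the VMID, drive name, IP and port are made up for the example; they are not from the patch):

```shell
# Hypothetical values for illustration only.
vmid=100
drive=scsi0

# Secure migration: NBD over a unix socket (nbd_protocol_version > 0).
socket_path="/run/qemu-server/${vmid}_nbd.migrate"
echo "nbd:unix:${socket_path}:exportname=drive-${drive}"

# Insecure/legacy migration: NBD over TCP on the migration network.
localip=10.0.0.1
port=60000
echo "nbd:${localip}:${port}:exportname=drive-${drive}"
```

The unix-socket form avoids binding a TCP port on the migration network; the data is then tunneled over SSH by the migration code.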


Re: [pve-devel] [PATCH v2 qemu-server 3/3] migrate: add live-migration of replicated disks

2020-03-17 Thread Stefan Reiter

On 17/03/2020 12:40, Thomas Lamprecht wrote:

On 3/17/20 11:21 AM, Stefan Reiter wrote:

+$local_volumes->{$opt} = $conf->{${opt}};


Does $conf->{${opt}} have too many brackets or is this another arcane perl syntax 
I've yet to discover? (iow. why not just $conf->{$opt} ?)


It's not that arcane, you've surely used it sometimes, for example:

* in strings to separate variable from text: "${foo}bar"
* to treat a hashrefs element as hash: keys %{$foo->{bar}}
* to treat a hashrefs element as array: @{$foo->{bar}}

so yes, it isn't useful here and would be nicer to just be $opt, but
${opt} is the same thing :)



I assumed that's what's going on, but with Perl you never know and I'd 
only seen it in strings - the hashref stuff is a bit different IMO.


But thanks for clearing it up :)



[pve-devel] [PATCH v2 0/2] Support all 8 corosync3 links in GUI

2020-03-17 Thread Stefan Reiter
Since it's been a while, here's a rebased version of the two still missing patches
of my previous RFC[0].

Except for a slight rewording of the message in patch 2 it's exactly the same,
so consider this a RESEND more than anything. I think the series might have
gotten lost a bit after parts were applied?

Anyway, any feedback is still welcome ofc.

[0] https://pve.proxmox.com/pipermail/pve-devel/2020-January/041434.html


manager: Stefan Reiter (2):
  gui/cluster: add CorosyncLinkEdit component to support up to 8 links
  gui/cluster: add structured peerLinks to join info

 www/manager6/Makefile   |   1 +
 www/manager6/dc/Cluster.js  |  13 +-
 www/manager6/dc/ClusterEdit.js  | 154 +-
 www/manager6/dc/CorosyncLinkEdit.js | 420 
 4 files changed, 507 insertions(+), 81 deletions(-)
 create mode 100644 www/manager6/dc/CorosyncLinkEdit.js

-- 
2.25.1



[pve-devel] [PATCH v2 manager 2/2] gui/cluster: add structured peerLinks to join info

2020-03-17 Thread Stefan Reiter
Instead of the old 'ring_addr' property (which is kept for
compatibility), we also encode the link numbers into the new peerLinks
structure. This allows us to display which IP is assigned to which link
on the cluster in the join dialog, helping a user identify which link
should receive which interface on the new node.

Signed-off-by: Stefan Reiter 
---
 www/manager6/dc/Cluster.js  | 13 +
 www/manager6/dc/ClusterEdit.js  | 10 --
 www/manager6/dc/CorosyncLinkEdit.js | 18 --
 3 files changed, 33 insertions(+), 8 deletions(-)

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index 963da098..2407ab15 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -85,14 +85,18 @@ Ext.define('PVE.ClusterAdministration', {
return el.name === data.preferred_node;
});
 
-   var links = [];
-   PVE.Utils.forEachCorosyncLink(nodeinfo,
-   (num, link) => links.push(link));
+   var links = {};
+   var ring_addr = [];
+   PVE.Utils.forEachCorosyncLink(nodeinfo, (num, link) => {
+   links[num] = link;
+   ring_addr.push(link);
+   });
 
vm.set('preferred_node', {
name: data.preferred_node,
addr: nodeinfo.pve_addr,
-   ring_addr: links,
+   peerLinks: links,
+   ring_addr: ring_addr,
fp: nodeinfo.pve_fp
});
},
@@ -116,6 +120,7 @@ Ext.define('PVE.ClusterAdministration', {
joinInfo: {
ipAddress: vm.get('preferred_node.addr'),
fingerprint: vm.get('preferred_node.fp'),
+   peerLinks: vm.get('preferred_node.peerLinks'),
ring_addr: vm.get('preferred_node.ring_addr'),
totem: vm.get('totem')
}
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index 0820e4a0..4dd977c6 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -222,10 +222,16 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
var links = [];
var interfaces = joinInfo.totem['interface'];
Ext.Array.each(Object.keys(interfaces), iface => {
+   var linkNumber = interfaces[iface]['linknumber'];
+   var peerLink;
+   if (joinInfo.peerLinks) {
+   peerLink = joinInfo.peerLinks[linkNumber];
+   }
+
links.push({
-   number: interfaces[iface]['linknumber'],
+   number: linkNumber,
value: '',
-   text: '',
text: peerLink ? Ext.String.format(gettext("peer's link address: {0}"), peerLink) : '',
allowBlank: false
});
});
diff --git a/www/manager6/dc/CorosyncLinkEdit.js b/www/manager6/dc/CorosyncLinkEdit.js
index 6fb35a0e..c1492453 100644
--- a/www/manager6/dc/CorosyncLinkEdit.js
+++ b/www/manager6/dc/CorosyncLinkEdit.js
@@ -14,7 +14,7 @@ Ext.define('PVE.form.CorosyncLinkEditorController', {
this.addLink();
 },
 
-addLink: function(number, value, allowBlank) {
+addLink: function(number, value, allowBlank, text) {
let me = this;
let view = me.getView();
let vm = view.getViewModel();
@@ -37,6 +37,7 @@ Ext.define('PVE.form.CorosyncLinkEditorController', {
allowBlankNetwork: allowBlank,
initNumber: number,
initNetwork: value,
+   text: text,
 
// needs to be set here, because we need to update the viewmodel
removeBtnHandler: function() {
@@ -130,6 +131,7 @@ Ext.define('PVE.form.CorosyncLinkSelector', {
 // values
 initNumber: 0,
 initNetwork: '',
+text: '',
 
 layout: 'hbox',
 bodyPadding: 5,
@@ -190,6 +192,17 @@ Ext.define('PVE.form.CorosyncLinkSelector', {
parent.removeBtnHandler();
}
}
+   },
+   {
+   xtype: 'label',
+   margin: '-1px 0 0 5px',
+
+   // for muted effect
+   cls: 'x-form-item-label-default',
+
+   cbind: {
+   text: '{text}'
+   }
}
 ],
 
@@ -327,7 +340,8 @@ Ext.define('PVE.form.CorosyncLinkEditor', {
vm.set('linkCount', 0);
 
Ext.Array.each(links, link => {
-   controller.addLink(link['number'], link['value'], link['allowBlank']);
+   controller.addLink(link['number'], link['value'],
+   link['allowBlank'], link['text']);
})

[pve-devel] [PATCH v2 manager 1/2] gui/cluster: add CorosyncLinkEdit component to support up to 8 links

2020-03-17 Thread Stefan Reiter
CorosyncLinkEdit is a Panel that contains between one and 8
CorosyncLinkSelectors. These can be added or removed with according
buttons.

Values submitted to the API are calculated by each
ProxmoxNetworkSelector itself. This works because ExtJS searches
recursively through all child components for ones with a value to be
submitted, i.e. the CorosyncLinkEdit and CorosyncLinkSelector components
are not part of data submission at all.

Change ClusterEdit.js to use the new component for cluster join and
create.

Signed-off-by: Stefan Reiter 
---
 www/manager6/Makefile   |   1 +
 www/manager6/dc/ClusterEdit.js  | 148 +-
 www/manager6/dc/CorosyncLinkEdit.js | 406 
 3 files changed, 478 insertions(+), 77 deletions(-)
 create mode 100644 www/manager6/dc/CorosyncLinkEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 41615430..0f2224af 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -224,6 +224,7 @@ JSSRC=	\
dc/Cluster.js   \
dc/ClusterEdit.js   \
dc/PermissionView.js\
+   dc/CorosyncLinkEdit.js  \
Workspace.js
 
 lint: ${JSSRC}
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index e0fa5d9d..0820e4a0 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -25,24 +25,20 @@ Ext.define('PVE.ClusterCreateWindow', {
name: 'clustername'
},
{
-   xtype: 'proxmoxNetworkSelector',
-   fieldLabel: Ext.String.format(gettext('Link {0}'), 0),
-   emptyText: gettext("Optional, defaults to IP resolved by node's hostname"),
-   name: 'link0',
-   autoSelect: false,
-   valueField: 'address',
-   displayField: 'address',
-   skipEmptyText: true
-   }],
-   advancedItems: [{
-   xtype: 'proxmoxNetworkSelector',
-   fieldLabel: Ext.String.format(gettext('Link {0}'), 1),
-   emptyText: gettext("Optional second link for redundancy"),
-   name: 'link1',
-   autoSelect: false,
-   valueField: 'address',
-   displayField: 'address',
-   skipEmptyText: true
+   xtype: 'fieldcontainer',
+   fieldLabel: gettext("Cluster Links"),
+   style: 'padding-top: 5px',
+   items: [
+   {
+   xtype: 'pveCorosyncLinkEditor',
+   style: 'padding-bottom: 5px',
+   name: 'links'
+   },
+   {
+   xtype: 'label',
+   text: gettext("Multiple links are used as failover, lower numbers have higher priority.")
+   }
+   ]
}]
 }
 });
@@ -149,20 +145,10 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
info: {
fp: '',
ip: '',
-   clusterName: '',
-   ring0Needed: false,
-   ring1Possible: false,
-   ring1Needed: false
+   clusterName: ''
}
},
formulas: {
-   ring0EmptyText: function(get) {
-   if (get('info.ring0Needed')) {
-   return gettext("Cannot use default address safely");
-   } else {
-   return gettext("Default: IP resolved by node's hostname");
-   }
-   },
submittxt: function(get) {
let cn = get('info.clusterName');
if (cn) {
@@ -188,9 +174,6 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
change: 'recomputeSerializedInfo',
enable: 'resetField'
},
-   'proxmoxtextfield[name=ring1_addr]': {
-   enable: 'ring1Needed'
-   },
'textfield': {
disable: 'resetField'
}
@@ -198,47 +181,70 @@ Ext.define('PVE.ClusterJoinNodeWindow', {
resetField: function(field) {
field.reset();
},
-   ring1Needed: function(f) {
-   var vm = this.getViewModel();
-   f.allowBlank = !vm.get('info.ring1Needed');
-   },
onInputTypeChange: function(field, assistedInput) {
-   var vm = this.getViewModel();
+   var view = this.getView();
+   var linkEditor = view.down('#linkEditor');
+
+   // this also clears all links
+   linkEditor.setAllowNumberEdit(!assistedInput);
+
if (!assistedInput) {
-   vm.set('info.ring1Possible', true);
+   linkEditor.setInfoText();
+   linkEditor.setDefaultLinks();
}
},
recomputeSerializedInfo: function(field, value) {
var vm = this.getViewModel();
+   var view = th

Re: [pve-devel] [PATCH v2 qemu-server 3/3] migrate: add live-migration of replicated disks

2020-03-17 Thread Stefan Reiter
Casually looked over the patches, looking good to me so far - I'll give 
them some testing later.


Two things inline.

On 17/03/2020 08:55, Fabian Grünbichler wrote:

with incremental drive-mirror and dirty-bitmap tracking.

1.) get replicated disks that are currently referenced by running VM
2.) add a block-dirty-bitmap to each of them
3.) replicate ALL replicated disks
4.) pass bitmaps from 2) to drive-mirror for disks from 1)
5.) skip replicated disks when cleaning up volumes on either source or
target

added error handling is just removing the bitmaps if an error occurs at
any point after 2, except when the handover to the target node has
already happened, since the bitmaps are cleaned up together with the
source VM in that case.

Signed-off-by: Fabian Grünbichler 
---

Notes:
 v1->v2:
 - rebased and fixed some conflicts

 for Qemu 4.2 you probably want the 'block-job-cancel instead of
 blockjob-complete' change recently discussed in response to Mira's UNIX
 migration series as well, otherwise shutting down the source VM will
 sometimes hang/need a forceful termination when using NBD.
 
 tested with single and multiple replicated disks, with and without (other)

 - local, replicated, unreferenced disks
 - local, non-replicated referenced disks
 - local, non-replicated unreferenced disks
 - shared disks
 
 and with switching targetstorage (obviously the replicated disks don't switch

 storage in that case, but the others do as before ;))
 
 I tested writes from within the VM during both 'hot' phases:

 - between adding the bitmap and replication
 - between replication and drive-mirror
 
 one thing to note is that since we add the bitmaps before making the

 replication (snapshots), they will contain some writes that have already 
been
 part of the last replication. AFAICT this should not be a problem, but we
 could also play it safer and do a
 
 - freeze

 - add bitmaps
 - start replication (which will unfreeze after taking the snapshots)
 
 sequence?


I also think the current way is safer, it really shouldn't be an issue 
to copy some stuff twice, and IMO the bigger "risk" here is something in 
our code (or QEMU's snapshot code) going wrong and the VM being stuck 
frozen (permanently or while something is timing out/waiting/whatever).


 
 the bitmap info is stored in the same hash like the other live local disk

 migration stuff, even though it comes from a different source and at a
 different time. I initially had it in a separate hash in $self->{bitmaps}, 
both
 variants are equally unelegant IMHO :P

  PVE/QemuMigrate.pm | 52 --
  PVE/QemuServer.pm  |  7 +++
  2 files changed, 57 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index d874760..7ae8246 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -444,8 +444,30 @@ sub sync_disks {
 
 	my $rep_cfg = PVE::ReplicationConfig->new();
 	if (my $jobcfg = $rep_cfg->find_local_replication_job($vmid, $self->{node})) {
-	    die "can't live migrate VM with replicated volumes\n" if $self->{running};
+	    if ($self->{running}) {
+		my $live_replicatable_volumes = {};
+		PVE::QemuServer::foreach_drive($conf, sub {
+		    my ($ds, $drive) = @_;
+
+		    my $volid = $drive->{file};
+		    $live_replicatable_volumes->{$ds} = $volid
+			if defined($replicatable_volumes->{$volid});
+		});
+		foreach my $drive (keys %$live_replicatable_volumes) {
+		    my $volid = $live_replicatable_volumes->{$drive};
+
+		    my $bitmap = "repl_$drive";
+
+		    # start tracking before replication to get full delta + a few duplicates
+		    $self->log('info', "$drive: start tracking writes using block-dirty-bitmap '$bitmap'");
+		    mon_cmd($vmid, 'block-dirty-bitmap-add', node => "drive-$drive", name => $bitmap);
+
+		    # other info comes from target node in phase 2
+		    $self->{target_drive}->{$drive}->{bitmap} = $bitmap;
+		}
+	    }
 	    $self->log('info', "replicating disk images");
+
 	    my $start_time = time();
 	    my $logfunc = sub { $self->log('info', shift) };
 	    $self->{replicated_volumes} = PVE::Replication::run_replication(
@@ -500,6 +522,8 @@ sub cleanup_remotedisks {
     my ($self) = @_;
 
     foreach my $target_drive (keys %{$self->{target_drive}}) {
+	# don't clean up replicated disks!
+	next if defined($self->{target_drive}->{$target_drive}->{bitmap});
 
 	my $drive = PVE::QemuServer::parse_drive($target_drive, $self->{target_drive}->{$target_drive}->{drivestr});
 	my ($storeid, $volname) = 

[pve-devel] [PATCH v2 docs] Add documentation for virtio-rng

2020-03-10 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

Thanks for reminding me :)

v2 includes Aaron's review as well as feedback I discussed with him off-list

 qm.adoc | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 0b699e2..cd6641f 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -791,6 +791,44 @@ device of the host use device passthrough (see
 xref:qm_pci_passthrough[PCI Passthrough] and
 xref:qm_usb_passthrough[USB Passthrough]).
 
+[[qm_virtio_rng]]
+VirtIO RNG
+~~~~~~~~~~
+
+A RNG (Random Number Generator) is a device providing entropy ('randomness') to
+a system. A virtual hardware-RNG can be used to provide such entropy from the
+host system to a guest VM. This helps to avoid entropy starvation problems in
+the guest (a situation where not enough entropy is available and the system may
+slow down or run into problems), especially during the guests boot process.
+
+To add a VirtIO-based emulated RNG, run the following command:
+
+
+qm set <vmid> -rng0 source=<source>[,max_bytes=X,period=Y]
+
+
+`source` specifies where entropy is read from on the host and has to be one of
+the following:
+
+* `/dev/urandom`: Non-blocking kernel entropy pool (preferred)
+* `/dev/random`: Blocking kernel pool (not recommended, can lead to entropy
+  starvation on the host system)
+* `/dev/hwrng`: To pass through a hardware RNG attached to the host (if multiple
+  are available, the one selected in
+  `/sys/devices/virtual/misc/hw_random/rng_current` will be used)
+
+A limit can be specified via the `max_bytes` and `period` parameters, they are
+read as `max_bytes` per `period` in milliseconds. However, it does not represent
+a linear relationship: 1024B/1000ms would mean that up to 1 KiB of data becomes
+available on a 1 second timer, not that 1 KiB is streamed to the guest over the
+course of one second. Reducing the `period` can thus be used to inject entropy
+into the guest at a faster rate.
+
+By default, the limit is set to 1024 bytes per 1000 ms (1 KiB/s). It is
+recommended to always use a limiter to avoid guests using too many host
+resources. If desired, a value of '0' for `max_bytes` can be used to disable
+all limits.
+
 [[qm_startup_and_shutdown]]
 Automatic Start and Shutdown of Virtual Machines
 
-- 
2.20.1
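
As a side note, the limiter semantics described in the patch are easy to sanity-check with shell arithmetic (the 256 B / 100 ms figures below are hypothetical examples, not values from the patch):

```shell
# Hypothetical limiter: up to max_bytes of entropy per period.
max_bytes=256
period_ms=100

# Sustained rate in bytes per second:
echo $(( max_bytes * 1000 / period_ms ))   # 2560, i.e. 2.5 KiB/s

# The documented default, 1024 B per 1000 ms, works out to:
echo $(( 1024 * 1000 / 1000 ))             # 1024 B/s = 1 KiB/s
```

Inside a Linux guest, the effect of the device can be observed via `/proc/sys/kernel/random/entropy_avail`.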



