[pve-devel] [PATCH docs] firmware: adapt to proxmox packaged fwupd

2024-04-19 Thread Stoiko Ivanov
We ship our own fwupd package, since it needs to handle the
differently named efi_os_dir (proxmox vs debian).
Due to our experience with `udisks2` causing issues on hypervisors,
our package downgrades the udisks2 dependency from a Recommends to a
Suggests.
The downside is that users need to explicitly set their ESP
mountpoint in the config file.

Additionally, a minor stylistic rephrasing ("is an option" vs. "could
be an option").

Tested this today while giving our fwupd package a spin.
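
For reference, the manual steps this implies can be sketched as follows; the snippet writes to an example file instead of the live `/etc/fwupd/daemon.conf`, and the partuuid in the comment is a placeholder:

```shell
# Sketch: prepare the ESP mount point for our fwupd package by hand.
# On a real host, first mount the ESP, e.g.:
#   mount /dev/disk/by-partuuid/<uuid> /boot/efi
ESP_MOUNT=/boot/efi
CONF=daemon.conf.example   # stand-in for /etc/fwupd/daemon.conf

{
    echo '# Override the location used for the EFI system partition (ESP) path.'
    echo "EspLocation=$ESP_MOUNT"
} > "$CONF"

grep '^EspLocation=' "$CONF"
```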

Suggested-by: Fabian Grünbichler 
Signed-off-by: Stoiko Ivanov 
---
 firmware-updates.adoc | 21 +++--
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/firmware-updates.adoc b/firmware-updates.adoc
index 279cf11..8354955 100644
--- a/firmware-updates.adoc
+++ b/firmware-updates.adoc
@@ -43,13 +43,22 @@ ConnectX or
 
https://techdocs.broadcom.com/us/en/storage-and-ethernet-connectivity/ethernet-nic-controllers/bcm957xxx/adapters/software-installation/updating-the-firmware/manually-updating-the-adapter-firmware-on-linuxesx.html['bnxtnvm'/'niccli']
 for Broadcom network cards.
 
-* https://fwupd.org[LVFS] could also be an option if there is a cooperation with
-a https://fwupd.org/lvfs/vendors/[vendor] and
+* https://fwupd.org[LVFS] is also an option if there is a cooperation with
+the https://fwupd.org/lvfs/vendors/[hardware vendor] and
 https://fwupd.org/lvfs/devices/[supported hardware] in use. The technical
-requirement for this is that the system was manufactured after 2014, is booted
-via UEFI and the easiest way is to mount the EFI partition from which you boot
-(`mount /dev/disk/by-partuuid/ /boot/efi`) before installing
-'fwupd'.
+requirement for this is that the system was manufactured after 2014 and is
+booted via UEFI.
+
+Since {pve} ships its own version of the `fwupd` package for Secure Boot
+support with the Proxmox signing key, and this package only suggests rather
+than recommends `udisks2` due to issues observed with it on hypervisors, you
+need to set the mount point of the EFI partition in `/etc/fwupd/daemon.conf`:
+
+.File `/etc/fwupd/daemon.conf`
+
+# Override the location used for the EFI system partition (ESP) path.
+EspLocation=/boot/efi
+
 
 TIP: If the update instructions require a host reboot, make sure that it can be
 done safely. See also xref:ha_manager_node_maintenance[Node Maintenance].
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied-series: [PATCH proxmox-firewall 1/2] firewall: wait for nft process

2024-04-19 Thread Thomas Lamprecht
Am 19/04/2024 um 15:00 schrieb Stefan Hanreich:
> NftClient never waits for the child process to terminate leading to
> defunct leftover processes.
> 
> Signed-off-by: Stefan Hanreich 
> ---
>  proxmox-nftables/src/client.rs | 38 --
>  1 file changed, 9 insertions(+), 29 deletions(-)
> 
>

applied both patches, thanks!





[pve-devel] applied: [PATCH manager] pveversion: fix whitespaces

2024-04-19 Thread Thomas Lamprecht
Am 19/04/2024 um 18:33 schrieb Alexander Zeidler:
> Signed-off-by: Alexander Zeidler 
> ---
>  bin/pveversion | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied, thanks!





[pve-devel] [PATCH manager] pveversion: fix whitespaces

2024-04-19 Thread Alexander Zeidler
Signed-off-by: Alexander Zeidler 
---
 bin/pveversion | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/bin/pveversion b/bin/pveversion
index a98d2c66..d32a6a0c 100755
--- a/bin/pveversion
+++ b/bin/pveversion
@@ -60,7 +60,7 @@ my $opt_verbose;
 if (!GetOptions ('verbose' => \$opt_verbose)) {
 print_usage ();
 exit (-1);
-} 
+}
 
 if (scalar (@ARGV) != 0) {
 print_usage ();
@@ -86,7 +86,7 @@ __END__
 
 =head1 NAME
 
-pveversion - Proxmox  VE version info
+pveversion - Proxmox VE version info
 
 =head1 SYNOPSIS
 
-- 
2.39.2






[pve-devel] [PATCH v2 manager] pve7to8: reword and fix typos in description

2024-04-19 Thread Alexander Zeidler
Signed-off-by: Alexander Zeidler 
---
v2:
* incorporate all feedback from v1

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-April/063252.html


 bin/Makefile | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/bin/Makefile b/bin/Makefile
index 180a91b5..6c5f9b14 100644
--- a/bin/Makefile
+++ b/bin/Makefile
@@ -66,10 +66,10 @@ pve6to7.1:
 
 pve7to8.1:
	printf ".TH PVE7TO8 1\n.SH NAME\npve7to8 \- Proxmox VE upgrade checker script for 7.4+ to current 8.x\n" > $@.tmp
-	printf ".SH DESCRIPTION\nThis tool will help you to detect common pitfalls and misconfguration\
-before, and during the upgrade of a Proxmox VE system\n" >> $@.tmp
-	printf "Any failure must be addressed before the upgrade, and any waring must be addressed, \
-or at least carefully evaluated, if a false-positive is suspected\n" >> $@.tmp
+	printf ".SH DESCRIPTION\nThis tool will help you detect common pitfalls and misconfigurations\
+before and during the upgrade of a Proxmox VE system.\n" >> $@.tmp
+	printf "Any failures or warnings must be addressed prior to the upgrade.\n" >> $@.tmp
+	printf "If you think that a message is a false-positive for your setup, double-check that before proceeding.\n" >> $@.tmp
printf ".SH SYNOPSIS\npve7to8 [--full]\n" >> $@.tmp
mv $@.tmp $@
 
-- 
2.39.2






[pve-devel] [PATCH v3 manager 2/2] report: add recent boot timestamps which may show fencing/crash events

2024-04-19 Thread Alexander Zeidler
Boots that ended in a crash at some later point will show the same
"until" value ("still running" or a timestamp) as the next following
boot(s). The most recent boot of such a sequence of duplicated
"until" lines has not crashed, or at least not yet.

Example output where only the boot from 16:25:41 crashed:
 reboot system boot 6.5.11-7-pve Thu Apr 11 16:31:24 2024 still running
 reboot system boot 6.5.11-7-pve Thu Apr 11 16:29:17 2024 - Thu Apr 11 16:31:12 2024 (00:01)
 reboot system boot 6.5.11-7-pve Thu Apr 11 16:25:41 2024 - Thu Apr 11 16:31:12 2024 (00:05)
 ...

Furthermore, it shows the booted/crashed/problematic kernel version.

`last` is also used since `journalctl --list-boots` can currently take
10 seconds or even longer on some systems, with no option to limit the
number of reported boot lines.
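
The duplicated-"until" heuristic described above can be illustrated with a short awk sketch over mock `last reboot -F -n5` data (the `|` separator and file name exist only for this example):

```shell
# Sample rows mimicking `last` output columns (kernel|boot time|until),
# newest boot first, matching the example from the commit message.
cat > boots.txt <<'EOF'
6.5.11-7-pve|Thu Apr 11 16:31:24 2024|still running
6.5.11-7-pve|Thu Apr 11 16:29:17 2024|Thu Apr 11 16:31:12 2024
6.5.11-7-pve|Thu Apr 11 16:25:41 2024|Thu Apr 11 16:31:12 2024
EOF

# Count each "until" value while reading, then mark repeats in END:
# within a group sharing one "until" value, every boot except the most
# recent (first listed) one ended in a crash.
awk -F'|' '{n[$3]++; line[NR] = $0; u[NR] = $3}
END {for (i = 1; i <= NR; i++)
         print line[i], (n[u[i]] > 1 && seen[u[i]]++ ? "<- crashed" : "")}' boots.txt
```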

Signed-off-by: Alexander Zeidler 
---
v3:
* place the cmd after pveversion due to priority reasons

v2: https://lists.proxmox.com/pipermail/pve-devel/2024-April/063286.html
* move away from dmesg base
* list also recent (5) boot timestamps with additional information

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-March/062342.html


 PVE/Report.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/Report.pm b/PVE/Report.pm
index 73f14744..9a8eaa4b 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -33,6 +33,7 @@ my $init_report_cmds = sub {
'date -R',
'cat /proc/cmdline',
'pveversion --verbose',
+   'last reboot -F -n5',
'cat /etc/hosts',
'pvesubscription get',
'cat /etc/apt/sources.list',
-- 
2.39.2






[pve-devel] [PATCH v3 manager 1/2] report: overhaul `dmidecode` related output

2024-04-19 Thread Alexander Zeidler
While using keywords (-t bios,...) would be possible, depending on the
server this also bloats the report with uninteresting information,
hiding the relevant parts. Therefore the non-grouped, specific type
numbers are used.

Where we only need specific information, not serial numbers etc., we
print the information from /sys/..., which is the same source that
dmidecode uses by default.

New output includes:

 # cat /sys/devices/virtual/dmi/id/sys_vendor
 HP

 # cat /sys/devices/virtual/dmi/id/product_name
 ProLiant·DL380p·Gen8

 # cat /sys/devices/virtual/dmi/id/product_version
 Not·specified

 # cat /sys/devices/virtual/dmi/id/board_vendor
 ASUSTeK·COMPUTER·INC.

 # cat /sys/devices/virtual/dmi/id/board_name
 Z13PP-D32·Series

 # cat /sys/devices/virtual/dmi/id/board_version
 60SB09M0-SB0G11

`-t 0`:
 (like the previous "BIOS" output, but without "BIOS Language" block)

`-t 3`:
 Chassis Information
Manufacturer: HP
Type: Rack Mount Chassis
Boot-up State: Critical
Power Supply State: Critical
Thermal State: Safe
Number Of Power Cords: 2
(...)

and

`-t 32`:
System Boot Information
Status: Firmware-detected hardware failure

which can hint at Proxmox-independent issues, debuggable via IPMI.
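
The `/sys` reads added by the patch can be mirrored with a small loop; the `not available` fallback is an assumption of this sketch (for machines or containers without DMI data), not part of the patch:

```shell
# Print the same DMI identifiers the report now collects, one per line,
# falling back to a marker when the kernel does not expose the file.
for id in sys_vendor product_name product_version \
          board_vendor board_name board_version; do
    printf '%s: %s\n' "$id" \
        "$(cat /sys/devices/virtual/dmi/id/$id 2>/dev/null || echo 'not available')"
done
```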

Signed-off-by: Alexander Zeidler 
---
v3:
* as discussed also off-list, switch to `cat` for board and product
  output to avoid complexity

v2: https://lists.proxmox.com/pipermail/pve-devel/2024-April/063284.html
* reformat and extend board output
* add product output
* adapt dmidecode output

v1: https://lists.proxmox.com/pipermail/pve-devel/2024-March/062350.html



 PVE/Report.pm | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/Report.pm b/PVE/Report.pm
index 1ed91c8e..73f14744 100644
--- a/PVE/Report.pm
+++ b/PVE/Report.pm
@@ -110,7 +110,13 @@ my $init_report_cmds = sub {
hardware => {
order => 70,
cmds => [
-   'dmidecode -t bios',
+   'cat /sys/devices/virtual/dmi/id/sys_vendor',
+   'cat /sys/devices/virtual/dmi/id/product_name',
+   'cat /sys/devices/virtual/dmi/id/product_version',
+   'cat /sys/devices/virtual/dmi/id/board_vendor',
+   'cat /sys/devices/virtual/dmi/id/board_name',
+   'cat /sys/devices/virtual/dmi/id/board_version',
+   'dmidecode -t 0,3,32',
'lspci -nnk',
],
},
-- 
2.39.2





[pve-devel] partially-applied: [PATCH-SERIES v3] fix #4136: implement backup fleecing

2024-04-19 Thread Fiona Ebner
As discussed off-list with Thomas, applied the remaining non-RFC patches
and bumped the packages with the necessary dependency bumps. For the RFC
ones we'll have more time to decide upon the improved cleanup mechanism
(should it even go to the config or rather some file in /var) and the
minus sign in config options.





[pve-devel] [PATCH proxmox v2 01/20] notify: switch to file-based templating system

2024-04-19 Thread Lukas Wagner
Instead of passing the template strings for subject and body when
constructing a notification, we pass only the name of a template.
When rendering the template, the name of the template is used to find
corresponding template files. For PVE, they are located at
/usr/share/proxmox-ve/templates/default. The `default` part is
the 'template namespace', which is a preparation for user-customizable
and/or translatable notifications.

Previously, the same template string was used to render HTML and
plaintext notifications. This was achieved by providing some template
helpers that 'abstract away' HTML/plaintext formatting. However,
in hindsight this turned out to be pretty finicky. Since the
current changes lay the foundations for user-customizable notification
templates, I ripped these abstractions out. Now there are simply two
templates, one for plaintext, one for HTML.
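
Purely as a hypothetical illustration of the lookup scheme (the template name and file-name suffixes below are invented for this example, not taken from the patch), the namespace simply becomes a path component under the product's template directory:

```shell
# Build the candidate paths for one template in the 'default' namespace.
base=/usr/share/proxmox-ve/templates
ns=default
name=test-notification
for kind in subject body-plain body-html; do
    echo "$base/$ns/$name-$kind.hbs"
done
```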

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/examples/render.rs|  63 
 proxmox-notify/src/context/mod.rs|  10 +-
 proxmox-notify/src/context/pbs.rs|  16 ++
 proxmox-notify/src/context/pve.rs|  15 ++
 proxmox-notify/src/context/test.rs   |   9 ++
 proxmox-notify/src/endpoints/gotify.rs   |   9 +-
 proxmox-notify/src/endpoints/sendmail.rs |  13 +-
 proxmox-notify/src/endpoints/smtp.rs |  11 +-
 proxmox-notify/src/lib.rs|  27 ++--
 proxmox-notify/src/matcher.rs|  24 +--
 proxmox-notify/src/renderer/html.rs  |  14 --
 proxmox-notify/src/renderer/mod.rs   | 197 +++
 proxmox-notify/src/renderer/plaintext.rs |  39 -
 13 files changed, 137 insertions(+), 310 deletions(-)
 delete mode 100644 proxmox-notify/examples/render.rs

diff --git a/proxmox-notify/examples/render.rs 
b/proxmox-notify/examples/render.rs
deleted file mode 100644
index d705fd0..000
--- a/proxmox-notify/examples/render.rs
+++ /dev/null
@@ -1,63 +0,0 @@
-use proxmox_notify::renderer::{render_template, TemplateRenderer};
-use proxmox_notify::Error;
-
-use serde_json::json;
-
-const TEMPLATE: &str = r#"
-{{ heading-1 "Backup Report"}}
-A backup job on host {{host}} was run.
-
-{{ heading-2 "Guests"}}
-{{ table table }}
-The total size of all backups is {{human-bytes total-size}}.
-
-The backup job took {{duration total-time}}.
-
-{{ heading-2 "Logs"}}
-{{ verbatim-monospaced logs}}
-
-{{ heading-2 "Objects"}}
-{{ object table }}
-"#;
-
-fn main() -> Result<(), Error> {
-let properties = json!({
-"host": "pali",
-"logs": "100: starting backup\n100: backup failed",
-"total-size": 1024 * 1024 + 2048 * 1024,
-"total-time": 100,
-"table": {
-"schema": {
-"columns": [
-{
-"label": "VMID",
-"id": "vmid"
-},
-{
-"label": "Size",
-"id": "size",
-"renderer": "human-bytes"
-}
-],
-},
-"data" : [
-{
-"vmid": 1001,
-"size": "1048576"
-},
-{
-"vmid": 1002,
-"size": 2048 * 1024,
-}
-]
-}
-});
-
-let output = render_template(TemplateRenderer::Html, TEMPLATE, &properties)?;
-println!("{output}");
-
-let output = render_template(TemplateRenderer::Plaintext, TEMPLATE, &properties)?;
-println!("{output}");
-
-Ok(())
-}
diff --git a/proxmox-notify/src/context/mod.rs 
b/proxmox-notify/src/context/mod.rs
index cc68603..c0a5a13 100644
--- a/proxmox-notify/src/context/mod.rs
+++ b/proxmox-notify/src/context/mod.rs
@@ -1,6 +1,8 @@
 use std::fmt::Debug;
 use std::sync::Mutex;
 
+use crate::Error;
+
 #[cfg(any(feature = "pve-context", feature = "pbs-context"))]
 pub mod common;
 #[cfg(feature = "pbs-context")]
@@ -20,8 +22,14 @@ pub trait Context: Send + Sync + Debug {
 fn default_sendmail_from() -> String;
 /// Proxy configuration for the current node
 fn http_proxy_config() -> Option;
-// Return default config for built-in targets/matchers.
+/// Return default config for built-in targets/matchers.
 fn default_config() -> &'static str;
+/// Lookup a template in a certain (optional) namespace
+fn lookup_template(
+    &self,
+    filename: &str,
+    namespace: Option<&str>,
+) -> Result<Option<String>, Error>;
 }
 
 #[cfg(not(test))]
diff --git a/proxmox-notify/src/context/pbs.rs 
b/proxmox-notify/src/context/pbs.rs
index 5b97af7..70e993f 100644
--- a/proxmox-notify/src/context/pbs.rs
+++ b/proxmox-notify/src/context/pbs.rs
@@ -1,9 +1,11 @@
 use serde::Deserialize;
+use std::path::Path;
 
 use proxmox_schema::{ObjectSchema, Schema, StringSchema};
 use proxmox_section_config::{SectionConfig, SectionConfigPlugin};
 
 use crate::context::{common, Context};
+use 

[pve-devel] [PATCH proxmox v2 02/20] notify: make api methods take config struct ownership

2024-04-19 Thread Lukas Wagner
This saves us from some of the awkward cloning steps when updating.

Suggested-by: Wolfgang Bumiller 
Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/api/gotify.rs   | 46 +-
 proxmox-notify/src/api/matcher.rs  | 38 +++
 proxmox-notify/src/api/mod.rs  | 14 +++---
 proxmox-notify/src/api/sendmail.rs | 40 +++
 proxmox-notify/src/api/smtp.rs | 78 +++---
 5 files changed, 108 insertions(+), 108 deletions(-)

diff --git a/proxmox-notify/src/api/gotify.rs b/proxmox-notify/src/api/gotify.rs
index a93a024..15d94cb 100644
--- a/proxmox-notify/src/api/gotify.rs
+++ b/proxmox-notify/src/api/gotify.rs
@@ -41,8 +41,8 @@ pub fn get_endpoint(config: , name: ) -> 
Result Result<(), HttpError> {
 if endpoint_config.name != private_endpoint_config.name {
 // Programming error by the user of the crate, thus we panic
@@ -51,11 +51,11 @@ pub fn add_endpoint(
 
     super::ensure_unique(config, &endpoint_config.name)?;
 
-    set_private_config_entry(config, private_endpoint_config)?;
+    set_private_config_entry(config, &private_endpoint_config)?;
 
     config
         .config
-        .set_data(&endpoint_config.name, GOTIFY_TYPENAME, endpoint_config)
+        .set_data(&endpoint_config.name, GOTIFY_TYPENAME, &endpoint_config)
 .map_err(|e| {
 http_err!(
 INTERNAL_SERVER_ERROR,
@@ -75,8 +75,8 @@ pub fn add_endpoint(
 pub fn update_endpoint(
 config:  Config,
 name: ,
-endpoint_config_updater: ,
-private_endpoint_config_updater: ,
+endpoint_config_updater: GotifyConfigUpdater,
+private_endpoint_config_updater: GotifyPrivateConfigUpdater,
 delete: Option<&[DeleteableGotifyProperty]>,
 digest: Option<&[u8]>,
 ) -> Result<(), HttpError> {
@@ -93,11 +93,11 @@ pub fn update_endpoint(
 }
 }
 
-if let Some(server) = _config_updater.server {
-endpoint.server = server.into();
+if let Some(server) = endpoint_config_updater.server {
+endpoint.server = server;
 }
 
-if let Some(token) = _endpoint_config_updater.token {
+if let Some(token) = private_endpoint_config_updater.token {
 set_private_config_entry(
 config,
  {
@@ -107,12 +107,12 @@ pub fn update_endpoint(
 )?;
 }
 
-if let Some(comment) = _config_updater.comment {
-endpoint.comment = Some(comment.into());
+if let Some(comment) = endpoint_config_updater.comment {
+endpoint.comment = Some(comment)
 }
 
-if let Some(disable) = _config_updater.disable {
-endpoint.disable = Some(*disable);
+if let Some(disable) = endpoint_config_updater.disable {
+endpoint.disable = Some(disable);
 }
 
 config
@@ -173,13 +173,13 @@ mod tests {
 pub fn add_default_gotify_endpoint(config:  Config) -> Result<(), 
HttpError> {
 add_endpoint(
 config,
- {
+GotifyConfig {
 name: "gotify-endpoint".into(),
 server: "localhost".into(),
 comment: Some("comment".into()),
 ..Default::default()
 },
- {
+GotifyPrivateConfig {
 name: "gotify-endpoint".into(),
 token: "supersecrettoken".into(),
 },
@@ -196,8 +196,8 @@ mod tests {
 assert!(update_endpoint(
  config,
 "test",
-::default(),
-::default(),
+Default::default(),
+Default::default(),
 None,
 None
 )
@@ -214,8 +214,8 @@ mod tests {
 assert!(update_endpoint(
  config,
 "gotify-endpoint",
-::default(),
-::default(),
+Default::default(),
+Default::default(),
 None,
 Some(&[0; 32])
 )
@@ -234,12 +234,12 @@ mod tests {
 update_endpoint(
  config,
 "gotify-endpoint",
- {
+GotifyConfigUpdater {
 server: Some("newhost".into()),
 comment: Some("newcomment".into()),
 ..Default::default()
 },
- {
+GotifyPrivateConfigUpdater {
 token: Some("changedtoken".into()),
 },
 None,
@@ -263,8 +263,8 @@ mod tests {
 update_endpoint(
  config,
 "gotify-endpoint",
-::default(),
-::default(),
+Default::default(),
+Default::default(),
 Some(&[DeleteableGotifyProperty::Comment]),
 None,
 )?;
diff --git a/proxmox-notify/src/api/matcher.rs 
b/proxmox-notify/src/api/matcher.rs
index ca01bc9..f0eabd9 100644
--- a/proxmox-notify/src/api/matcher.rs
+++ b/proxmox-notify/src/api/matcher.rs
@@ -36,7 +36,7 @@ pub fn get_matcher(config: , name: ) -> 
Result 
Result<(), HttpError> {
+pub fn 

[pve-devel] [PATCH proxmox v2 03/20] notify: convert Option<Vec<T>> -> Vec<T> in config structs

2024-04-19 Thread Lukas Wagner
Suggested-by: Wolfgang Bumiller 
Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/api/matcher.rs   | 27 +++
 proxmox-notify/src/api/mod.rs   | 22 +---
 proxmox-notify/src/api/sendmail.rs  | 22 ++--
 proxmox-notify/src/api/smtp.rs  | 24 ++---
 proxmox-notify/src/endpoints/common/mail.rs | 20 ---
 proxmox-notify/src/endpoints/sendmail.rs| 14 
 proxmox-notify/src/endpoints/smtp.rs| 18 +-
 proxmox-notify/src/lib.rs   | 10 +++---
 proxmox-notify/src/matcher.rs   | 38 -
 9 files changed, 96 insertions(+), 99 deletions(-)

diff --git a/proxmox-notify/src/api/matcher.rs 
b/proxmox-notify/src/api/matcher.rs
index f0eabd9..63ec73d 100644
--- a/proxmox-notify/src/api/matcher.rs
+++ b/proxmox-notify/src/api/matcher.rs
@@ -38,10 +38,7 @@ pub fn get_matcher(config: , name: ) -> 
Result 
Result<(), HttpError> {
 super::ensure_unique(config, _config.name)?;
-
-if let Some(targets) = matcher_config.target.as_deref() {
-super::ensure_endpoints_exist(config, targets)?;
-}
+super::ensure_endpoints_exist(config, _config.target)?;
 
 config
 .config
@@ -78,10 +75,10 @@ pub fn update_matcher(
 if let Some(delete) = delete {
 for deleteable_property in delete {
 match deleteable_property {
-DeleteableMatcherProperty::MatchSeverity => 
matcher.match_severity = None,
-DeleteableMatcherProperty::MatchField => matcher.match_field = 
None,
-DeleteableMatcherProperty::MatchCalendar => 
matcher.match_calendar = None,
-DeleteableMatcherProperty::Target => matcher.target = None,
+DeleteableMatcherProperty::MatchSeverity => 
matcher.match_severity.clear(),
+DeleteableMatcherProperty::MatchField => 
matcher.match_field.clear(),
+DeleteableMatcherProperty::MatchCalendar => 
matcher.match_calendar.clear(),
+DeleteableMatcherProperty::Target => matcher.target.clear(),
 DeleteableMatcherProperty::Mode => matcher.mode = None,
 DeleteableMatcherProperty::InvertMatch => matcher.invert_match 
= None,
 DeleteableMatcherProperty::Comment => matcher.comment = None,
@@ -91,15 +88,15 @@ pub fn update_matcher(
 }
 
 if let Some(match_severity) = matcher_updater.match_severity {
-matcher.match_severity = Some(match_severity);
+matcher.match_severity = match_severity;
 }
 
 if let Some(match_field) = matcher_updater.match_field {
-matcher.match_field = Some(match_field);
+matcher.match_field = match_field;
 }
 
 if let Some(match_calendar) = matcher_updater.match_calendar {
-matcher.match_calendar = Some(match_calendar);
+matcher.match_calendar = match_calendar;
 }
 
 if let Some(mode) = matcher_updater.mode {
@@ -120,7 +117,7 @@ pub fn update_matcher(
 
 if let Some(target) = matcher_updater.target {
 super::ensure_endpoints_exist(config, target.as_slice())?;
-matcher.target = Some(target);
+matcher.target = target;
 }
 
 config
@@ -244,9 +241,9 @@ matcher: matcher2
 let matcher = get_matcher(, "matcher1")?;
 
 assert_eq!(matcher.invert_match, None);
-assert!(matcher.match_severity.is_none());
-assert!(matches!(matcher.match_field, None));
-assert_eq!(matcher.target, None);
+assert!(matcher.match_severity.is_empty());
+assert!(matcher.match_field.is_empty());
+assert!(matcher.target.is_empty());
 assert!(matcher.mode.is_none());
 assert_eq!(matcher.comment, None);
 
diff --git a/proxmox-notify/src/api/mod.rs b/proxmox-notify/src/api/mod.rs
index bb0371d..a2cf29e 100644
--- a/proxmox-notify/src/api/mod.rs
+++ b/proxmox-notify/src/api/mod.rs
@@ -102,10 +102,8 @@ fn get_referrers(config: &Config, entity: &str) -> Result<HashSet<String>, HttpError> {
 let mut referrers = HashSet::new();
 
 for matcher in matcher::get_matchers(config)? {
-if let Some(targets) = matcher.target {
-if targets.iter().any(|target| target == entity) {
-referrers.insert(matcher.name.clone());
-}
+if matcher.target.iter().any(|target| target == entity) {
+referrers.insert(matcher.name.clone());
 }
 }
 
@@ -149,11 +147,9 @@ fn get_referenced_entities(config: &Config, entity: &str) -> HashSet<String> {
 let mut new = HashSet::new();
 
 for entity in entities {
-if let Ok(group) = matcher::get_matcher(config, entity) {
-if let Some(targets) = group.target {
-for target in targets {
-new.insert(target.clone());
-}
+if let Ok(matcher) = matcher::get_matcher(config, entity) {
+for 

[pve-devel] [PATCH proxmox-perl-rs v2 14/20] notify: don't pass config structs by reference

2024-04-19 Thread Lukas Wagner
proxmox_notify's api functions have been changed so that they take
ownership of config structs.

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 common/src/notify.rs | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/common/src/notify.rs b/common/src/notify.rs
index d965417..00a6056 100644
--- a/common/src/notify.rs
+++ b/common/src/notify.rs
@@ -151,7 +151,7 @@ mod export {
 
 api::sendmail::add_endpoint(
  config,
- {
+SendmailConfig {
 name,
 mailto,
 mailto_user,
@@ -185,7 +185,7 @@ mod export {
 api::sendmail::update_endpoint(
  config,
 name,
- {
+SendmailConfigUpdater {
 mailto,
 mailto_user,
 from_address,
@@ -236,7 +236,7 @@ mod export {
 let mut config = this.config.lock().unwrap();
 api::gotify::add_endpoint(
  config,
- {
+GotifyConfig {
 name: name.clone(),
 server,
 comment,
@@ -244,7 +244,7 @@ mod export {
 filter: None,
 origin: None,
 },
- { name, token },
+GotifyPrivateConfig { name, token },
 )
 }
 
@@ -266,12 +266,12 @@ mod export {
 api::gotify::update_endpoint(
  config,
 name,
- {
+GotifyConfigUpdater {
 server,
 comment,
 disable,
 },
- { token },
+GotifyPrivateConfigUpdater { token },
 delete.as_deref(),
 digest.as_deref(),
 )
@@ -323,7 +323,7 @@ mod export {
 let mut config = this.config.lock().unwrap();
 api::smtp::add_endpoint(
  config,
- {
+SmtpConfig {
 name: name.clone(),
 server,
 port,
@@ -337,7 +337,7 @@ mod export {
 disable,
 origin: None,
 },
- { name, password },
+SmtpPrivateConfig { name, password },
 )
 }
 
@@ -366,7 +366,7 @@ mod export {
 api::smtp::update_endpoint(
  config,
 name,
- {
+SmtpConfigUpdater {
 server,
 port,
 mode,
@@ -378,7 +378,7 @@ mod export {
 comment,
 disable,
 },
- { password },
+SmtpPrivateConfigUpdater { password },
 delete.as_deref(),
 digest.as_deref(),
 )
@@ -427,7 +427,7 @@ mod export {
 let mut config = this.config.lock().unwrap();
 api::matcher::add_matcher(
  config,
- {
+MatcherConfig {
 name,
 match_severity,
 match_field,
@@ -464,7 +464,7 @@ mod export {
 api::matcher::update_matcher(
  config,
 name,
- {
+MatcherConfigUpdater {
 match_severity,
 match_field,
 match_calendar,
-- 
2.39.2





[pve-devel] [PATCH proxmox v2 09/20] notify: derive `api` for Deleteable*Property

2024-04-19 Thread Lukas Wagner
The API endpoints in Proxmox Backup Server require ApiType to be
implemented for any deserialized parameter.

Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/endpoints/gotify.rs   | 3 +++
 proxmox-notify/src/endpoints/sendmail.rs | 7 +++
 proxmox-notify/src/endpoints/smtp.rs | 9 +
 proxmox-notify/src/matcher.rs| 9 +
 4 files changed, 28 insertions(+)

diff --git a/proxmox-notify/src/endpoints/gotify.rs 
b/proxmox-notify/src/endpoints/gotify.rs
index afdfabc..ee8ca51 100644
--- a/proxmox-notify/src/endpoints/gotify.rs
+++ b/proxmox-notify/src/endpoints/gotify.rs
@@ -81,10 +81,13 @@ pub struct GotifyEndpoint {
 pub private_config: GotifyPrivateConfig,
 }
 
+#[api]
 #[derive(Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
 pub enum DeleteableGotifyProperty {
+/// Delete `comment`
 Comment,
+/// Delete `disable`
 Disable,
 }
 
diff --git a/proxmox-notify/src/endpoints/sendmail.rs 
b/proxmox-notify/src/endpoints/sendmail.rs
index fa04002..47901ef 100644
--- a/proxmox-notify/src/endpoints/sendmail.rs
+++ b/proxmox-notify/src/endpoints/sendmail.rs
@@ -73,14 +73,21 @@ pub struct SendmailConfig {
 pub origin: Option,
 }
 
+#[api]
 #[derive(Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
 pub enum DeleteableSendmailProperty {
+/// Delete `author`
 Author,
+/// Delete `comment`
 Comment,
+/// Delete `disable`
 Disable,
+/// Delete `from-address`
 FromAddress,
+/// Delete `mailto`
 Mailto,
+/// Delete `mailto-user`
 MailtoUser,
 }
 
diff --git a/proxmox-notify/src/endpoints/smtp.rs 
b/proxmox-notify/src/endpoints/smtp.rs
index ded5baf..f04583a 100644
--- a/proxmox-notify/src/endpoints/smtp.rs
+++ b/proxmox-notify/src/endpoints/smtp.rs
@@ -104,16 +104,25 @@ pub struct SmtpConfig {
 pub origin: Option,
 }
 
+#[api]
 #[derive(Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
 pub enum DeleteableSmtpProperty {
+/// Delete `author`
 Author,
+/// Delete `comment`
 Comment,
+/// Delete `disable`
 Disable,
+/// Delete `mailto`
 Mailto,
+/// Delete `mailto-user`
 MailtoUser,
+/// Delete `password`
 Password,
+/// Delete `port`
 Port,
+/// Delete `username`
 Username,
 }
 
diff --git a/proxmox-notify/src/matcher.rs b/proxmox-notify/src/matcher.rs
index b387fef..2d30378 100644
--- a/proxmox-notify/src/matcher.rs
+++ b/proxmox-notify/src/matcher.rs
@@ -412,16 +412,25 @@ impl FromStr for CalendarMatcher {
 }
 }
 
+#[api]
 #[derive(Serialize, Deserialize)]
 #[serde(rename_all = "kebab-case")]
 pub enum DeleteableMatcherProperty {
+/// Delete `comment`
 Comment,
+/// Delete `disable`
 Disable,
+/// Delete `invert-match`
 InvertMatch,
+/// Delete `match-calendar`
 MatchCalendar,
+/// Delete `match-field`
 MatchField,
+/// Delete `match-severity`
 MatchSeverity,
+/// Delete `mode`
 Mode,
+/// Delete `target`
 Target,
 }
 
-- 
2.39.2






[pve-devel] [PATCH proxmox-perl-rs v2 13/20] notify: use file based notification templates

2024-04-19 Thread Lukas Wagner
Instead of passing literal template strings to the notification
system, we now only pass an identifier. This identifier is used to
load the template files from a product-specific directory.

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 common/src/notify.rs | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/common/src/notify.rs b/common/src/notify.rs
index 8f9f38f..d965417 100644
--- a/common/src/notify.rs
+++ b/common/src/notify.rs
@@ -94,16 +94,14 @@ mod export {
 fn send(
 #[try_from_ref] this: ,
 severity: Severity,
-title: String,
-body: String,
+template_name: String,
 template_data: Option,
 fields: Option>,
 ) -> Result<(), HttpError> {
 let config = this.config.lock().unwrap();
-let notification = Notification::new_templated(
+let notification = Notification::from_template(
 severity,
-title,
-body,
+template_name,
 template_data.unwrap_or_default(),
 fields.unwrap_or_default(),
 );
-- 
2.39.2





[pve-devel] [PATCH proxmox-perl-rs v2 15/20] notify: adapt to Option<Vec<T>> to Vec<T> changes in proxmox_notify

2024-04-19 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 common/src/notify.rs | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/common/src/notify.rs b/common/src/notify.rs
index 00a6056..e1b006b 100644
--- a/common/src/notify.rs
+++ b/common/src/notify.rs
@@ -153,8 +153,8 @@ mod export {
  config,
 SendmailConfig {
 name,
-mailto,
-mailto_user,
+mailto: mailto.unwrap_or_default(),
+mailto_user: mailto_user.unwrap_or_default(),
 from_address,
 author,
 comment,
@@ -329,8 +329,8 @@ mod export {
 port,
 mode,
 username,
-mailto,
-mailto_user,
+mailto: mailto.unwrap_or_default(),
+mailto_user: mailto_user.unwrap_or_default(),
 from_address,
 author,
 comment,
@@ -429,10 +429,10 @@ mod export {
  config,
 MatcherConfig {
 name,
-match_severity,
-match_field,
-match_calendar,
-target,
+match_severity: match_severity.unwrap_or_default(),
+match_field: match_field.unwrap_or_default(),
+match_calendar: match_calendar.unwrap_or_default(),
+target: target.unwrap_or_default(),
 mode,
 invert_match,
 comment,
-- 
2.39.2





[pve-devel] [PATCH proxmox v2 10/20] notify: derive Deserialize/Serialize for Notification struct

2024-04-19 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/lib.rs | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/proxmox-notify/src/lib.rs b/proxmox-notify/src/lib.rs
index 91c0b61..292396b 100644
--- a/proxmox-notify/src/lib.rs
+++ b/proxmox-notify/src/lib.rs
@@ -159,9 +159,11 @@ pub trait Endpoint {
 fn disabled() -> bool;
 }
 
-#[derive(Debug, Clone)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
 pub enum Content {
 /// Title and body will be rendered as a template
+#[serde(rename_all = "kebab-case")]
 Template {
 /// Name of the used template
 template_name: String,
@@ -169,6 +171,7 @@ pub enum Content {
 data: Value,
 },
 #[cfg(feature = "mail-forwarder")]
+#[serde(rename_all = "kebab-case")]
 ForwardedMail {
 /// Raw mail contents
 raw: Vec<u8>,
@@ -182,7 +185,8 @@ pub enum Content {
 },
 }
 
-#[derive(Debug, Clone)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
 pub struct Metadata {
 /// Notification severity
 severity: Severity,
@@ -192,7 +196,8 @@ pub struct Metadata {
 additional_fields: HashMap<String, String>,
 }
 
-#[derive(Debug, Clone)]
+#[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
 /// Notification which can be sent
 pub struct Notification {
 /// Notification content
-- 
2.39.2






[pve-devel] [PATCH proxmox v2 11/20] notify: pbs context: include nodename in default sendmail author

2024-04-19 Thread Lukas Wagner
The old notification stack in proxmox-backup includes the nodename, so
we include it here as well.

Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/context/pbs.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proxmox-notify/src/context/pbs.rs b/proxmox-notify/src/context/pbs.rs
index 70e993f..299f685 100644
--- a/proxmox-notify/src/context/pbs.rs
+++ b/proxmox-notify/src/context/pbs.rs
@@ -82,7 +82,7 @@ impl Context for PBSContext {
 }
 
 fn default_sendmail_author() -> String {
-"Proxmox Backup Server".into()
+format!("Proxmox Backup Server - {}", proxmox_sys::nodename())
 }
 
 fn default_sendmail_from() -> String {
-- 
2.39.2
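
A minimal sketch of the changed default author string; here the node name is an explicit parameter for illustration, while the real code obtains it from `proxmox_sys::nodename()`:

```rust
// Sketch: build the default sendmail author including the node name.
// The actual implementation calls proxmox_sys::nodename() instead of
// taking a parameter.
fn default_sendmail_author(nodename: &str) -> String {
    format!("Proxmox Backup Server - {}", nodename)
}

fn main() {
    assert_eq!(default_sendmail_author("pbs1"), "Proxmox Backup Server - pbs1");
}
```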






[pve-devel] [PATCH proxmox v2 08/20] notify: api: add get_targets

2024-04-19 Thread Lukas Wagner
This method allows us to get a list of all notification targets.

Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/api/mod.rs | 77 +++
 1 file changed, 77 insertions(+)

diff --git a/proxmox-notify/src/api/mod.rs b/proxmox-notify/src/api/mod.rs
index a2cf29e..a7f6261 100644
--- a/proxmox-notify/src/api/mod.rs
+++ b/proxmox-notify/src/api/mod.rs
@@ -3,6 +3,7 @@ use std::collections::HashSet;
 use serde::{Deserialize, Serialize};
 
 use proxmox_http_error::HttpError;
+use proxmox_schema::api;
 
 use crate::{Config, Origin};
 
@@ -39,6 +40,82 @@ macro_rules! http_bail {
 pub use http_bail;
 pub use http_err;
 
+#[api]
+#[derive(Clone, Debug, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd)]
+#[serde(rename_all = "kebab-case")]
+/// Type of the endpoint.
+pub enum EndpointType {
+/// Sendmail endpoint
+#[cfg(feature = "sendmail")]
+Sendmail,
+/// SMTP endpoint
+#[cfg(feature = "smtp")]
+Smtp,
+/// Gotify endpoint
+#[cfg(feature = "gotify")]
+Gotify,
+}
+
+#[api]
+#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd)]
+#[serde(rename_all = "kebab-case")]
+/// Target information
+pub struct Target {
+/// Name of the endpoint
+name: String,
+/// Origin of the endpoint
+origin: Origin,
+/// Type of the endpoint
+#[serde(rename = "type")]
+endpoint_type: EndpointType,
+/// Target is disabled
+#[serde(skip_serializing_if = "Option::is_none")]
+disable: Option<bool>,
+/// Comment
+#[serde(skip_serializing_if = "Option::is_none")]
+comment: Option<String>,
+}
+
+/// Get a list of all notification targets.
+pub fn get_targets(config: &Config) -> Result<Vec<Target>, HttpError> {
+let mut targets = Vec::new();
+
+#[cfg(feature = "gotify")]
+for endpoint in gotify::get_endpoints(config)? {
+targets.push(Target {
+name: endpoint.name,
+origin: endpoint.origin.unwrap_or(Origin::UserCreated),
+endpoint_type: EndpointType::Gotify,
+disable: endpoint.disable,
+comment: endpoint.comment,
+})
+}
+
+#[cfg(feature = "sendmail")]
+for endpoint in sendmail::get_endpoints(config)? {
+targets.push(Target {
+name: endpoint.name,
+origin: endpoint.origin.unwrap_or(Origin::UserCreated),
+endpoint_type: EndpointType::Sendmail,
+disable: endpoint.disable,
+comment: endpoint.comment,
+})
+}
+
+#[cfg(feature = "smtp")]
+for endpoint in smtp::get_endpoints(config)? {
+targets.push(Target {
+name: endpoint.name,
+origin: endpoint.origin.unwrap_or(Origin::UserCreated),
+endpoint_type: EndpointType::Smtp,
+disable: endpoint.disable,
+comment: endpoint.comment,
+})
+}
+
+Ok(targets)
+}
+
 fn verify_digest(config: &Config, digest: Option<&[u8]>) -> Result<(), HttpError> {
 if let Some(digest) = digest {
 if config.digest != *digest {
-- 
2.39.2
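
The aggregation pattern of `get_targets` can be sketched in isolation; the types below are simplified stand-ins for the crate's `Config`, `Target`, and per-endpoint `get_endpoints` functions, not the actual API:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
#[allow(dead_code)]
enum EndpointType {
    Sendmail,
    Smtp,
    Gotify,
}

#[allow(dead_code)]
struct Target {
    name: String,
    endpoint_type: EndpointType,
}

// Simplified stand-ins for the per-module endpoint listing functions.
fn gotify_endpoints() -> Vec<String> {
    vec!["gotify-prod".to_string()]
}
fn sendmail_endpoints() -> Vec<String> {
    vec!["mail-to-root".to_string()]
}

// Collect every configured endpoint into one flat target list,
// tagging each entry with its endpoint type.
fn get_targets() -> Vec<Target> {
    let mut targets = Vec::new();
    for name in gotify_endpoints() {
        targets.push(Target { name, endpoint_type: EndpointType::Gotify });
    }
    for name in sendmail_endpoints() {
        targets.push(Target { name, endpoint_type: EndpointType::Sendmail });
    }
    targets
}

fn main() {
    let targets = get_targets();
    assert_eq!(targets.len(), 2);
    assert!(targets.iter().any(|t| t.endpoint_type == EndpointType::Gotify));
}
```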






[pve-devel] [PATCH proxmox v2 07/20] notify: give each notification a unique ID

2024-04-19 Thread Lukas Wagner
We need this for queuing notifications on PBS from the unprivileged
proxy process.

Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/Cargo.toml |  1 +
 proxmox-notify/src/lib.rs | 11 +++
 2 files changed, 12 insertions(+)

diff --git a/proxmox-notify/Cargo.toml b/proxmox-notify/Cargo.toml
index 185b50a..797b1ac 100644
--- a/proxmox-notify/Cargo.toml
+++ b/proxmox-notify/Cargo.toml
@@ -28,6 +28,7 @@ proxmox-section-config = { workspace = true }
 proxmox-serde.workspace = true
 proxmox-sys = { workspace = true, optional = true }
 proxmox-time.workspace = true
+proxmox-uuid = { workspace = true, features = ["serde"] }
 
 [features]
 default = ["sendmail", "gotify", "smtp"]
diff --git a/proxmox-notify/src/lib.rs b/proxmox-notify/src/lib.rs
index 35dcb17..91c0b61 100644
--- a/proxmox-notify/src/lib.rs
+++ b/proxmox-notify/src/lib.rs
@@ -10,6 +10,7 @@ use serde_json::Value;
 
 use proxmox_schema::api;
 use proxmox_section_config::SectionConfigData;
+use proxmox_uuid::Uuid;
 
 pub mod matcher;
 use crate::config::CONFIG;
@@ -198,6 +199,8 @@ pub struct Notification {
 content: Content,
 /// Metadata
 metadata: Metadata,
+/// Unique ID
+id: Uuid,
 }
 
 impl Notification {
@@ -217,6 +220,7 @@ impl Notification {
 template_name: template_name.as_ref().to_string(),
 data: template_data,
 },
+id: Uuid::generate(),
 }
 }
 #[cfg(feature = "mail-forwarder")]
@@ -246,8 +250,14 @@ impl Notification {
 additional_fields,
 timestamp: proxmox_time::epoch_i64(),
 },
+id: Uuid::generate(),
 })
 }
+
+/// Return the unique ID of this notification.
+pub fn id(&self) -> &Uuid {
+    &self.id
+}
 }
 
 /// Notification configuration
@@ -548,6 +558,7 @@ impl Bus {
 template_name: "test".to_string(),
 data: json!({ "target": target }),
 },
+id: Uuid::generate(),
 };
 
 if let Some(endpoint) = self.endpoints.get(target) {
-- 
2.39.2






[pve-devel] [PATCH manager v2 20/20] notifications: use named templates instead of in-code templates

2024-04-19 Thread Lukas Wagner
This commit adapts notification sending for
- package update
- replication
- backups

to use named templates (installed in /usr/share/pve-manager/templates)
instead of passing template strings defined in code to the
notification stack.

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 Makefile  |  2 +-
 PVE/API2/APT.pm   |  9 +--
 PVE/API2/Replication.pm   | 20 +---
 PVE/VZDump.pm | 20 ++--
 templates/Makefile| 24 +++
 .../default/package-updates-body.html.hbs |  6 +
 .../default/package-updates-body.txt.hbs  |  3 +++
 .../default/package-updates-subject.txt.hbs   |  1 +
 templates/default/replication-body.html.hbs   | 18 ++
 templates/default/replication-body.txt.hbs| 12 ++
 templates/default/replication-subject.txt.hbs |  1 +
 templates/default/test-body.html.hbs  |  1 +
 templates/default/test-body.txt.hbs   |  1 +
 templates/default/test-subject.txt.hbs|  1 +
 templates/default/vzdump-body.html.hbs| 11 +
 templates/default/vzdump-body.txt.hbs | 10 
 templates/default/vzdump-subject.txt.hbs  |  1 +
 17 files changed, 95 insertions(+), 46 deletions(-)
 create mode 100644 templates/Makefile
 create mode 100644 templates/default/package-updates-body.html.hbs
 create mode 100644 templates/default/package-updates-body.txt.hbs
 create mode 100644 templates/default/package-updates-subject.txt.hbs
 create mode 100644 templates/default/replication-body.html.hbs
 create mode 100644 templates/default/replication-body.txt.hbs
 create mode 100644 templates/default/replication-subject.txt.hbs
 create mode 100644 templates/default/test-body.html.hbs
 create mode 100644 templates/default/test-body.txt.hbs
 create mode 100644 templates/default/test-subject.txt.hbs
 create mode 100644 templates/default/vzdump-body.html.hbs
 create mode 100644 templates/default/vzdump-body.txt.hbs
 create mode 100644 templates/default/vzdump-subject.txt.hbs

diff --git a/Makefile b/Makefile
index 28295395..337682b3 100644
--- a/Makefile
+++ b/Makefile
@@ -10,7 +10,7 @@ DSC=$(PACKAGE)_$(DEB_VERSION).dsc
 DEB=$(PACKAGE)_$(DEB_VERSION)_$(DEB_HOST_ARCH).deb
 
 DESTDIR=
-SUBDIRS = aplinfo PVE bin www services configs network-hooks test
+SUBDIRS = aplinfo PVE bin www services configs network-hooks test templates
 
 all: $(SUBDIRS)
set -e && for i in $(SUBDIRS); do $(MAKE) -C $$i; done
diff --git a/PVE/API2/APT.pm b/PVE/API2/APT.pm
index 19f0baca..d0e7c544 100644
--- a/PVE/API2/APT.pm
+++ b/PVE/API2/APT.pm
@@ -238,12 +238,6 @@ __PACKAGE__->register_method({
return $pkglist;
 }});
 
-my $updates_available_subject_template = "New software packages available ({{hostname}})";
-my $updates_available_body_template =  'update_database',
 path => 'update',
@@ -358,8 +352,7 @@ __PACKAGE__->register_method({
};
 
PVE::Notify::info(
-   $updates_available_subject_template,
-   $updates_available_body_template,
+   "package-updates",
$template_data,
$metadata_fields,
);
diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm
index 0dc944c9..d84ac1ab 100644
--- a/PVE/API2/Replication.pm
+++ b/PVE/API2/Replication.pm
@@ -92,23 +92,6 @@ my sub _should_mail_at_failcount {
 return $i * 48 == $fail_count;
 };
 
-my $replication_error_subject_template = "Replication Job: '{{job-id}}' failed";
-my $replication_error_body_template = < 1024*1024;
 
 sub send_notification {
@@ -565,8 +551,7 @@ sub send_notification {
 
PVE::Notify::notify(
$severity,
-   $subject_template,
-   $body_template,
+   "vzdump",
$notification_props,
$fields,
$notification_config
@@ -577,8 +562,7 @@ sub send_notification {
# no email addresses were configured.
PVE::Notify::notify(
$severity,
-   $subject_template,
-   $body_template,
+   "vzdump",
$notification_props,
$fields,
);
diff --git a/templates/Makefile b/templates/Makefile
new file mode 100644
index ..236988c5
--- /dev/null
+++ b/templates/Makefile
@@ -0,0 +1,24 @@
+NOTIFICATION_TEMPLATES=\
+   default/test-subject.txt.hbs\
+   default/test-body.txt.hbs   \
+   default/test-body.html.hbs  \
+   default/vzdump-subject.txt.hbs  \
+   default/vzdump-body.txt.hbs \
+   default/vzdump-body.html.hbs\
+   

[pve-devel] [PATCH pve-ha-manager v2 17/20] env: notify: use named templates instead of passing template strings

2024-04-19 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 debian/pve-ha-manager.install |  3 +++
 src/Makefile  |  1 +
 src/PVE/HA/Env/PVE2.pm|  4 ++--
 src/PVE/HA/NodeStatus.pm  | 20 +--
 src/PVE/HA/Sim/Env.pm |  3 ++-
 src/templates/Makefile| 10 ++
 src/templates/default/fencing-body.html.hbs   | 14 +
 src/templates/default/fencing-body.txt.hbs| 11 ++
 src/templates/default/fencing-subject.txt.hbs |  1 +
 9 files changed, 45 insertions(+), 22 deletions(-)
 create mode 100644 src/templates/Makefile
 create mode 100644 src/templates/default/fencing-body.html.hbs
 create mode 100644 src/templates/default/fencing-body.txt.hbs
 create mode 100644 src/templates/default/fencing-subject.txt.hbs

diff --git a/debian/pve-ha-manager.install b/debian/pve-ha-manager.install
index a7598a9..0ffbd8d 100644
--- a/debian/pve-ha-manager.install
+++ b/debian/pve-ha-manager.install
@@ -38,3 +38,6 @@
 /usr/share/perl5/PVE/HA/Usage/Static.pm
 /usr/share/perl5/PVE/Service/pve_ha_crm.pm
 /usr/share/perl5/PVE/Service/pve_ha_lrm.pm
+/usr/share/pve-manager/templates/default/fencing-body.html.hbs
+/usr/share/pve-manager/templates/default/fencing-body.txt.hbs
+/usr/share/pve-manager/templates/default/fencing-subject.txt.hbs
diff --git a/src/Makefile b/src/Makefile
index 87bb0de..56bd360 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -73,6 +73,7 @@ install: watchdog-mux pve-ha-crm pve-ha-lrm ha-manager.1 pve-ha-crm.8 pve-ha-lrm
install -d $(DESTDIR)/$(MAN1DIR)
install -m 0644 ha-manager.1 $(DESTDIR)/$(MAN1DIR)
gzip -9 $(DESTDIR)/$(MAN1DIR)/ha-manager.1
+   $(MAKE) -C templates $@
 
 .PHONY: test
 test: 
diff --git a/src/PVE/HA/Env/PVE2.pm b/src/PVE/HA/Env/PVE2.pm
index fcb60a9..cb73bcf 100644
--- a/src/PVE/HA/Env/PVE2.pm
+++ b/src/PVE/HA/Env/PVE2.pm
@@ -221,10 +221,10 @@ sub log {
 }
 
 sub send_notification {
-my ($self, $subject, $text, $template_data, $metadata_fields) = @_;
+my ($self, $template_name, $template_data, $metadata_fields) = @_;
 
 eval {
-   PVE::Notify::error($subject, $text, $template_data, $metadata_fields);
+   PVE::Notify::error($template_name, $template_data, $metadata_fields);
 };
 
 $self->log("warning", "could not notify: $@") if $@;
diff --git a/src/PVE/HA/NodeStatus.pm b/src/PVE/HA/NodeStatus.pm
index e053c55..9e6d898 100644
--- a/src/PVE/HA/NodeStatus.pm
+++ b/src/PVE/HA/NodeStatus.pm
@@ -188,23 +188,6 @@ sub update {
}
 }
 
-my $body_template = {"subject-prefix"}/r;
 $subject = $subject =~ s/\{\{subject}}/$properties->{"subject"}/r;
 
diff --git a/src/templates/Makefile b/src/templates/Makefile
new file mode 100644
index 000..396759f
--- /dev/null
+++ b/src/templates/Makefile
@@ -0,0 +1,10 @@
+NOTIFICATION_TEMPLATES=\
+   default/fencing-subject.txt.hbs \
+   default/fencing-body.txt.hbs\
+   default/fencing-body.html.hbs   \
+
+.PHONY: install
+install:
+   install -dm 0755 $(DESTDIR)/usr/share/pve-manager/templates/default
+   $(foreach i,$(NOTIFICATION_TEMPLATES), \
	install -m644 $(i) $(DESTDIR)/usr/share/pve-manager/templates/$(i) ;)
diff --git a/src/templates/default/fencing-body.html.hbs b/src/templates/default/fencing-body.html.hbs
new file mode 100644
index 000..1420348
--- /dev/null
+++ b/src/templates/default/fencing-body.html.hbs
@@ -0,0 +1,14 @@
+
+
+The node '{{node}}' failed and needs manual intervention.
+
+The PVE HA manager tries to fence it and recover the configured HA resources to
+a healthy node if possible.
+
+Current fence status: {{subject-prefix}}
+{{subject}}
+
+Overall Cluster status:
+{{object status-data}}
+
+
diff --git a/src/templates/default/fencing-body.txt.hbs b/src/templates/default/fencing-body.txt.hbs
new file mode 100644
index 000..e46a1fd
--- /dev/null
+++ b/src/templates/default/fencing-body.txt.hbs
@@ -0,0 +1,11 @@
+The node '{{node}}' failed and needs 

[pve-devel] [PATCH manager v2 19/20] tests: remove vzdump_notification test

2024-04-19 Thread Lukas Wagner
With the upcoming changes in how we send notifications, this one
really becomes pretty annoying to keep working. The location where
templates are looked up is defined in the proxmox_notify crate, so
there is no easy way to mock this for testing.
The test itself did not seem particularly valuable, mainly checking that
the backup logs are shortened if they were too long - so it is just
removed.

Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 test/Makefile|   6 +-
 test/vzdump_notification_test.pl | 101 ---
 2 files changed, 1 insertion(+), 106 deletions(-)
 delete mode 100755 test/vzdump_notification_test.pl

diff --git a/test/Makefile b/test/Makefile
index 62d75050..743804c8 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -5,7 +5,7 @@ all:
 export PERLLIB=..
 
 .PHONY: check
-check: test-replication test-balloon test-vzdump-notification test-vzdump test-osd
+check: test-replication test-balloon test-vzdump test-osd
 
 .PHONY: test-balloon
 test-balloon:
@@ -17,10 +17,6 @@ test-replication: replication1.t replication2.t 
replication3.t replication4.t re
 replication%.t: replication_test%.pl
./$<
 
-.PHONY: test-vzdump-notification
-test-vzdump-notification:
-   ./vzdump_notification_test.pl
-
 .PHONY: test-vzdump
 test-vzdump: test-vzdump-guest-included test-vzdump-new
 
diff --git a/test/vzdump_notification_test.pl b/test/vzdump_notification_test.pl
deleted file mode 100755
index 631606bb..
--- a/test/vzdump_notification_test.pl
+++ /dev/null
@@ -1,101 +0,0 @@
-#!/usr/bin/perl
-
-use strict;
-use warnings;
-
-use lib '..';
-
-use Test::More tests => 3;
-use Test::MockModule;
-
-use PVE::VZDump;
-
-my $STATUS = qr/.*status.*/;
-my $NO_LOGFILE = qr/.*Could not open log file.*/;
-my $LOG_TOO_LONG = qr/.*Log output was too long.*/;
-my $TEST_FILE_PATH   = '/tmp/mail_test';
-my $TEST_FILE_WRONG_PATH = '/tmp/mail_test_wrong';
-
-sub prepare_mail_with_status {
-open(TEST_FILE, '>', $TEST_FILE_PATH); # Removes previous content
-print TEST_FILE "start of log file\n";
-print TEST_FILE "status: 0\% this should not be in the mail\n";
-print TEST_FILE "status: 55\% this should not be in the mail\n";
-print TEST_FILE "status: 100\% this should not be in the mail\n";
-print TEST_FILE "end of log file\n";
-close(TEST_FILE);
-}
-
-sub prepare_long_mail {
-open(TEST_FILE, '>', $TEST_FILE_PATH); # Removes previous content
-# 0.5 MB * 2 parts + the overview tables gives more than 1 MB mail
-print TEST_FILE "a" x (1024*1024);
-close(TEST_FILE);
-}
-
-my $result_text;
-my $result_properties;
-
-my $mock_notification_module = Test::MockModule->new('PVE::Notify');
-my $mocked_notify = sub {
-my ($severity, $title, $text, $properties, $metadata) = @_;
-
-$result_text = $text;
-$result_properties = $properties;
-};
-my $mocked_notify_short = sub {
-my (@params) = @_;
-return $mocked_notify->('', @params);
-};
-
-$mock_notification_module->mock(
-'notify' => $mocked_notify,
-'info' => $mocked_notify_short,
-'notice' => $mocked_notify_short,
-'warning' => $mocked_notify_short,
-'error' => $mocked_notify_short,
-);
-$mock_notification_module->mock('cfs_read_file', sub {
-my $path = shift;
-
-if ($path eq 'datacenter.cfg') {
-return {};
-} elsif ($path eq 'notifications.cfg' || $path eq 'priv/notifications.cfg') {
-return '';
-} else {
-   die "unexpected cfs_read_file\n";
-}
-});
-
-my $MAILTO = ['test_addr...@proxmox.com'];
-my $SELF = {
-opts => { mailto => $MAILTO },
-cmdline => 'test_command_on_cli',
-};
-
-my $task = { state => 'ok', vmid => '100', };
-my $tasklist;
-sub prepare_test {
-$result_text = undef;
-$task->{tmplog} = shift;
-$tasklist = [ $task ];
-}
-
-{
-prepare_test($TEST_FILE_WRONG_PATH);
-PVE::VZDump::send_notification($SELF, $tasklist, 0, undef, undef, undef);
-like($result_properties->{logs}, $NO_LOGFILE, "Missing logfile is detected");
-}
-{
-prepare_test($TEST_FILE_PATH);
-prepare_mail_with_status();
-PVE::VZDump::send_notification($SELF, $tasklist, 0, undef, undef, undef);
-unlike($result_properties->{"status-text"}, $STATUS, "Status are not in text part of mails");
-}
-{
-prepare_test($TEST_FILE_PATH);
-prepare_long_mail();
-PVE::VZDump::send_notification($SELF, $tasklist, 0, undef, undef, undef);
-like($result_properties->{logs}, $LOG_TOO_LONG, "Text part of mails gets shortened");
-}
-unlink $TEST_FILE_PATH;
-- 
2.39.2






[pve-devel] [PATCH cluster v2 16/20] notify: use named template instead of passing template strings

2024-04-19 Thread Lukas Wagner
The notification system will now load template files from a defined
location. The template to use is now passed to proxmox_notify, instead
of separate template strings for subject/body.

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 src/PVE/Notify.pm | 29 -
 1 file changed, 12 insertions(+), 17 deletions(-)

diff --git a/src/PVE/Notify.pm b/src/PVE/Notify.pm
index 872eb25..c514111 100644
--- a/src/PVE/Notify.pm
+++ b/src/PVE/Notify.pm
@@ -58,17 +58,16 @@ sub write_config {
 }
 
 my $send_notification = sub {
-my ($severity, $title, $message, $template_data, $fields, $config) = @_;
+my ($severity, $template_name, $template_data, $fields, $config) = @_;
 $config = read_config() if !defined($config);
-$config->send($severity, $title, $message, $template_data, $fields);
+$config->send($severity, $template_name, $template_data, $fields);
 };
 
 sub notify {
-my ($severity, $title, $message, $template_data, $fields, $config) = @_;
+my ($severity, $template_name, $template_data, $fields, $config) = @_;
 $send_notification->(
 $severity,
-$title,
-$message,
+$template_name,
 $template_data,
 $fields,
 $config
@@ -76,11 +75,10 @@ sub notify {
 }
 
 sub info {
-my ($title, $message, $template_data, $fields, $config) = @_;
+my ($template_name, $template_data, $fields, $config) = @_;
 $send_notification->(
 'info',
-$title,
-$message,
+$template_name,
 $template_data,
 $fields,
 $config
@@ -88,11 +86,10 @@ sub info {
 }
 
 sub notice {
-my ($title, $message, $template_data, $fields, $config) = @_;
+my ($template_name, $template_data, $fields, $config) = @_;
 $send_notification->(
 'notice',
-$title,
-$message,
+$template_name,
 $template_data,
 $fields,
 $config
@@ -100,11 +97,10 @@ sub notice {
 }
 
 sub warning {
-my ($title, $message, $template_data, $fields, $config) = @_;
+my ($template_name, $template_data, $fields, $config) = @_;
 $send_notification->(
 'warning',
-$title,
-$message,
+$template_name,
 $template_data,
 $fields,
 $config
@@ -112,11 +108,10 @@ sub warning {
 }
 
 sub error {
-my ($title, $message, $template_data, $fields, $config) = @_;
+my ($template_name, $template_data, $fields, $config) = @_;
 $send_notification->(
 'error',
-$title,
-$message,
+$template_name,
 $template_data,
 $fields,
 $config
-- 
2.39.2





[pve-devel] [PATCH proxmox v2 04/20] notify: don't make tests require pve-context

2024-04-19 Thread Lukas Wagner
Tests now have their own context, so requiring pve-context is not
necessary any more.

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/api/gotify.rs   |  2 +-
 proxmox-notify/src/api/matcher.rs  |  2 +-
 proxmox-notify/src/api/sendmail.rs |  2 +-
 proxmox-notify/src/api/smtp.rs | 24 
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/proxmox-notify/src/api/gotify.rs b/proxmox-notify/src/api/gotify.rs
index 15d94cb..92151f5 100644
--- a/proxmox-notify/src/api/gotify.rs
+++ b/proxmox-notify/src/api/gotify.rs
@@ -165,7 +165,7 @@ fn remove_private_config_entry(config: &mut Config, name: &str) -> Result<(), HttpError>
 Ok(())
 }
 
-#[cfg(all(feature = "pve-context", test))]
+#[cfg(test)]
 mod tests {
 use super::*;
 use crate::api::test_helpers::empty_config;
diff --git a/proxmox-notify/src/api/matcher.rs b/proxmox-notify/src/api/matcher.rs
index 63ec73d..fa11633 100644
--- a/proxmox-notify/src/api/matcher.rs
+++ b/proxmox-notify/src/api/matcher.rs
@@ -148,7 +148,7 @@ pub fn delete_matcher(config: &mut Config, name: &str) -> Result<(), HttpError>
 Ok(())
 }
 
-#[cfg(all(test, feature = "sendmail", feature = "pve-context"))]
+#[cfg(all(test, feature = "sendmail"))]
 mod tests {
 use super::*;
 use crate::matcher::MatchModeOperator;
diff --git a/proxmox-notify/src/api/sendmail.rs b/proxmox-notify/src/api/sendmail.rs
index c20a3e5..47588af 100644
--- a/proxmox-notify/src/api/sendmail.rs
+++ b/proxmox-notify/src/api/sendmail.rs
@@ -151,7 +151,7 @@ pub fn delete_endpoint(config: &mut Config, name: &str) -> Result<(), HttpError>
 Ok(())
 }
 
-#[cfg(all(feature = "pve-context", test))]
+#[cfg(test)]
 pub mod tests {
 use super::*;
 use crate::api::test_helpers::*;
diff --git a/proxmox-notify/src/api/smtp.rs b/proxmox-notify/src/api/smtp.rs
index 7a58677..1b4700e 100644
--- a/proxmox-notify/src/api/smtp.rs
+++ b/proxmox-notify/src/api/smtp.rs
@@ -200,7 +200,7 @@ pub fn delete_endpoint(config: &mut Config, name: &str) -> Result<(), HttpError>
 Ok(())
 }
 
-#[cfg(all(feature = "pve-context", test))]
+#[cfg(test)]
 pub mod tests {
 use super::*;
 use crate::api::test_helpers::*;
@@ -348,15 +348,15 @@ pub mod tests {
 Ok(())
 }
 
-// #[test]
-// fn test_delete() -> Result<(), HttpError> {
-// let mut config = empty_config();
-// add_smtp_endpoint_for_test(&mut config, "smtp-endpoint")?;
-//
-// delete_endpoint(&mut config, "smtp-endpoint")?;
-// assert!(delete_endpoint(&mut config, "smtp-endpoint").is_err());
-// assert_eq!(get_endpoints(&config)?.len(), 0);
-//
-// Ok(())
-// }
+#[test]
+fn test_delete() -> Result<(), HttpError> {
+let mut config = empty_config();
+add_smtp_endpoint_for_test(&mut config, "smtp-endpoint")?;
+
+delete_endpoint(&mut config, "smtp-endpoint")?;
+assert!(delete_endpoint(&mut config, "smtp-endpoint").is_err());
+assert_eq!(get_endpoints(&config)?.len(), 0);
+
+Ok(())
+}
 }
-- 
2.39.2





[pve-devel] [PATCH manager v2 18/20] gitignore: ignore any test artifacts

2024-04-19 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 .gitignore | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/.gitignore b/.gitignore
index e8d1eb27..481ae1e0 100644
--- a/.gitignore
+++ b/.gitignore
@@ -9,3 +9,5 @@ dest/
 /www/mobile/pvemanager-mobile.js
 /www/touch/touch-[0-9]*/
 /pve-manager-[0-9]*/
+/test/.mocked_*
+/test/*.tmp
-- 
2.39.2






[pve-devel] [PATCH proxmox v2 12/20] notify: renderer: add relative-percentage helper from PBS

2024-04-19 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/src/renderer/mod.rs | 29 +
 1 file changed, 29 insertions(+)

diff --git a/proxmox-notify/src/renderer/mod.rs b/proxmox-notify/src/renderer/mod.rs
index a51ece6..ddb241d 100644
--- a/proxmox-notify/src/renderer/mod.rs
+++ b/proxmox-notify/src/renderer/mod.rs
@@ -73,6 +73,30 @@ fn value_to_timestamp(val: ) -> Option {
 proxmox_time::strftime_local("%F %H:%M:%S", timestamp).ok()
 }
 
+fn handlebars_relative_percentage_helper(
+h: &Helper,
+_: &Handlebars,
+_: &Context,
+_rc: &mut RenderContext,
+out: &mut dyn Output,
+) -> HelperResult {
+let param0 = h
+.param(0)
+.and_then(|v| v.value().as_f64())
+.ok_or_else(|| HandlebarsRenderError::new("relative-percentage: param0 not found"))?;
+let param1 = h
+.param(1)
+.and_then(|v| v.value().as_f64())
+.ok_or_else(|| HandlebarsRenderError::new("relative-percentage: param1 not found"))?;
+
+if param1 == 0.0 {
+out.write("-")?;
+} else {
+out.write(&format!("{:.2}%", (param0 * 100.0) / param1))?;
+}
+Ok(())
+}
+
 /// Available render functions for `serde_json::Values``
 ///
 /// May be used as a handlebars helper, e.g.
@@ -237,6 +261,11 @@ fn render_template_impl(
 
 ValueRenderFunction::register_helpers(&mut handlebars);
 
+handlebars.register_helper(
+"relative-percentage",
+Box::new(handlebars_relative_percentage_helper),
+);
+
 let rendered_template = handlebars
 .render_template(template, data)
 .map_err(|err| Error::RenderError(err.into()))?;
-- 
2.39.2






[pve-devel] [PATCH proxmox v2 06/20] notify: cargo.toml: add spaces before curly braces

2024-04-19 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/Cargo.toml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/proxmox-notify/Cargo.toml b/proxmox-notify/Cargo.toml
index 3e8d253..185b50a 100644
--- a/proxmox-notify/Cargo.toml
+++ b/proxmox-notify/Cargo.toml
@@ -17,13 +17,13 @@ log.workspace = true
 mail-parser = { workspace = true, optional = true }
 openssl.workspace = true
 regex.workspace = true
-serde = { workspace = true, features = ["derive"]}
+serde = { workspace = true, features = ["derive"] }
 serde_json.workspace = true
 
proxmox-http = { workspace = true, features = ["client-sync"], optional = true }
 proxmox-http-error.workspace = true
 proxmox-human-byte.workspace = true
-proxmox-schema = { workspace = true, features = ["api-macro", "api-types"]}
+proxmox-schema = { workspace = true, features = ["api-macro", "api-types"] }
 proxmox-section-config = { workspace = true }
 proxmox-serde.workspace = true
 proxmox-sys = { workspace = true, optional = true }
-- 
2.39.2






[pve-devel] [PATCH proxmox v2 05/20] notify: make the `mail-forwarder` feature depend on proxmox-sys

2024-04-19 Thread Lukas Wagner
It uses proxmox_sys::nodename - the dep is needed, otherwise the code
does not compile in some feature flag permutations.

Signed-off-by: Lukas Wagner 
Tested-by: Folke Gleumes 
Reviewed-by: Fiona Ebner 
---
 proxmox-notify/Cargo.toml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proxmox-notify/Cargo.toml b/proxmox-notify/Cargo.toml
index 52a466e..3e8d253 100644
--- a/proxmox-notify/Cargo.toml
+++ b/proxmox-notify/Cargo.toml
@@ -31,7 +31,7 @@ proxmox-time.workspace = true
 
 [features]
 default = ["sendmail", "gotify", "smtp"]
-mail-forwarder = ["dep:mail-parser"]
+mail-forwarder = ["dep:mail-parser", "dep:proxmox-sys"]
 sendmail = ["dep:proxmox-sys"]
 gotify = ["dep:proxmox-http"]
 pve-context = ["dep:proxmox-sys"]
-- 
2.39.2





[pve-devel] [PATCH many v2 00/20] notifications: move template strings to template files; PBS preparations

2024-04-19 Thread Lukas Wagner
The notification system uses handlebar templates to render the subject
and the body of notifications. Previously, the template strings were
defined inline at the call site. This patch series extracts the templates
into template files and installs them at
  /usr/share/pve-manager/templates/default

where they are stored as <template-name>-{body,subject}.{txt,html}.hbs

The 'default' part in the path is a preparation for translated
notifications and/or overridable notification templates.
Future work could provide notifications translated to e.g. German
in `templates/de` or similar. This will be a first for having
translated strings on the backend-side, so there is need for further
research.

The patches for `proxmox` also include some preparatory patches for
the upcoming integration of the notification system into PBS. They
are not needed for PVE, but I included them here since Folke and I
tested the PVE changes with them applied. IMO they should just be
applied with the same version bump.
The list of optional, preparatory patches is:
  notify: give each notification a unique ID
  notify: api: add get_targets
  notify: derive `api` for Deleteable*Property
  notify: derive Deserialize/Serialize for Notification struct
  notify: pbs context: include nodename in default sendmail author
  notify: renderer: add relative-percentage helper from PBS

Folke kindly did some off-list testing before this was posted, hence
his T-bs were included.

Bumps/dependencies:
  - proxmox_notify
  - libproxmox-rs-perl/libpve-rs-perl (needs bumped proxmox_notify)
  - libpve-notify-perl  (needs bumped libproxmox-rs-perl/libpve-rs-perl)
  - pve-ha-manager (needs bumped libpve-notify-perl)
  - pve-manager (needs bumped libpve-notify-perl)
  - proxmox-mail-forward (optional; should not be affected by any changes,
but I think it should also be bumped for any larger proxmox_notify
changes to avoid any weird hidden regressions due to mismatches of
proxmox_notify)

Changes since v1:
  - Incorporated feedback from @Fiona and @Fabian - thanks!
- most noteworthy: Change template path from /usr/share/proxmox-ve
  to /usr/share/
- apart from that mostly just cosmetics/style

proxmox:

Lukas Wagner (12):
  notify: switch to file-based templating system
  notify: make api methods take config struct ownership
  notify: convert Option<Vec<T>> -> Vec<T> in config structs
  notify: don't make tests require pve-context
  notify: make the `mail-forwarder` feature depend on proxmox-sys
  notify: cargo.toml: add spaces before curly braces
  notify: give each notification a unique ID
  notify: api: add get_targets
  notify: derive `api` for Deleteable*Property
  notify: derive Deserialize/Serialize for Notification struct
  notify: pbs context: include nodename in default sendmail author
  notify: renderer: add relative-percentage helper from PBS

 proxmox-notify/Cargo.toml   |   7 +-
 proxmox-notify/examples/render.rs   |  63 --
 proxmox-notify/src/api/gotify.rs|  48 ++---
 proxmox-notify/src/api/matcher.rs   |  59 +++--
 proxmox-notify/src/api/mod.rs   | 113 --
 proxmox-notify/src/api/sendmail.rs  |  60 +++---
 proxmox-notify/src/api/smtp.rs  | 122 +--
 proxmox-notify/src/context/mod.rs   |  10 +-
 proxmox-notify/src/context/pbs.rs   |  18 +-
 proxmox-notify/src/context/pve.rs   |  15 ++
 proxmox-notify/src/context/test.rs  |   9 +
 proxmox-notify/src/endpoints/common/mail.rs |  20 +-
 proxmox-notify/src/endpoints/gotify.rs  |  12 +-
 proxmox-notify/src/endpoints/sendmail.rs|  34 +--
 proxmox-notify/src/endpoints/smtp.rs|  38 ++--
 proxmox-notify/src/lib.rs   |  59 ++---
 proxmox-notify/src/matcher.rs   |  71 +++---
 proxmox-notify/src/renderer/html.rs |  14 --
 proxmox-notify/src/renderer/mod.rs  | 226 
 proxmox-notify/src/renderer/plaintext.rs|  39 
 20 files changed, 506 insertions(+), 531 deletions(-)
 delete mode 100644 proxmox-notify/examples/render.rs


proxmox-perl-rs:

Lukas Wagner (3):
  notify: use file based notification templates
  notify: don't pass config structs by reference
  notify: adapt to Option<Vec<T>> -> Vec<T> changes in proxmox_notify

 common/src/notify.rs | 48 +---
 1 file changed, 23 insertions(+), 25 deletions(-)


pve-cluster:

Lukas Wagner (1):
  notify: use named template instead of passing template strings

 src/PVE/Notify.pm | 29 -
 1 file changed, 12 insertions(+), 17 deletions(-)


pve-ha-manager:

Lukas Wagner (1):
  env: notify: use named templates instead of passing template strings

 debian/pve-ha-manager.install |  3 +++
 src/Makefile  |  1 +
 src/PVE/HA/Env/PVE2.pm|  4 ++--
 src/PVE/HA/NodeStatus.pm  | 20 

Re: [pve-devel] [PATCH many 00/19] notifications: move template strings to template files; PBS preparations

2024-04-19 Thread Fiona Ebner
On 2024-04-19 at 12:09, Fiona Ebner wrote:
>>
> 
> Versioned breaks required:
> - new pve-cluster will break old pve-manager and old pve-ha-manager
> - new libproxmox-rs-perl/libpve-rs-perl will break old pve-cluster
> 

Since I got myself confused for a moment when re-installing and it never
hurts to avoid confusion: the relevant package is libpve-notify-perl
(which lives in the pve-cluster repo)


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager v5 08/16] api: notification: add API for getting known metadata fields/values

2024-04-19 Thread Lukas Wagner



On  2024-04-19 15:45, Fiona Ebner wrote:
> On 2024-04-15 at 10:26, Lukas Wagner wrote:
>> +
>> +__PACKAGE__->register_method ({
>> +name => 'get_field_values',
>> +path => 'values',
>> +method => 'GET',
>> +description => 'Returns known notification metadata fields and their 
>> known values',
>> +permissions => {
>> +check => ['or',
>> +['perm', '/mapping/notifications', ['Mapping.Modify']],
>> +['perm', '/mapping/notifications', ['Mapping.Audit']],
>> +],
>> +},
>> +protected => 1,
>> +parameters => {
>> +additionalProperties => 0,
>> +},
>> +returns => {
>> +type => 'array',
>> +items => {
>> +type => 'object',
>> +properties => {
>> +'value' => {
>> +description => 'Notification metadata value known by the 
>> system.',
>> +type => 'string'
>> +},
>> +'comment' => {
>> +description => 'Additional comment for this value.',
>> +type => 'string',
>> +optional => 1,
>> +},
>> +'field' => {
>> +description => 'Field this value belongs to.',
>> +type => 'string',
>> +optional => 1,
>> +},
>> +'internal' => {
>> +description => 'Set if "value" was generated by the system 
>> and can'
>> +   . ' safely be used as base for translations.',
>> +type => 'boolean',
>> +optional => 1,
>> +},
> 
> And wouldn't it be nicer to return already grouped by field? Then maybe
> we also don't really need the dedicated fields API endpoint either as
> those are just the top-level of the result (with empty array when there
> are no values so we don't ever miss any fields).
> 

The design of both endpoints was mostly driven by the intention
of keeping the ExtJS side as simple as possible.
Two comboboxes, each with their own api endpoint to fetch data from,
one setting a filter for the other one.
I tried using a single endpoint at first, but was quickly frustrated
by ExtJS and its documentation and settled for this approach as a consequence.

So I'd prefer to leave it as is :D

Regarding the 'internal' flag: Yes, you are right, right now we only need it 
for 'type'.
I'll leave it out then and handle everything in the UI.
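For reference, the grouped-by-field response shape Fiona proposed could be sketched like this. This is a hypothetical helper, not code from the series; the field names match the API discussion, but the example values are invented.

```rust
use std::collections::BTreeMap;

// Group known (field, value) pairs under their field, emitting an empty
// list for fields without known values so no field is ever missed.
fn group_by_field(
    fields: &[&str],
    pairs: &[(&str, &str)],
) -> BTreeMap<String, Vec<String>> {
    let mut grouped: BTreeMap<String, Vec<String>> =
        fields.iter().map(|f| (f.to_string(), Vec::new())).collect();
    for (field, value) in pairs {
        grouped.entry(field.to_string()).or_default().push(value.to_string());
    }
    grouped
}

fn main() {
    let grouped = group_by_field(
        &["type", "hostname", "backup-job"],
        &[("type", "vzdump"), ("type", "replication"), ("hostname", "pve1")],
    );
    for (field, values) in &grouped {
        println!("{field}: {values:?}");
    }
}
```

With this shape, the flat list's `field` key disappears and the separate fields endpoint would indeed be redundant, which is the trade-off against the two-combobox ExtJS setup discussed above.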

-- 
- Lukas





Re: [pve-devel] [PATCH manager v5 08/16] api: notification: add API for getting known metadata fields/values

2024-04-19 Thread Fiona Ebner
On 2024-04-15 at 10:26, Lukas Wagner wrote:
> +
> +__PACKAGE__->register_method ({
> +name => 'get_field_values',
> +path => 'values',
> +method => 'GET',
> +description => 'Returns known notification metadata fields and their 
> known values',
> +permissions => {
> + check => ['or',
> + ['perm', '/mapping/notifications', ['Mapping.Modify']],
> + ['perm', '/mapping/notifications', ['Mapping.Audit']],
> + ],
> +},
> +protected => 1,
> +parameters => {
> + additionalProperties => 0,
> +},
> +returns => {
> + type => 'array',
> + items => {
> + type => 'object',
> + properties => {
> + 'value' => {
> + description => 'Notification metadata value known by the 
> system.',
> + type => 'string'
> + },
> + 'comment' => {
> + description => 'Additional comment for this value.',
> + type => 'string',
> + optional => 1,
> + },
> + 'field' => {
> + description => 'Field this value belongs to.',
> + type => 'string',
> + optional => 1,
> + },
> + 'internal' => {
> + description => 'Set if "value" was generated by the system 
> and can'
> +. ' safely be used as base for translations.',
> + type => 'boolean',
> + optional => 1,
> + },

And wouldn't it be nicer to return already grouped by field? Then maybe
we also don't really need the dedicated fields API endpoint either as
those are just the top-level of the result (with empty array when there
are no values so we don't ever miss any fields).

> + },
> + },
> +},





Re: [pve-devel] [PATCH manager v5 05/16] api: replication: add 'replication-job' to notification metadata

2024-04-19 Thread Lukas Wagner



On  2024-04-19 15:11, Fiona Ebner wrote:
> On 2024-04-19 at 14:22, Lukas Wagner wrote:
>>
>>
>> On  2024-04-19 14:02, Fiona Ebner wrote:
>>> On 2024-04-15 at 10:26, Lukas Wagner wrote:
 This allows users to create notification match rules for specific
 replication jobs, if they so desire.

 Signed-off-by: Lukas Wagner 
 ---
  PVE/API2/Replication.pm | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm
 index 0dc944c9..703640f5 100644
 --- a/PVE/API2/Replication.pm
 +++ b/PVE/API2/Replication.pm
 @@ -140,8 +140,8 @@ my sub _handle_job_err {
  };
  
  my $metadata_fields = {
 -  # TODO: Add job-id?
type => "replication",
 +  "replication-job" => $job->{id},
  };
  
  eval {
>>>
>>> Not sure if we should use "replication-job" and "backup-job" for the
>>> metadata entries rather then just "job-id". The type is already
>>> something that can be matched, why re-do it implicitly with the field
>>> name? E.g. I want to see all jobs with -fiona- on the system, now I'd
>>> have to create a matcher rule for each job type.
>>
>> Had a 'job-id' field at first, but I *think* (can't be too sure after more 
>> than 
>> 4 months of not working on this) the reason why I changed it to this approach
>> were the replication job IDs, which look like '100-0' or similar.
>> Giving them and other job IDs a unique field made it a bit easier to
>> understand what is what when creating a matcher in the improved UI.
>>
>> For instance, if you just have 'job-id', the pre-filled combo box in the 
>> match-field edit UI might contain these values
>>
>>   - backup-gaasdgh7asdfg
>>   - 100-0
>>   - 101-0
>>
>> IMO it's a bit hard to understand that the last two are replication jobs. 
>> The separate
>> job fields make this easier.
> 
> We know that it either starts with "backup-" (or "realmsync-", should
> those get notifications), or is a replication job. So we could also just
> display something that indicates they are replication jobs (e.g.
> "replication-XYZ" or "XYZ (replication)"), until we turn replication
> jobs into proper jobs in the backend. Otherwise, each job type we add
> will just have a new matcher field for its ID.

Ok, fine with me. :)

-- 
- Lukas





Re: [pve-devel] [PATCH manager v5 05/16] api: replication: add 'replication-job' to notification metadata

2024-04-19 Thread Fiona Ebner
On 2024-04-19 at 14:22, Lukas Wagner wrote:
> 
> 
> On  2024-04-19 14:02, Fiona Ebner wrote:
>> On 2024-04-15 at 10:26, Lukas Wagner wrote:
>>> This allows users to create notification match rules for specific
>>> replication jobs, if they so desire.
>>>
>>> Signed-off-by: Lukas Wagner 
>>> ---
>>>  PVE/API2/Replication.pm | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm
>>> index 0dc944c9..703640f5 100644
>>> --- a/PVE/API2/Replication.pm
>>> +++ b/PVE/API2/Replication.pm
>>> @@ -140,8 +140,8 @@ my sub _handle_job_err {
>>>  };
>>>  
>>>  my $metadata_fields = {
>>> -   # TODO: Add job-id?
>>> type => "replication",
>>> +   "replication-job" => $job->{id},
>>>  };
>>>  
>>>  eval {
>>
>> Not sure if we should use "replication-job" and "backup-job" for the
>> metadata entries rather then just "job-id". The type is already
>> something that can be matched, why re-do it implicitly with the field
>> name? E.g. I want to see all jobs with -fiona- on the system, now I'd
>> have to create a matcher rule for each job type.
> 
> Had a 'job-id' field at first, but I *think* (can't be too sure after more 
> than 
> 4 months of not working on this) the reason why I changed it to this approach
> were the replication job IDs, which look like '100-0' or similar.
> Giving them and other job IDs a unique field made it a bit easier to
> understand what is what when creating a matcher in the improved UI.
> 
> For instance, if you just have 'job-id', the pre-filled combo box in the 
> match-field edit UI might contain these values
> 
>   - backup-gaasdgh7asdfg
>   - 100-0
>   - 101-0
> 
> IMO it's a bit hard to understand that the last two are replication jobs. The 
> separate
> job fields make this easier.

We know that it either starts with "backup-" (or "realmsync-", should
those get notifications), or is a replication job. So we could also just
display something that indicates they are replication jobs (e.g.
"replication-XYZ" or "XYZ (replication)"), until we turn replication
jobs into proper jobs in the backend. Otherwise, each job type we add
will just have a new matcher field for its ID.
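Fiona's display suggestion could be sketched as follows. This is a hypothetical UI helper, not code from the patch series; it only assumes the prefixes named in the discussion ("backup-", "realmsync-").

```rust
// Job IDs carrying no known type prefix are replication jobs (e.g. "100-0"),
// so the UI can label them instead of adding a per-type matcher field.
fn display_label(job_id: &str) -> String {
    let known_prefixes = ["backup-", "realmsync-"];
    if known_prefixes.iter().any(|p| job_id.starts_with(p)) {
        job_id.to_string()
    } else {
        format!("{job_id} (replication)")
    }
}

fn main() {
    for id in ["backup-gaasdgh7asdfg", "100-0", "101-0"] {
        println!("{}", display_label(id));
    }
}
```

This keeps a single `job-id` metadata field matchable across all job types while still making the combo-box entries self-explanatory.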





[pve-devel] [PATCH proxmox-firewall 2/2] firewall: improve systemd unit file

2024-04-19 Thread Stefan Hanreich
Explicitly mark the service as simple and remove the PIDFile
attribute, which doesn't do anything with simple services.

Signed-off-by: Stefan Hanreich 
---
 debian/proxmox-firewall.service | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/debian/proxmox-firewall.service b/debian/proxmox-firewall.service
index ad2324b..c2dc903 100644
--- a/debian/proxmox-firewall.service
+++ b/debian/proxmox-firewall.service
@@ -5,7 +5,7 @@ After=pvefw-logger.service pve-cluster.service network.target 
systemd-modules-lo
 
 [Service]
 ExecStart=/usr/libexec/proxmox/proxmox-firewall
-PIDFile=/run/proxmox-firewall.pid
+Type=simple
 Environment="RUST_LOG_STYLE=SYSTEMD"
 Environment="RUST_LOG=warn"
 
-- 
2.39.2





[pve-devel] [PATCH proxmox-firewall 1/2] firewall: wait for nft process

2024-04-19 Thread Stefan Hanreich
NftClient never waits for the child process to terminate, leading to
defunct (zombie) leftover processes.

Signed-off-by: Stefan Hanreich 
---
 proxmox-nftables/src/client.rs | 38 --
 1 file changed, 9 insertions(+), 29 deletions(-)

diff --git a/proxmox-nftables/src/client.rs b/proxmox-nftables/src/client.rs
index 69e464b..eaa3dd2 100644
--- a/proxmox-nftables/src/client.rs
+++ b/proxmox-nftables/src/client.rs
@@ -36,35 +36,15 @@ impl NftClient {
 return Err(NftError::from(error));
 };
 
-let mut error_output = String::new();
-
-match child
-.stderr
-.take()
-.expect("can get stderr")
-.read_to_string(&mut error_output)
-{
-Ok(_) if !error_output.is_empty() => {
-return Err(NftError::Command(error_output));
-}
-Err(error) => {
-return Err(NftError::from(error));
-}
-_ => (),
-};
-
-let mut output = String::new();
-
-if let Err(error) = child
-.stdout
-.take()
-.expect("can get stdout")
-.read_to_string(&mut output)
-{
-return Err(NftError::from(error));
-};
-
-Ok(output)
+let output = child.wait_with_output().map_err(NftError::from)?;
+
+if output.status.success() {
+Ok(String::from_utf8(output.stdout).expect("output is valid utf-8"))
+} else {
+Err(NftError::Command(
+String::from_utf8(output.stderr).expect("output is valid utf-8"),
+))
+}
 }
 
 pub fn run_json_commands(commands: ) -> 
Result, NftError> {
-- 
2.39.2
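The core of the fix — letting `wait_with_output()` both drain the captured pipes and reap the child — can be sketched with the standard library alone. This is a simplified stand-in for `NftClient`, using plain `String` errors and `echo` in place of `nft`; it assumes a Unix-like system with `echo` on PATH.

```rust
use std::process::{Command, Stdio};

// wait_with_output() reads the child's stdout/stderr to EOF *and* waits on
// the process, so no defunct (zombie) children are left behind — unlike
// reading the pipes manually and never calling wait().
fn run(program: &str, args: &[&str]) -> Result<String, String> {
    let child = Command::new(program)
        .args(args)
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .map_err(|e| e.to_string())?;

    let output = child.wait_with_output().map_err(|e| e.to_string())?;

    if output.status.success() {
        Ok(String::from_utf8_lossy(&output.stdout).into_owned())
    } else {
        Err(String::from_utf8_lossy(&output.stderr).into_owned())
    }
}

fn main() {
    match run("echo", &["hello"]) {
        Ok(out) => print!("{out}"),
        Err(err) => eprintln!("command failed: {err}"),
    }
}
```

A side benefit over the removed code: reading stderr to completion before stdout could block if the child fills the stdout pipe first, whereas `wait_with_output()` collects both streams concurrently.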





Re: [pve-devel] [PATCH manager v5 08/16] api: notification: add API for getting known metadata fields/values

2024-04-19 Thread Fiona Ebner
On 2024-04-15 at 10:26, Lukas Wagner wrote:
> This new API route returns known notification metadata fields and
> a list of known possible values. This will be used by the UI to
> provide suggestions when adding/modifying match rules.
> 
> Signed-off-by: Lukas Wagner 
> ---
>  PVE/API2/Cluster/Notifications.pm | 152 ++
>  1 file changed, 152 insertions(+)
> 
> diff --git a/PVE/API2/Cluster/Notifications.pm 
> b/PVE/API2/Cluster/Notifications.pm
> index 68fdda2a..16548bec 100644
> --- a/PVE/API2/Cluster/Notifications.pm
> +++ b/PVE/API2/Cluster/Notifications.pm
> @@ -79,12 +79,164 @@ __PACKAGE__->register_method ({
>   { name => 'endpoints' },
>   { name => 'matchers' },
>   { name => 'targets' },
> + { name => 'fields' },

matcher-fields ?

> + { name => 'values' },

(matcher-)field-values ?

>   ];
>  
>   return $result;
>  }
>  });
>  
> +__PACKAGE__->register_method ({
> +name => 'get_fields',
> +path => 'fields',
> +method => 'GET',
> +description => 'Returns known notification metadata fields and their 
> known values',

It does not return their values.

> +permissions => {
> + check => ['or',
> + ['perm', '/mapping/notifications', ['Mapping.Modify']],
> + ['perm', '/mapping/notifications', ['Mapping.Audit']],
> + ],
> +},
> +protected => 1,

No need for protected when all it's doing is returning static info.
(This would have the privileged pvedaemon execute the request).

> +parameters => {
> + additionalProperties => 0,
> + properties => {},
> +},
> +returns => {
> + type => 'array',
> + items => {
> + type => 'object',
> + properties => {
> + name => {
> + description => 'Name of the field.',
> + type => 'string',
> + },
> + },
> + },
> + links => [ { rel => 'child', href => '{name}' } ],
> +},
> +code => sub {
> + # TODO: Adapt this API handler once we have a 'notification registry'
> +
> + my $result = [
> + { name => 'type' },
> + { name => 'hostname' },
> + { name => 'backup-job' },
> + { name => 'replication-job' },
> + ];
> +
> + return $result;
> +}
> +});
> +
> +__PACKAGE__->register_method ({
> +name => 'get_field_values',
> +path => 'values',
> +method => 'GET',
> +description => 'Returns known notification metadata fields and their 
> known values',
> +permissions => {
> + check => ['or',
> + ['perm', '/mapping/notifications', ['Mapping.Modify']],
> + ['perm', '/mapping/notifications', ['Mapping.Audit']],
> + ],
> +},
> +protected => 1,
> +parameters => {
> + additionalProperties => 0,
> +},
> +returns => {
> + type => 'array',
> + items => {
> + type => 'object',
> + properties => {
> + 'value' => {
> + description => 'Notification metadata value known by the 
> system.',
> + type => 'string'
> + },
> + 'comment' => {
> + description => 'Additional comment for this value.',
> + type => 'string',
> + optional => 1,
> + },
> + 'field' => {
> + description => 'Field this value belongs to.',
> + type => 'string',
> + optional => 1,

Why is the field optional? All values are associated to a field below.

> + },
> + 'internal' => {
> + description => 'Set if "value" was generated by the system 
> and can'
> +. ' safely be used as base for translations.',
> + type => 'boolean',
> + optional => 1,
> + },

Hmm, not sure about this one. It's only used for the type field right?
Can't we just handle that specially in the UI?





[pve-devel] [PATCH docs v3 2/2] qm: resource mapping: document `live-migration-capable` setting

2024-04-19 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 qm.adoc | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 3f4e59a..3ab5270 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1814,6 +1814,12 @@ Currently there are the following options:
   the mapping, the mediated device will be create on the first one where
   there are any available instances of the selected type.
 
+* `live-migration-capable` (PCI): This marks the device as being capable of
+  being live migrated between nodes. This requires driver and hardware support.
+  Only NVIDIA GPUs with a recent kernel are known to support this. Note that
+  live-migrating passed-through devices is an experimental feature and may
+  not work or may cause issues.
+
 
 Managing Virtual Machines with `qm`
 
-- 
2.39.2






Re: [pve-devel] [PATCH proxmox 07/19] notify: api: add get_targets

2024-04-19 Thread Lukas Wagner



On  2024-04-19 10:34, Fiona Ebner wrote:
> On 2024-04-09 at 15:25, Lukas Wagner wrote:
>> +/// Get a list of all notification targets.
>> +pub fn get_targets(config: ) -> Result, HttpError> {
>> +let mut targets = Vec::new();
>> +
>> +#[cfg(feature = "gotify")]
>> +for endpoint in gotify::get_endpoints(config)? {
>> +targets.push(Target {
>> +name: endpoint.name,
>> +origin: endpoint.origin.unwrap_or(Origin::UserCreated),
>> +endpoint_type: EndpointType::Gotify,
>> +disable: endpoint.disable,
>> +comment: endpoint.comment,
>> +})
> 
> Would it make sense to have into() functions for
> {Gotify,Sendmail,Smtp}Config -> Target ?

That would indeed be a bit nicer - but I'll do that in a follow-up, since
this is completely internal to proxmox-notify and is more 'style' than an
actual issue :)

Thanks!

-- 
- Lukas





[pve-devel] [PATCH qemu-server v3 10/10] api: include not mapped resources for running vms in migrate preconditions

2024-04-19 Thread Dominik Csapak
so that we can show a proper warning in the migrate dialog and check it
in the bulk migrate precondition check

the unavailable_storages result should be the same as before, but
we now always return allowed_nodes too.

also add a note that we want to redesign the return values here, to make
* the api call simpler
* return better structured values

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Qemu.pm | 39 ++-
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f95d8d95..94aa9942 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4450,18 +4450,20 @@ __PACKAGE__->register_method({
},
 },
 returns => {
+   # TODO 9.x: rework the api call to return more sensible structures
+   # e.g. a simple list of nodes with their blockers and/or notices to show
type => "object",
properties => {
running => { type => 'boolean' },
allowed_nodes => {
type => 'array',
optional => 1,
-   description => "List nodes allowed for offline migration, only passed if VM is offline"
+   description => "List nodes allowed for offline migration.",
},
not_allowed_nodes => {
type => 'object',
optional => 1,
-   description => "List not allowed nodes with additional informations, only passed if VM is offline"
+   description => "List not allowed nodes with additional information.",
},
local_disks => {
type => 'array',
@@ -4518,28 +4520,31 @@ __PACKAGE__->register_method({
 
# if vm is not running, return target nodes where local storage/mapped 
devices are available
# for offline migration
+   $res->{allowed_nodes} = [];
+   $res->{not_allowed_nodes} = {};
if (!$res->{running}) {
-   $res->{allowed_nodes} = [];
-   my $checked_nodes = 
PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
-   delete $checked_nodes->{$localnode};
-
-   foreach my $node (keys %$checked_nodes) {
-   my $missing_mappings = $missing_mappings_by_node->{$node};
-   if (scalar($missing_mappings->@*)) {
-   $checked_nodes->{$node}->{'unavailable-resources'} = 
$missing_mappings;
-   next;
-   }
+   $res->{not_allowed_nodes} = 
PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
+   delete $res->{not_allowed_nodes}->{$localnode};
+   }
 
-   if (!defined($checked_nodes->{$node}->{unavailable_storages})) {
-   push @{$res->{allowed_nodes}}, $node;
-   }
+   my $merged = { $res->{not_allowed_nodes}->%*, 
$missing_mappings_by_node->%* };
 
+   for my $node (keys $merged->%*) {
+   my $missing_mappings = $missing_mappings_by_node->{$node};
+   if (scalar($missing_mappings->@*)) {
+   $res->{not_allowed_nodes}->{$node}->{'unavailable-resources'} = 
$missing_mappings;
+   next;
+   }
+
+   if (!$res->{running}) {
+   if 
(!defined($res->{not_allowed_nodes}->{$node}->{unavailable_storages})) {
+   push $res->{allowed_nodes}->@*, $node;
+   }
}
-   $res->{not_allowed_nodes} = $checked_nodes;
}
 
my $local_disks = &$check_vm_disks_local($storecfg, $vmconf, $vmid);
-   $res->{local_disks} = [ values %$local_disks ];;
+   $res->{local_disks} = [ values %$local_disks ];
 
$res->{local_resources} = $local_resources;
$res->{'mapped-resources'} = [ keys $mapped_resources->%* ];
-- 
2.39.2
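The "simple list of nodes with their blockers" return shape proposed in the TODO comment could be sketched like this. These are hypothetical types for illustration, not the current API schema; field names are invented.

```rust
// One entry per candidate target node; an empty blocker list means
// migration to that node is allowed.
#[derive(Debug)]
struct NodePrecondition {
    node: String,
    blockers: Vec<String>,
}

// The "allowed_nodes" view falls out of the list instead of being a
// separately maintained structure.
fn allowed_nodes(nodes: &[NodePrecondition]) -> Vec<&str> {
    nodes
        .iter()
        .filter(|n| n.blockers.is_empty())
        .map(|n| n.node.as_str())
        .collect()
}

fn main() {
    let nodes = vec![
        NodePrecondition { node: "pve2".into(), blockers: vec![] },
        NodePrecondition {
            node: "pve3".into(),
            blockers: vec!["unavailable-resources: hostpci0".into()],
        },
    ];
    println!("{:?}", allowed_nodes(&nodes)); // prints ["pve2"]
}
```

Such a shape would avoid the merge-and-delete bookkeeping across `allowed_nodes`, `not_allowed_nodes`, and the per-node mapping info seen in the patch above.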






[pve-devel] [PATCH manager v3 1/5] mapping: pci: include mdev in config checks

2024-04-19 Thread Dominik Csapak
by also providing the global config in assert_valid, and by also
adding the mdev config in the 'toCheck' object in the gui

For the GUI, we extract the mdev property from the global entry and add
it to the individual mapping entries; that way we can reuse the checking
logic of the other properties.

Signed-off-by: Dominik Csapak 
---
changes from v2:
* now correctly check the mdev property
  expected is from 'device' and configured one is from the record
 PVE/API2/Cluster/Mapping/PCI.pm | 2 +-
 www/manager6/dc/PCIMapView.js   | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Cluster/Mapping/PCI.pm b/PVE/API2/Cluster/Mapping/PCI.pm
index 64462d25..f85623bb 100644
--- a/PVE/API2/Cluster/Mapping/PCI.pm
+++ b/PVE/API2/Cluster/Mapping/PCI.pm
@@ -115,7 +115,7 @@ __PACKAGE__->register_method ({
};
}
for my $mapping ($mappings->@*) {
-   eval { PVE::Mapping::PCI::assert_valid($id, $mapping) };
+   eval { PVE::Mapping::PCI::assert_valid($id, $mapping, 
$entry) };
if (my $err = $@) {
push $entry->{checks}->@*, {
severity => 'error',
diff --git a/www/manager6/dc/PCIMapView.js b/www/manager6/dc/PCIMapView.js
index 80fe3c0f..3240590c 100644
--- a/www/manager6/dc/PCIMapView.js
+++ b/www/manager6/dc/PCIMapView.js
@@ -20,10 +20,15 @@ Ext.define('PVE.dc.PCIMapView', {
data.forEach((entry) => {
ids[entry.id] = entry;
});
+   let mdev;
me.getRootNode()?.cascade(function(rec) {
+   if (rec.data.type === 'entry') {
+   mdev = rec.data.mdev;
+   }
if (rec.data.node !== node || rec.data.type !== 'map') {
return;
}
+   rec.data.mdev = mdev;
 
let id = rec.data.path;
if (!id.match(/\.\d$/)) {
@@ -44,6 +49,7 @@ Ext.define('PVE.dc.PCIMapView', {
let toCheck = {
id: deviceId,
'subsystem-id': subId,
+   mdev: device.mdev,
iommugroup: device.iommugroup !== -1 ? device.iommugroup : 
undefined,
};
 
-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 05/10] vm_stop_cleanup: add noerr parameter

2024-04-19 Thread Dominik Csapak
and set it on all current users

Signed-off-by: Dominik Csapak 
---
 PVE/CLI/qm.pm |  2 +-
 PVE/QemuServer.pm | 13 -
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b105830f..fbc590f5 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -933,7 +933,7 @@ __PACKAGE__->register_method({
 
if (!$clean || $guest) {
# vm was shutdown from inside the guest or crashed, doing api 
cleanup
-   PVE::QemuServer::vm_stop_cleanup($storecfg, $vmid, $conf, 0, 0);
+   PVE::QemuServer::vm_stop_cleanup($storecfg, $vmid, $conf, 0, 0, 
1);
}
PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-stop');
 
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 680c36f2..5ab52e7a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6165,7 +6165,7 @@ sub cleanup_pci_devices {
 }
 
 sub vm_stop_cleanup {
-my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes) = @_;
+my ($storecfg, $vmid, $conf, $keepActive, $apply_pending_changes, $noerr) 
= @_;
 
 eval {
 
@@ -6191,7 +6191,10 @@ sub vm_stop_cleanup {
 
vmconfig_apply_pending($vmid, $conf, $storecfg) if 
$apply_pending_changes;
 };
-warn $@ if $@; # avoid errors - just warn
+if (my $err = $@) {
+   die $err if !$noerr;
+   warn $err;
+}
 }
 
 # call only in locked context
@@ -6242,7 +6245,7 @@ sub _do_vm_stop {
die "VM quit/powerdown failed - got timeout\n";
}
} else {
-   vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1) if $conf;
+   vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1, 1) if 
$conf;
return;
}
 } else {
@@ -6273,7 +6276,7 @@ sub _do_vm_stop {
sleep 1;
 }
 
-vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1) if $conf;
+vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 1, 1) if $conf;
 }
 
 # Note: use $nocheck to skip tests if VM configuration file exists.
@@ -6288,7 +6291,7 @@ sub vm_stop {
my $pid = check_running($vmid, $nocheck, $migratedfrom);
kill 15, $pid if $pid;
my $conf = PVE::QemuConfig->load_config($vmid, $migratedfrom);
-   vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 0);
+   vm_stop_cleanup($storecfg, $vmid, $conf, $keepActive, 0, 1);
return;
 }
 
-- 
2.39.2






[pve-devel] [PATCH manager v3 3/5] bulk migrate: include checks for live-migratable local resources

2024-04-19 Thread Dominik Csapak
Those should be able to be migrated even for online VMs. If the mapping does
not exist on the target node, that will be caught further down anyway.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Nodes.pm | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 882d7301..35e7047c 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2255,9 +2255,18 @@ my $create_migrate_worker = sub {
$invalidConditions .= join(', ', map { $_->{volid} } 
@{$preconditions->{local_disks}});
}
 
+   # for a live migration all local_resources must be marked as 
live-migratable
if ($online && scalar($preconditions->{local_resources}->@*)) {
-   $invalidConditions .= "\n  Has local resources: ";
-   $invalidConditions .= join(', ', 
@{$preconditions->{local_resources}});
+   my $resource_not_live = [];
+   for my $resource ($preconditions->{local_resources}->@*) {
+   next if 
$preconditions->{'mapped-resource-info'}->{$resource}->{'live-migration'};
+   push $resource_not_live->@*, $resource;
+   }
+
+   if (scalar($resource_not_live->@*)) {
+   $invalidConditions .= "\n  Has local resources not marked as 
live migratable: ";
+   $invalidConditions .= join(', ', $resource_not_live->@*);
+   }
}
 
if (my $not_allowed_nodes = $preconditions->{not_allowed_nodes}) {
-- 
2.39.2






Re: [pve-devel] [PATCH proxmox 09/19] notify: derive Deserialize/Serialize for Notification struct

2024-04-19 Thread Lukas Wagner



On  2024-04-19 10:45, Fiona Ebner wrote:
> Nit: I always like a quick sentence for who needs it for such changes.
> 
> On 2024-04-09 at 15:25, Lukas Wagner wrote:
>> Signed-off-by: Lukas Wagner 
>> ---
>>  proxmox-notify/src/lib.rs | 10 +++---
>>  1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/proxmox-notify/src/lib.rs b/proxmox-notify/src/lib.rs
>> index 91c0b61..8d4dc63 100644
>> --- a/proxmox-notify/src/lib.rs
>> +++ b/proxmox-notify/src/lib.rs
>> @@ -159,11 +159,13 @@ pub trait Endpoint {
>>  fn disabled() -> bool;
>>  }
>>  
>> -#[derive(Debug, Clone)]
>> +#[derive(Debug, Clone, Serialize, Deserialize)]
>> +#[serde(rename_all = "kebab-case")]
>>  pub enum Content {
>>  /// Title and body will be rendered as a template
>>  Template {
>>  /// Name of the used template
>> +#[serde(rename = "template-name")]
> 
> So I guess this is here, because the rename_all above is not recursive?
> Should we put rename_all on top of Template and ForwardedMail (if that
> even works), so we are sure not to forget it for potential future fields?
> 

Yup, rename_all is not recursive. Added a rename_all for Template and 
ForwardedMail,
this makes more sense.

Thanks!

-- 
- Lukas





[pve-devel] [PATCH qemu-server v3 04/10] stop cleanup: remove unnecessary tpmstate cleanup

2024-04-19 Thread Dominik Csapak
tpmstate0 is already included in `get_vm_volumes`, and our only storage
plugin that has unmap_volume implemented is the RBDPlugin, where we call
unmap in `deactivate_volume`. So it's already unmapped by the
`deactivate_volumes` calls above.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer.pm | 8 
 1 file changed, 8 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 28e630d3..680c36f2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6172,14 +6172,6 @@ sub vm_stop_cleanup {
if (!$keepActive) {
my $vollist = get_vm_volumes($conf);
PVE::Storage::deactivate_volumes($storecfg, $vollist);
-
-   if (my $tpmdrive = $conf->{tpmstate0}) {
-   my $tpm = parse_drive("tpmstate0", $tpmdrive);
-   my ($storeid, $volname) = 
PVE::Storage::parse_volume_id($tpm->{file}, 1);
-   if ($storeid) {
-   PVE::Storage::unmap_volume($storecfg, $tpm->{file});
-   }
-   }
}
 
foreach my $ext (qw(mon qmp pid vnc qga)) {
-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 03/10] pci: mapping: check mdev config against hardware

2024-04-19 Thread Dominik Csapak
by giving the mapping config to assert_valid, not only the specific mapping

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/PCI.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 7ff9cad7..6ba43ee8 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -436,7 +436,7 @@ sub parse_hostpci {
die "PCI device mapping not found for '$mapping'\n" if !$devices || 
!scalar($devices->@*);
 
for my $device ($devices->@*) {
-   eval { PVE::Mapping::PCI::assert_valid($mapping, $device) };
+   eval { PVE::Mapping::PCI::assert_valid($mapping, $device, 
$config->{ids}->{$mapping}) };
die "PCI device mapping invalid (hardware probably changed): $@\n" 
if $@;
push $alternatives->@*, [split(/;/, $device->{path})];
}
-- 
2.39.2






[pve-devel] [PATCH guest-common v3 4/4] mapping: remove find_on_current_node

2024-04-19 Thread Dominik Csapak
they only have one user each (where we can inline the implementation).
It's easy enough to recreate should we need to.

Signed-off-by: Dominik Csapak 
---
 src/PVE/Mapping/PCI.pm | 10 --
 src/PVE/Mapping/USB.pm |  9 -
 2 files changed, 19 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index b525c07..9c2b8b2 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -214,16 +214,6 @@ sub write_pci_config {
 cfs_write_file($FILENAME, $cfg);
 }
 
-sub find_on_current_node {
-my ($id) = @_;
-
-my $cfg = PVE::Mapping::PCI::config();
-my $node = PVE::INotify::nodename();
-
-# ignore errors
-return get_node_mapping($cfg, $id, $node);
-}
-
 sub get_node_mapping {
 my ($cfg, $id, $nodename) = @_;
 
diff --git a/src/PVE/Mapping/USB.pm b/src/PVE/Mapping/USB.pm
index 34bc3cb..6dd15c4 100644
--- a/src/PVE/Mapping/USB.pm
+++ b/src/PVE/Mapping/USB.pm
@@ -155,15 +155,6 @@ sub write_usb_config {
 cfs_write_file($FILENAME, $cfg);
 }
 
-sub find_on_current_node {
-my ($id) = @_;
-
-my $cfg = config();
-my $node = PVE::INotify::nodename();
-
-return get_node_mapping($cfg, $id, $node);
-}
-
 sub get_node_mapping {
 my ($cfg, $id, $nodename) = @_;
 
-- 
2.39.2






[pve-devel] [PATCH guest-common v3 2/4] mapping: pci: check the mdev configuration on the device too

2024-04-19 Thread Dominik Csapak
but that lives in the 'global' part of the mapping config, not in a
specific mapping. To check that, add it to the $configured_props from
there.

This requires all call sites to be adapted, otherwise the check will
always fail for devices that are capable of mediated devices.

Signed-off-by: Dominik Csapak 
---
changes from v2:
* adapt to changes of previous patch
 src/PVE/Mapping/PCI.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index ef1bd8d..b412c4d 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -131,7 +131,7 @@ sub options {
 
 # checks if the given config is valid for the current node
 sub assert_valid {
-my ($name, $mapping) = @_;
+my ($name, $mapping, $cfg) = @_;
 
 my @paths = split(';', $mapping->{path} // '');
 
@@ -151,6 +151,7 @@ sub assert_valid {
my $expected_props = {
id => "$info->{vendor}:$info->{device}",
iommugroup => $info->{iommugroup},
+   mdev => $info->{mdev},
};
 
if (defined($info->{'subsystem_vendor'}) && 
defined($info->{'subsystem_device'})) {
@@ -158,6 +159,8 @@ sub assert_valid {
}
 
my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
+   # check mdev from global mapping config
+   $configured_props->{mdev} = $cfg->{mdev};
 
my $merged = { %$expected_props, %$configured_props }; # just for the 
keys
for my $prop (sort keys %$merged) {
-- 
2.39.2
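To make the effect of this check easier to follow, here is a minimal Python sketch (illustrative only — the authoritative implementation is the Perl `assert_valid()` in the diff above, and the probed values below are hypothetical): properties probed from hardware are compared against configured ones over the union of their keys, with `mdev` taken from the global mapping section rather than the per-node mapping.

```python
# Sketch of the reworked property check: iterate over the union of
# expected (hardware-probed) and configured keys, so that extra
# configured properties are caught as well, not only the expected ones.
def assert_valid_props(expected, configured):
    for prop in sorted(set(expected) | set(configured)):
        exp = expected.get(prop)
        got = configured.get(prop)
        if exp is None and got is None:
            continue
        if exp is not None and got is None:
            raise ValueError(f"missing expected property '{prop}'")
        if exp is None and got is not None:
            raise ValueError(f"'{prop}' configured but should not be")

# hypothetical probed device info and mapping entry
expected = {'id': '10de:1eb8', 'iommugroup': 4, 'mdev': 1}
mapping = {'id': '10de:1eb8', 'iommugroup': 4}
global_section = {'mdev': 1}  # lives next to the per-node 'map' entries

# extract only the relevant configured properties from the mapping
configured = {k: mapping[k] for k in ('id', 'iommugroup', 'subsystem-id') if k in mapping}
configured['mdev'] = global_section.get('mdev')  # mdev comes from the global config
assert_valid_props(expected, configured)  # passes
```

If `mdev` were only probed from hardware but not set in the global section, the check would now fail — which is exactly why all call sites have to pass the global config along.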






[pve-devel] [PATCH manager v3 5/5] fix #5175: ui: allow configuring and live migration of mapped pci resources

2024-04-19 Thread Dominik Csapak
if the hardware/driver is capable, the admin can now mark a pci device
as 'live-migration-capable', which then tries enabling live migration
for such devices.

mark it as experimental when configuring and in the migrate window

Signed-off-by: Dominik Csapak 
---
 www/manager6/window/Migrate.js| 25 -
 www/manager6/window/PCIMapEdit.js | 12 
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index c553a1ee..272ab022 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -244,10 +244,10 @@ Ext.define('PVE.window.Migrate', {
}
 
let blockingResources = [];
-   let mappedResources = migrateStats['mapped-resources'] ?? [];
+   let mappedResources = migrateStats['mapped-resource-info'] ?? {};
 
for (const res of migrateStats.local_resources) {
-   if (mappedResources.indexOf(res) === -1) {
+   if (!mappedResources[res]) {
blockingResources.push(res);
}
}
@@ -271,14 +271,29 @@ Ext.define('PVE.window.Migrate', {
}
}
 
-   if (mappedResources && mappedResources.length) {
-   if (vm.get('running')) {
+   if (mappedResources && vm.get('running')) {
+   let allowed = [];
+   let notAllowed = [];
+   for (const [key, resource] of Object.entries(mappedResources)) {
+   if (resource['live-migration']) {
+   allowed.push(key);
+   } else {
+   notAllowed.push(key);
+   }
+   }
+   if (notAllowed.length > 0) {
migration.possible = false;
migration.preconditions.push({
text: Ext.String.format('Can\'t migrate running VM with 
mapped resources: {0}',
-   mappedResources.join(', ')),
+   notAllowed.join(', ')),
severity: 'error',
});
+   } else if (allowed.length > 0) {
+   migration.preconditions.push({
+   text: Ext.String.format('Live-migrating running VM with 
mapped resources (Experimental): {0}',
+   allowed.join(', ')),
+   severity: 'warning',
+   });
}
}
 
diff --git a/www/manager6/window/PCIMapEdit.js 
b/www/manager6/window/PCIMapEdit.js
index d43f04eb..731269a0 100644
--- a/www/manager6/window/PCIMapEdit.js
+++ b/www/manager6/window/PCIMapEdit.js
@@ -242,6 +242,18 @@ Ext.define('PVE.window.PCIMapEditWindow', {
disabled: '{hideComment}',
},
},
+   {
+   xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Live Migration Capable'),
+   labelWidth: 200,
+   boxLabel: ` ${gettext('Experimental')}`,
+   reference: 'live-migration-capable',
+   name: 'live-migration-capable',
+   cbind: {
+   deleteEmpty: '{!isCreate}',
+   disabled: '{hideComment}',
+   },
+   },
],
 
columnB: [
-- 
2.39.2






[pve-devel] [PATCH guest-common v3 3/4] mapping: pci: add 'live-migration-capable' flag to mappings

2024-04-19 Thread Dominik Csapak
so that we can decide in qemu-server to allow live-migration.
The driver and QEMU must be capable of that, and it's the
admin's responsibility to know and configure that

Mark the option as experimental in the description.

Signed-off-by: Dominik Csapak 
---
 src/PVE/Mapping/PCI.pm | 8 
 1 file changed, 8 insertions(+)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index b412c4d..b525c07 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -105,6 +105,13 @@ my $defaultData = {
optional => 1,
default => 0,
},
+   'live-migration-capable' => {
+   description => "Marks the device(s) as being able to be 
live-migrated (Experimental)."
+   ." This needs hardware and driver support to work.",
+   type => 'boolean',
+   optional => 1,
+   default => 0,
+   },
map => {
type => 'array',
description => 'A list of maps for the cluster nodes.',
@@ -125,6 +132,7 @@ sub options {
 return {
description => { optional => 1 },
mdev => { optional => 1 },
+   'live-migration-capable' => { optional => 1 },
map => {},
 };
 }
-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 09/10] api: enable live migration for marked mapped pci devices

2024-04-19 Thread Dominik Csapak
They have to be marked as 'live-migration-capable' in the mapping
config, and the driver and qemu must support it.

For the gui checks, we now return the whole object of the mapped
resources, which includes info like the name and whether it's marked as
live-migration capable (while deprecating the old 'mapped-resources'
return value, since that returns strictly less information).

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Qemu.pm   |  7 ++-
 PVE/QemuMigrate.pm | 13 +
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index f2fa345d..f95d8d95 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4473,7 +4473,11 @@ __PACKAGE__->register_method({
},
'mapped-resources' => {
type => 'array',
-   description => "List of mapped resources e.g. pci, usb"
+   description => "List of mapped resources e.g. pci, usb. 
Deprecated, use 'mapped-resource-info' instead."
+   },
+   'mapped-resource-info' => {
+   type => 'object',
+   description => "Object of mapped resources with additional 
information such if they're live migratable.",
},
},
 },
@@ -4539,6 +4543,7 @@ __PACKAGE__->register_method({
 
$res->{local_resources} = $local_resources;
$res->{'mapped-resources'} = [ keys $mapped_resources->%* ];
+   $res->{'mapped-resource-info'} = $mapped_resources;
 
return $res;
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a46eb2a3..89caefc7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -247,11 +247,16 @@ sub prepare {
 
 if (scalar(keys $mapped_res->%*)) {
my $missing_mappings = $missing_mappings_by_node->{$self->{node}};
-   my $mapped_text = join(", ", keys $mapped_res->%*);
-   if ($running) {
-   die "can't migrate running VM which uses mapped devices: 
$mapped_text\n";
-   } elsif (scalar($missing_mappings->@*)) {
+   my $missing_live_mappings = [];
+   for my $key (keys $mapped_res->%*) {
+   my $res = $mapped_res->{$key};
+   my $name = "$key:$res->{name}";
+   push $missing_live_mappings->@*, $name if !$res->{'live-migration'};
+   }
+   if (scalar($missing_mappings->@*)) {
die "can't migrate to '$self->{node}': missing mapped devices " . 
join(", ", $missing_mappings->@*) . "\n";
+   } elsif ($running && scalar($missing_live_mappings->@*)) {
+   die "can't live migrate running VM which uses following mapped 
devices: " . join(", ", $missing_live_mappings->@*) . "\n";
} else {
$self->log('info', "migrating VM which uses mapped local devices");
}
-- 
2.39.2
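The updated precondition logic in `prepare()` can be sketched in Python (illustrative only — the real code is the Perl hunk above, and the function and message wording here are simplified): missing target-node mappings always block migration, and a running VM additionally requires every mapped device to be marked live-migration capable.

```python
# mapped_res: dict like {'hostpci0': {'name': 'gpu', 'live-migration': 1}}
# missing_mappings: list of mappings not defined on the target node
def check_mapped(mapped_res, missing_mappings, running):
    # collect mapped devices that are NOT marked live-migration capable
    missing_live = [
        f"{key}:{res['name']}"
        for key, res in sorted(mapped_res.items())
        if not res.get('live-migration')
    ]
    if missing_mappings:
        raise RuntimeError("missing mapped devices: " + ", ".join(missing_mappings))
    if running and missing_live:
        raise RuntimeError(
            "can't live migrate running VM which uses mapped devices: "
            + ", ".join(missing_live))
    return "migrating VM which uses mapped local devices"
```

Note the ordering mirrors the patch: the missing-mapping check fires first, and the live-migration check only applies to running VMs; offline migration of mapped devices stays allowed as before.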






[pve-devel] [PATCH manager v3 4/5] ui: adapt migration window to precondition api change

2024-04-19 Thread Dominik Csapak
We now return the 'allowed_nodes'/'not_allowed_nodes' also if the VM is
running, when it has mapped resources. So do those checks independently,
so that the user gets instant feedback on where those resources exist.

Signed-off-by: Dominik Csapak 
---
 www/manager6/window/Migrate.js | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/www/manager6/window/Migrate.js b/www/manager6/window/Migrate.js
index 5473821b..c553a1ee 100644
--- a/www/manager6/window/Migrate.js
+++ b/www/manager6/window/Migrate.js
@@ -104,7 +104,7 @@ Ext.define('PVE.window.Migrate', {
onTargetChange: function(nodeSelector) {
	// Always display the storages of the currently selected migration target

this.lookup('pveDiskStorageSelector').setNodename(nodeSelector.value);
-   this.checkMigratePreconditions();
+   this.checkMigratePreconditions(true);
},
 
startMigration: function() {
@@ -214,12 +214,12 @@ Ext.define('PVE.window.Migrate', {
migration.possible = true;
}
migration.preconditions = [];
+   let target = me.lookup('pveNodeSelector').value;
+   let disallowed = migrateStats.not_allowed_nodes?.[target] ?? {};
 
-   if (migrateStats.allowed_nodes) {
+   if (migrateStats.allowed_nodes && !vm.get('running')) {
migration.allowedNodes = migrateStats.allowed_nodes;
-   let target = me.lookup('pveNodeSelector').value;
if (target.length && 
!migrateStats.allowed_nodes.includes(target)) {
-   let disallowed = migrateStats.not_allowed_nodes[target] ?? 
{};
if (disallowed.unavailable_storages !== undefined) {
let missingStorages = 
disallowed.unavailable_storages.join(', ');
 
@@ -230,17 +230,17 @@ Ext.define('PVE.window.Migrate', {
severity: 'error',
});
}
+   }
+   }
 
-   if (disallowed['unavailable-resources'] !== undefined) {
-   let unavailableResources = 
disallowed['unavailable-resources'].join(', ');
+   if (disallowed['unavailable-resources'] !== undefined) {
+   let unavailableResources = 
disallowed['unavailable-resources'].join(', ');
 
-   migration.possible = false;
-   migration.preconditions.push({
-   text: 'Mapped Resources (' + unavailableResources + 
') not available on selected target. ',
-   severity: 'error',
-   });
-   }
-   }
+   migration.possible = false;
+   migration.preconditions.push({
+   text: 'Mapped Resources (' + unavailableResources + ') not 
available on selected target. ',
+   severity: 'error',
+   });
}
 
let blockingResources = [];
-- 
2.39.2






[pve-devel] [PATCH manager v3 2/5] bulk migrate: improve precondition checks

2024-04-19 Thread Dominik Csapak
this now takes into account the 'not_allowed_nodes' hash we get from the
api call. With that, we can now limit the 'local_resources' check for
online vms only, as for offline guests, the 'unavailable-resources' hash
already includes mapped devices that don't exist on the target node.

This now also includes unavailable storages on target nodes.

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Nodes.pm | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index fb4fd4d6..882d7301 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -2255,11 +2255,23 @@ my $create_migrate_worker = sub {
$invalidConditions .= join(', ', map { $_->{volid} } 
@{$preconditions->{local_disks}});
}
 
-   if (@{$preconditions->{local_resources}}) {
+   if ($online && scalar($preconditions->{local_resources}->@*)) {
$invalidConditions .= "\n  Has local resources: ";
$invalidConditions .= join(', ', 
@{$preconditions->{local_resources}});
}
 
+   if (my $not_allowed_nodes = $preconditions->{not_allowed_nodes}) {
+   if (my $unavail_storages = 
$not_allowed_nodes->{$target}->{unavailable_storages}) {
+   $invalidConditions .= "\n  Has unavailable storages: ";
+   $invalidConditions .= join(', ', $unavail_storages->@*);
+   }
+
+   if (my $unavail_resources = 
$not_allowed_nodes->{$target}->{'unavailable-resources'}) {
$invalidConditions .= "\n  Has unavailable resources: ";
+   $invalidConditions .= join(', ', $unavail_resources->@*);
+   }
+   }
+
if ($invalidConditions && $invalidConditions ne '') {
print STDERR "skip VM $vmid - precondition check failed:";
die "$invalidConditions\n";
-- 
2.39.2
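A hedged sketch of the extended bulk-migrate precondition handling (the field names mirror the migrate preconditions API — 'not_allowed_nodes', 'unavailable_storages', 'unavailable-resources' — while the function itself is illustrative, not the actual Perl):

```python
# Collect human-readable reasons why a VM cannot be bulk-migrated to
# 'target'. local_resources only blocks online migration, since for
# offline guests 'unavailable-resources' already covers mapped devices
# missing on the target node.
def invalid_conditions(preconditions, target, online):
    msgs = []
    if online and preconditions.get('local_resources'):
        msgs.append("Has local resources: " + ", ".join(preconditions['local_resources']))
    per_node = (preconditions.get('not_allowed_nodes') or {}).get(target) or {}
    if per_node.get('unavailable_storages'):
        msgs.append("Has unavailable storages: " + ", ".join(per_node['unavailable_storages']))
    if per_node.get('unavailable-resources'):
        msgs.append("Has unavailable resources: " + ", ".join(per_node['unavailable-resources']))
    return msgs
```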






[pve-devel] [PATCH qemu-server v3 08/10] check_local_resources: add more info per mapped device and return as hash

2024-04-19 Thread Dominik Csapak
such as the mapping name and whether it's marked for live-migration (PCI only)

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Qemu.pm   |  2 +-
 PVE/QemuMigrate.pm |  7 ---
 PVE/QemuServer.pm  | 17 ++---
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a349c8c..f2fa345d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4538,7 +4538,7 @@ __PACKAGE__->register_method({
$res->{local_disks} = [ values %$local_disks ];;
 
$res->{local_resources} = $local_resources;
-   $res->{'mapped-resources'} = $mapped_resources;
+   $res->{'mapped-resources'} = [ keys $mapped_resources->%* ];
 
return $res;
 
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 381022f5..a46eb2a3 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -233,7 +233,7 @@ sub prepare {
 my ($loc_res, $mapped_res, $missing_mappings_by_node) = 
PVE::QemuServer::check_local_resources($conf, 1);
 my $blocking_resources = [];
 for my $res ($loc_res->@*) {
-   if (!grep($res, $mapped_res->@*)) {
+   if (!defined($mapped_res->{$res})) {
push $blocking_resources->@*, $res;
}
 }
@@ -245,10 +245,11 @@ sub prepare {
}
 }
 
-if (scalar($mapped_res->@*)) {
+if (scalar(keys $mapped_res->%*)) {
my $missing_mappings = $missing_mappings_by_node->{$self->{node}};
+   my $mapped_text = join(", ", keys $mapped_res->%*);
if ($running) {
-   die "can't migrate running VM which uses mapped devices: " . 
join(", ", $mapped_res->@*) . "\n";
+   die "can't migrate running VM which uses mapped devices: 
$mapped_text\n";
} elsif (scalar($missing_mappings->@*)) {
die "can't migrate to '$self->{node}': missing mapped devices " . 
join(", ", $missing_mappings->@*) . "\n";
} else {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5ab52e7a..dcb3b3be 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2559,7 +2559,7 @@ sub check_local_resources {
 my ($conf, $noerr) = @_;
 
 my @loc_res = ();
-my $mapped_res = [];
+my $mapped_res = {};
 
 my $nodelist = PVE::Cluster::get_nodelist();
 my $pci_map = PVE::Mapping::PCI::config();
@@ -2591,16 +2591,19 @@ sub check_local_resources {
if ($k =~ m/^usb/) {
my $entry = parse_property_string('pve-qm-usb', $conf->{$k});
next if $entry->{host} && $entry->{host} =~ m/^spice$/i;
-   if ($entry->{mapping}) {
-   $add_missing_mapping->('usb', $k, $entry->{mapping});
-   push @$mapped_res, $k;
+   if (my $name = $entry->{mapping}) {
+   $add_missing_mapping->('usb', $k, $name);
+   $mapped_res->{$k} = { name => $name };
}
}
if ($k =~ m/^hostpci/) {
my $entry = parse_property_string('pve-qm-hostpci', $conf->{$k});
-   if ($entry->{mapping}) {
-   $add_missing_mapping->('pci', $k, $entry->{mapping});
-   push @$mapped_res, $k;
+   if (my $name = $entry->{mapping}) {
+   $add_missing_mapping->('pci', $k, $name);
+   my $mapped_device = { name => $name };
+   $mapped_device->{'live-migration'} = 1
+   if $pci_map->{ids}->{$name}->{'live-migration-capable'};
+   $mapped_res->{$k} = $mapped_device;
}
}
	# sockets are safe: they will be recreated on the target side post-migrate
-- 
2.39.2
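The new return shape can be sketched like this (a Python illustration of the Perl above, with hypothetical config values; `check_local_resources` itself returns more than just this hash): mapped resources become a hash keyed by the config key, carrying the mapping name and, for PCI devices, a live-migration flag taken from the cluster-wide mapping config.

```python
# Build the mapped-resources info hash from a (simplified) VM config and
# the parsed PCI mapping config.
def mapped_resource_info(conf, pci_map):
    mapped = {}
    for key, entry in conf.items():
        name = entry.get('mapping')
        if not name:
            continue
        info = {'name': name}
        # only PCI mappings can be marked live-migration capable
        if key.startswith('hostpci'):
            if pci_map.get('ids', {}).get(name, {}).get('live-migration-capable'):
                info['live-migration'] = 1
        mapped[key] = info
    return mapped
```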






[pve-devel] [PATCH docs v3 1/2] qm: resource mapping: add description for `mdev` option

2024-04-19 Thread Dominik Csapak
in a new section about additional options

Signed-off-by: Dominik Csapak 
---
 qm.adoc | 13 +
 1 file changed, 13 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 42c26db..3f4e59a 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1802,6 +1802,19 @@ To create mappings `Mapping.Modify` on 
`/mapping//` is necessary
 To use these mappings, `Mapping.Use` on `/mapping//` is necessary
 (in addition to the normal guest privileges to edit the configuration).
 
+Additional Options
+^^
+
+There are additional options when defining a cluster wide resource mapping.
+Currently there are the following options:
+
+* `mdev` (PCI): This marks the PCI device as being capable of providing
+  `mediated devices`. When this is enabled, you can select a type when
+  configuring it on the guest. If multiple PCI devices are selected for
+  the mapping, the mediated device will be created on the first one where
+  there are any available instances of the selected type.
+
+
 Managing Virtual Machines with `qm`
 
 
-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 06/10] migrate: call vm_stop_cleanup after stopping in phase3_cleanup

2024-04-19 Thread Dominik Csapak
we currently only call deactivate_volumes, but we actually want to call
the whole vm_stop_cleanup, since that is not invoked by the vm_stop
above (we cannot parse the config anymore) and might do other cleanups
we also want to do (like mdev cleanup).

For this to work properly we have to clone the original config at the
beginning, since we might modify the volids.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuMigrate.pm   | 12 ++--
 test/MigrationTest/Shared.pm |  3 +++
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 8d9b35ae..381022f5 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use IO::File;
 use IPC::Open2;
+use Storable qw(dclone);
 use Time::HiRes qw( usleep );
 
 use PVE::Cluster;
@@ -1455,7 +1456,8 @@ sub phase3_cleanup {
 
 my $tunnel = $self->{tunnel};
 
-my $sourcevollist = PVE::QemuServer::get_vm_volumes($conf);
+# we'll need an unmodified copy of the config later for the cleanup
+my $oldconf = dclone($conf);
 
 if ($self->{volume_map} && !$self->{opts}->{remote}) {
my $target_drives = $self->{target_drive};
@@ -1586,12 +1588,10 @@ sub phase3_cleanup {
$self->{errors} = 1;
 }
 
-# always deactivate volumes - avoid lvm LVs to be active on several nodes
-eval {
-   PVE::Storage::deactivate_volumes($self->{storecfg}, $sourcevollist);
-};
+# stop with nocheck does not do a cleanup, so do it here with the original 
config
+eval { PVE::QemuServer::vm_stop_cleanup($self->{storecfg}, $vmid, 
$oldconf) };
 if (my $err = $@) {
-   $self->log('err', $err);
+   $self->log('err', "cleanup for vm failed - $err");
$self->{errors} = 1;
 }
 
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d1..2347e60a 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -130,6 +130,9 @@ $qemu_server_module->mock(
 clear_reboot_request => sub {
return 1;
 },
+vm_stop_cleanup => sub {
+   return 1;
+},
 get_efivars_size => sub {
 return 128 * 1024;
 },
-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 07/10] pci: set 'enable-migration' to on for live-migration marked mapped devices

2024-04-19 Thread Dominik Csapak
the default is 'auto', but for devices marked as capable of live
migration we want to explicitly enable it, so we get an early error on
start if the driver does not support it.

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/PCI.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 6ba43ee8..df2ea3eb 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -435,8 +435,11 @@ sub parse_hostpci {
my $devices = PVE::Mapping::PCI::get_node_mapping($config, $mapping, 
$node);
die "PCI device mapping not found for '$mapping'\n" if !$devices || 
!scalar($devices->@*);
 
+   my $cfg = $config->{ids}->{$mapping};
+   $res->{'live-migration-capable'} = 1 if 
$cfg->{'live-migration-capable'};
+
for my $device ($devices->@*) {
-   eval { PVE::Mapping::PCI::assert_valid($mapping, $device, 
$config->{ids}->{$mapping}) };
+   eval { PVE::Mapping::PCI::assert_valid($mapping, $device, $cfg) };
die "PCI device mapping invalid (hardware probably changed): $@\n" 
if $@;
push $alternatives->@*, [split(/;/, $device->{path})];
}
@@ -635,6 +638,10 @@ sub print_hostpci_devices {
$devicestr .= ",host=$pcidevice->{id}";
}
 
+   if ($d->{'live-migration-capable'}) {
+   $devicestr .= ",enable-migration=on"
+   }
+
my $mf_addr = $multifunction ? ".$j" : '';
$devicestr .= ",id=${id}${mf_addr}${pciaddr}${mf_addr}";
 
-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 01/10] usb: mapping: move implementation of find_on_current_node here

2024-04-19 Thread Dominik Csapak
this was the only user, and it's easy enough

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/USB.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/USB.pm b/PVE/QemuServer/USB.pm
index 49957444..ecd0361d 100644
--- a/PVE/QemuServer/USB.pm
+++ b/PVE/QemuServer/USB.pm
@@ -5,6 +5,7 @@ use warnings;
 use PVE::QemuServer::PCI qw(print_pci_addr);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Helpers qw(min_version windows_version);
+use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::Mapping::USB;
 use base 'Exporter';
@@ -91,7 +92,9 @@ sub parse_usb_device {
$res->{spice} = 1;
}
 } elsif (defined($mapping)) {
-   my $devices = PVE::Mapping::USB::find_on_current_node($mapping);
+   my $config = PVE::Mapping::USB::config();
+   my $node = PVE::INotify::nodename();
+   my $devices = PVE::Mapping::USB::get_node_mapping($config, $mapping, 
$node);
die "USB device mapping not found for '$mapping'\n" if !$devices || 
!scalar($devices->@*);
die "More than one USB mapping per host not supported\n" if 
scalar($devices->@*) > 1;
eval {
-- 
2.39.2






[pve-devel] [PATCH guest-common/qemu-server/manager/docs v3 0/4] implement experimental vgpu live migration

2024-04-19 Thread Dominik Csapak
and some useful cleanups

Resending even though there was not much feedback, because I worked in
some minor fixes/changes in the meantime.

A user tested the previous patch series and only found one issue with
the ui, see the comments on bug #5175

https://bugzilla.proxmox.com/show_bug.cgi?id=5175


This is implemented for mapped resources. This requires driver and
hardware support, but aside from nvidia vgpus there don't seem to be
many drivers (if any) that do support that.

qemu already supports that for vfio-pci devices, so nothing to be
done there besides actively enabling it.

Since we currently can't properly test it here and it very much depends on
hardware/driver support, mark it as experimental everywhere (docs/api/gui).
(though i tested the live-migration part manually here by using
"exec:cat > /tmp/test" for the migration target, and "exec: cat
/tmp/test" as the 'incoming' parameter for a new vm start, which worked ;) )

i opted for marking them migratable at the mapping level, but we could
theoretically also put it in the hostpciX config instead.
(though imho it fits better in the cluster-wide resource mapping config)

also the naming/texts could probably be improved, but i think
'live-migration-capable' is very descriptive and i didn't want to
use an overly short name for it (which can be confusing, see the
'shared' flag for storages)

guest-common 2/4 semi-breaks pve-manager without pve-manager 1/5
and qemu-server without 3/10
(would always fail for devices capable of mediated devices)

guest-common 1,2; qemu-server 1-6; pve-manager 1,2
are preparations/cleanups mostly and could be applied independently

changes from v2:
* rebased on master
* rework the rework of the properties check (pve-guest-common 1/4)
* properly check mdev in the gui (pve-manager 1/5)

remaining patches are unchanged

pve-guest-common:

Dominik Csapak (4):
  mapping: pci: rework properties check
  mapping: pci: check the mdev configuration on the device too
  mapping: pci: add 'live-migration-capable' flag to mappings
  mapping: remove find_on_current_node

 src/PVE/Mapping/PCI.pm | 55 +++---
 src/PVE/Mapping/USB.pm |  9 ---
 2 files changed, 30 insertions(+), 34 deletions(-)

qemu-server:

Dominik Csapak (10):
  usb: mapping: move implementation of find_on_current_node here
  pci: mapping: move implementation of find_on_current_node here
  pci: mapping: check mdev config against hardware
  stop cleanup: remove unnecessary tpmstate cleanup
  vm_stop_cleanup: add noerr parameter
  migrate: call vm_stop_cleanup after stopping in phase3_cleanup
  pci: set 'enable-migration' to on for live-migration marked mapped
devices
  check_local_resources: add more info per mapped device and return as
hash
  api: enable live migration for marked mapped pci devices
  api: include not mapped resources for running vms in migrate
preconditions

 PVE/API2/Qemu.pm | 48 ++--
 PVE/CLI/qm.pm|  2 +-
 PVE/QemuMigrate.pm   | 28 -
 PVE/QemuServer.pm| 38 ++--
 PVE/QemuServer/PCI.pm| 14 +--
 PVE/QemuServer/USB.pm|  5 +++-
 test/MigrationTest/Shared.pm |  3 +++
 7 files changed, 84 insertions(+), 54 deletions(-)

pve-manager:

Dominik Csapak (5):
  mapping: pci: include mdev in config checks
  bulk migrate: improve precondition checks
  bulk migrate: include checks for live-migratable local resources
  ui: adapt migration window to precondition api change
  fix #5175: ui: allow configuring and live migration of mapped pci
resources

 PVE/API2/Cluster/Mapping/PCI.pm   |  2 +-
 PVE/API2/Nodes.pm | 27 ++--
 www/manager6/dc/PCIMapView.js |  6 
 www/manager6/window/Migrate.js| 51 ---
 www/manager6/window/PCIMapEdit.js | 12 
 5 files changed, 76 insertions(+), 22 deletions(-)

pve-docs:

Dominik Csapak (2):
  qm: resource mapping: add description for `mdev` option
  qm: resource mapping: document `live-migration-capable` setting

 qm.adoc | 19 +++
 1 file changed, 19 insertions(+)

-- 
2.39.2






[pve-devel] [PATCH qemu-server v3 02/10] pci: mapping: move implementation of find_on_current_node here

2024-04-19 Thread Dominik Csapak
this was the only user, and it's easy enough

Signed-off-by: Dominik Csapak 
---
 PVE/QemuServer/PCI.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm
index 1673041b..7ff9cad7 100644
--- a/PVE/QemuServer/PCI.pm
+++ b/PVE/QemuServer/PCI.pm
@@ -3,6 +3,7 @@ package PVE::QemuServer::PCI;
 use warnings;
 use strict;
 
+use PVE::INotify;
 use PVE::JSONSchema;
 use PVE::Mapping::PCI;
 use PVE::SysFSTools;
@@ -429,7 +430,9 @@ sub parse_hostpci {
 
 if ($mapping) {
# we have no ordinary pci id, must be a mapping
-   my $devices = PVE::Mapping::PCI::find_on_current_node($mapping);
+   my $config = PVE::Mapping::PCI::config();
+   my $node = PVE::INotify::nodename();
+   my $devices = PVE::Mapping::PCI::get_node_mapping($config, $mapping, 
$node);
die "PCI device mapping not found for '$mapping'\n" if !$devices || 
!scalar($devices->@*);
 
for my $device ($devices->@*) {
-- 
2.39.2






[pve-devel] [PATCH guest-common v3 1/4] mapping: pci: rework properties check

2024-04-19 Thread Dominik Csapak
rename '$cfg' to '$mapping' and 'correct' to 'expected', reword the
error messages, and also check the keys from the configured props, not
only the expected ones.

Previously we only checked the keys from the 'correct_props' hash, but
that was unintended. We now check the keys from both, but extract the
relevant properties first.

Signed-off-by: Dominik Csapak 
---
changes from v2:
* don't refactor the properties check out
* use properties from both configured and expected hashes
* extract the relevant configured properties from the mapping
  instead of using all (previously we only used the expected ones
  by accident)
 src/PVE/Mapping/PCI.pm | 34 +++---
 1 file changed, 19 insertions(+), 15 deletions(-)

diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
index 725e106..ef1bd8d 100644
--- a/src/PVE/Mapping/PCI.pm
+++ b/src/PVE/Mapping/PCI.pm
@@ -131,9 +131,9 @@ sub options {
 
 # checks if the given config is valid for the current node
 sub assert_valid {
-my ($name, $cfg) = @_;
+my ($name, $mapping) = @_;
 
-my @paths = split(';', $cfg->{path} // '');
+my @paths = split(';', $mapping->{path} // '');
 
 my $idx = 0;
 for my $path (@paths) {
@@ -148,32 +148,36 @@ sub assert_valid {
my $info = PVE::SysFSTools::pci_device_info($path, 1);
die "pci device '$path' not found\n" if !defined($info);
 
-   my $correct_props = {
+   my $expected_props = {
id => "$info->{vendor}:$info->{device}",
iommugroup => $info->{iommugroup},
};
 
if (defined($info->{'subsystem_vendor'}) && defined($info->{'subsystem_device'})) {
-   $correct_props->{'subsystem-id'} = "$info->{'subsystem_vendor'}:$info->{'subsystem_device'}";
+   $expected_props->{'subsystem-id'} = "$info->{'subsystem_vendor'}:$info->{'subsystem_device'}";
}
 
-   for my $prop (sort keys %$correct_props) {
+   my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
+
+   my $merged = { %$expected_props, %$configured_props }; # just for the keys
+   for my $prop (sort keys %$merged) {
next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on the first device
 
-   next if !defined($correct_props->{$prop}) && !defined($cfg->{$prop});
-   die "no '$prop' for device '$path'\n"
-   if defined($correct_props->{$prop}) && !defined($cfg->{$prop});
-   die "'$prop' configured but should not be\n"
-   if !defined($correct_props->{$prop}) && defined($cfg->{$prop});
+   next if !defined($expected_props->{$prop}) && !defined($configured_props->{$prop});
+   die "missing expected property '$prop' for device '$path'\n"
+   if defined($expected_props->{$prop}) && !defined($configured_props->{$prop});
+   die "unexpected property '$prop' configured for device '$path'\n"
+   if !defined($expected_props->{$prop}) && defined($configured_props->{$prop});
 
-   my $correct_prop = $correct_props->{$prop};
-   $correct_prop =~ s/0x//g;
-   my $configured_prop = $cfg->{$prop};
+   my $expected_prop = $expected_props->{$prop};
+   $expected_prop =~ s/0x//g;
+   my $configured_prop = $configured_props->{$prop};
$configured_prop =~ s/0x//g;
 
-   die "'$prop' does not match for '$name' ($correct_prop != $configured_prop)\n"
-   if $correct_prop ne $configured_prop;
+   die "'$prop' does not match for '$name' ($expected_prop != $configured_prop)\n"
+   if $expected_prop ne $configured_prop;
}
+
$idx++;
 }
 
-- 
2.39.2
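The reworked check walks the union of expected and configured keys and compares values with any `0x` prefixes stripped. A minimal Python sketch of that comparison logic (property names and error texts mirror the Perl above; everything else is simplified):

```python
def check_props(name, path, expected, configured):
    """Compare expected vs. configured mapping properties for one device."""
    # iterate the union of both key sets, as the merged hash does in Perl
    for prop in sorted({**expected, **configured}):
        exp, cfg = expected.get(prop), configured.get(prop)
        if exp is None and cfg is None:
            continue
        if cfg is None:
            raise ValueError(f"missing expected property '{prop}' for device '{path}'")
        if exp is None:
            raise ValueError(f"unexpected property '{prop}' configured for device '{path}'")
        # compare without 0x prefixes, as the Perl s/0x//g does
        if exp.replace("0x", "") != cfg.replace("0x", ""):
            raise ValueError(f"'{prop}' does not match for '{name}' ({exp} != {cfg})")

# values with and without 0x prefixes compare equal
check_props("gpu0", "0000:01:00.0", {"id": "0x10de:0x2204"}, {"id": "10de:2204"})
```

The point of the rework is visible in the union iteration: a property that is configured but not expected now raises, where the old loop over only the expected keys silently ignored it.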






Re: [pve-devel] [PATCH manager v5 04/16] ui: dc: backup: allow to set custom job id in advanced settings

2024-04-19 Thread Lukas Wagner



On  2024-04-19 12:31, Fiona Ebner wrote:
> Am 15.04.24 um 10:26 schrieb Lukas Wagner:
>> This might be useful if somebody wants to match on the new
>> 'backup-job' field in a notification match rule.
>>
>> Signed-off-by: Lukas Wagner 
>> ---
>>  www/manager6/Utils.js |  4 
>>  www/manager6/dc/Backup.js | 11 +++
>>  2 files changed, 15 insertions(+)
>>
>> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
>> index 287d651a..d4b5f3e6 100644
>> --- a/www/manager6/Utils.js
>> +++ b/www/manager6/Utils.js
>> @@ -1952,6 +1952,10 @@ Ext.define('PVE.Utils', {
>>  singleton: true,
>>  constructor: function() {
>>  var me = this;
>> +
>> +// Same regex as 'pve-configid'
>> +me.CONFIGID_RE = /^[A-Za-z][A-Za-z0-9_-]+$/;
> 
> This already exists (with nice verification errors), by using
> vtype: 'ConfigId'
> for the field. It's defined in widget-toolkit, Toolkit.js
> 

Fixed, thanks!

-- 
- Lukas
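For reference, the `pve-configid` pattern quoted above is anchored, must start with a letter, and requires at least one further character from `[A-Za-z0-9_-]`. A quick sketch:

```python
import re

# same pattern as 'pve-configid' / the ConfigId vtype
CONFIGID_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_-]+$")

def is_valid_configid(s):
    return CONFIGID_RE.match(s) is not None
```

Note that the `+` quantifier means single-character IDs do not match; the widget-toolkit ConfigId vtype exists precisely so such fields get consistent validation and error messages.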





Re: [pve-devel] [PATCH manager v5 05/16] api: replication: add 'replication-job' to notification metadata

2024-04-19 Thread Lukas Wagner



On  2024-04-19 14:02, Fiona Ebner wrote:
> Am 15.04.24 um 10:26 schrieb Lukas Wagner:
>> This allows users to create notification match rules for specific
>> replication jobs, if they so desire.
>>
>> Signed-off-by: Lukas Wagner 
>> ---
>>  PVE/API2/Replication.pm | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm
>> index 0dc944c9..703640f5 100644
>> --- a/PVE/API2/Replication.pm
>> +++ b/PVE/API2/Replication.pm
>> @@ -140,8 +140,8 @@ my sub _handle_job_err {
>>  };
>>  
>>  my $metadata_fields = {
>> -# TODO: Add job-id?
>>  type => "replication",
>> +"replication-job" => $job->{id},
>>  };
>>  
>>  eval {
> 
> Not sure if we should use "replication-job" and "backup-job" for the
> metadata entries rather than just "job-id". The type is already
> something that can be matched, why re-do it implicitly with the field
> name? E.g. I want to see all jobs with -fiona- on the system, now I'd
> have to create a matcher rule for each job type.

Had a 'job-id' field at first, but I *think* (can't be too sure after more than 
4 months of not working on this) the reason why I changed it to this approach
were the replication job IDs, which look like '100-0' or similar.
Giving them and other job IDs a unique field made it a bit easier to
understand what is what when creating a matcher in the improved UI.

For instance, if you just have 'job-id', the pre-filled combo box in the 
match-field edit UI might contain these values

  - backup-gaasdgh7asdfg
  - 100-0
  - 101-0

IMO it's a bit hard to understand that the last two are replication jobs. The 
separate
job fields make this easier.


-- 
- Lukas
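The trade-off discussed here, one shared 'job-id' field versus per-type fields, can be illustrated with a toy matcher (the matching logic below is a made-up simplification, not proxmox_notify's actual match-field implementation):

```python
def matching_notifications(notifications, field, value):
    """Return the notifications whose metadata field equals the given value."""
    return [n for n in notifications if n.get(field) == value]

notifications = [
    {"type": "vzdump", "backup-job": "backup-gaasdgh7asdfg"},
    {"type": "replication", "replication-job": "100-0"},
    {"type": "replication", "replication-job": "101-0"},
]

# with per-type fields, a rule on 'replication-job' can never
# accidentally hit a backup notification
hits = matching_notifications(notifications, "replication-job", "100-0")
```

With a single 'job-id' field the same rule would need an additional match on 'type' to be unambiguous, which is exactly Fiona's counterpoint: the type field already exists for that.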





Re: [pve-devel] [PATCH manager v5 05/16] api: replication: add 'replication-job' to notification metadata

2024-04-19 Thread Fiona Ebner
Am 15.04.24 um 10:26 schrieb Lukas Wagner:
> This allows users to create notification match rules for specific
> replication jobs, if they so desire.
> 
> Signed-off-by: Lukas Wagner 
> ---
>  PVE/API2/Replication.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Replication.pm b/PVE/API2/Replication.pm
> index 0dc944c9..703640f5 100644
> --- a/PVE/API2/Replication.pm
> +++ b/PVE/API2/Replication.pm
> @@ -140,8 +140,8 @@ my sub _handle_job_err {
>  };
>  
>  my $metadata_fields = {
> - # TODO: Add job-id?
>   type => "replication",
> + "replication-job" => $job->{id},
>  };
>  
>  eval {

Not sure if we should use "replication-job" and "backup-job" for the
metadata entries rather than just "job-id". The type is already
something that can be matched, why re-do it implicitly with the field
name? E.g. I want to see all jobs with -fiona- on the system, now I'd
have to create a matcher rule for each job type.





Re: [pve-devel] [PATCH manager v5 02/16] api: jobs: vzdump: pass job 'id' parameter

2024-04-19 Thread Fiona Ebner
Am 15.04.24 um 10:26 schrieb Lukas Wagner:
> This allows us to access the backup job id in the send_notification
> function, where we can set it as metadata for the notification.
> 
> Signed-off-by: Lukas Wagner 
> ---
>  PVE/API2/VZDump.pm | 8 
>  PVE/Jobs/VZDump.pm | 4 +++-
>  PVE/VZDump.pm  | 6 +++---
>  3 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/API2/VZDump.pm b/PVE/API2/VZDump.pm
> index f66fc740..6bc0b792 100644
> --- a/PVE/API2/VZDump.pm
> +++ b/PVE/API2/VZDump.pm
> @@ -52,6 +52,14 @@ __PACKAGE__->register_method ({
>  parameters => {
>   additionalProperties => 0,
>   properties => PVE::VZDump::Common::json_config_properties({
> + id => {
It's very generic and it's not the ID of the invocation, so maybe 'job-id'?

Hmm, ideally it would be internal-use only. Like this we have to
remember never to fully "trust" this 'id', because a user can lie when
issuing a vzdump API call.

> + description => "The ID of the backup job. If set, the 
> 'backup-job' metadata field"
> + . " of the backup notification will be set to this value.",
> + type => 'string',
> + format => 'pve-configid',
> + maxLength => 64,

Hmm, where does that limit come from? Seems like we do not explicitly
limit it yet upon job creation, which we should too! For creation, it
will fail for long lengths with

500 unable to open file
'/var/lib/pve-manager/jobs/vzdump-backup-a01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789.json.tmp.200643'
- File name too long

Seems like the limit is 256, so a bit shorter for backup job IDs, but
IDs longer than 64 can exist right now. We can either choose 64, which
might break it for a (small) number of users or choose a limit closer to
256.

> + optional => 1,
> + },
>   stdout => {
>   type => 'boolean',
>   description => "Write tar to stdout, not to a file.",
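The "File name too long" error quoted above comes from the Linux per-component limit of 255 bytes (NAME_MAX). A back-of-the-envelope budget for the job-id length, using the state-file name template visible in the error message (the PID digit count is an assumption):

```python
NAME_MAX = 255  # per-component filename limit on most Linux filesystems

prefix = "vzdump-backup-"
suffix = ".json.tmp."  # followed by the worker PID in the temp file name
pid_digits = 7  # PIDs can reach 7 digits with raised pid_max settings

# longest job id that still yields a legal temp-file name
budget = NAME_MAX - len(prefix) - len(suffix) - pid_digits
```

This lands somewhat below 256, matching the review's estimate that a limit "closer to 256" (rather than 64) would avoid breaking existing long IDs.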





Re: [pve-devel] [PATCH many 00/19] notifications: move template strings to template files; PBS preparations

2024-04-19 Thread Lukas Wagner


On  2024-04-19 13:22, Fabian Grünbichler wrote:
> On April 19, 2024 12:09 pm, Fiona Ebner wrote:
>> Am 09.04.24 um 15:25 schrieb Lukas Wagner:
>>> Bumps/dependencies:
>>>   - proxmox_notify
>>>   - libproxmox-rs-perl/libpve-rs-perl (needs bumped proxmox_notify)
>>>   - libpve-notify-perl  (needs bumped 
>>> libproxmox-rs-perl/libpve-rs-perl)
>>>   - pve-ha-manager (needs bumped libpve-notify-perl)
>>>   - pve-manager (needs bumped libpve-notify-perl)
>>>   - proxmox-mail-forward (optional. should not be affected by any 
>>> changes, but I think
>>> it should be also be bumped for any larger proxmox_notify changes 
>>> to avoid any
>>> weird hidden regressions due to mismatches of proxmox_notify
>>>
>>
>> Versioned breaks required:
>> - new pve-cluster will break old pve-manager and old pve-ha-manager
>> - new libproxmox-rs-perl/libpve-rs-perl will break old pve-cluster
>>
>> Still not too sure about putting the templates in
>> /usr/share/proxmox-ve/, but maybe @Thomas or @Fabian can chime in on that.
> 
> without in-depth look at all the changes in this series, I think it
> would make most sense for the package shipping (most?) templates to
> "own" the dir where they are shipped. that seems to be pve-manager, so
> maybe /usr/share/pve-manager would be an okay fit?
> 

Okay, I think I'll just use pve-manager then.

Thanks!

-- 
- Lukas




Re: [pve-devel] [PATCH many 00/19] notifications: move template strings to template files; PBS preparations

2024-04-19 Thread Fabian Grünbichler
On April 19, 2024 12:09 pm, Fiona Ebner wrote:
> Am 09.04.24 um 15:25 schrieb Lukas Wagner:
>> Bumps/dependencies:
>>   - proxmox_notify
>>   - libproxmox-rs-perl/libpve-rs-perl (needs bumped proxmox_notify)
>>   - libpve-notify-perl  (needs bumped 
>> libproxmox-rs-perl/libpve-rs-perl)
>>   - pve-ha-manager (needs bumped libpve-notify-perl)
>>   - pve-manager (needs bumped libpve-notify-perl)
>>   - proxmox-mail-forward (optional. should not be affected by any 
>> changes, but I think
>> it should be also be bumped for any larger proxmox_notify changes to 
>> avoid any
>> weird hidden regressions due to mismatches of proxmox_notify
>> 
> 
> Versioned breaks required:
> - new pve-cluster will break old pve-manager and old pve-ha-manager
> - new libproxmox-rs-perl/libpve-rs-perl will break old pve-cluster
> 
> Still not too sure about putting the templates in
> /usr/share/proxmox-ve/, but maybe @Thomas or @Fabian can chime in on that.

without in-depth look at all the changes in this series, I think it
would make most sense for the package shipping (most?) templates to
"own" the dir where they are shipped. that seems to be pve-manager, so
maybe /usr/share/pve-manager would be an okay fit?

or, if we want to avoid having "pve-manager" there, we could of course
also create /usr/share/pve or /usr/lib/pve or one of those with
"proxmox" instead of "pve", and have that owned by pve-manager..

I dislike proxmox-ve
- it might not be the case that it is always installed, which means
  potential pitfalls for such non-standard systems or if we ever make it
  optional like we did for PMG/PBS (granted, not very likely, but
  still..)
- normally the /usr/share/$package dir should only be used by $package





Re: [pve-devel] [RFC PATCH docs-common 01/13] installation-media: move page from pve-docs here

2024-04-19 Thread Christoph Heiss
Thanks for the review!

On Fri, Apr 19, 2024 at 12:51:07PM +0200, Aaron Lauterer wrote:
>
>
> On  2024-04-19  11:05, Christoph Heiss wrote:
> > Small adaptions were necessary; mostly a s/{pve}/{product}/g and
> > replacing the ISO URL with the {iso-url} variable.
>
> except there are still plenty of `{pve}`s in there?

Oh right, that is a leftover from splitting the patch into two separate
commits. I'll remove that with the next revision. Sorry for the
confusion.

>
> another thing looking at this patch, how do we handle product specifics?
> having a ton of variables that are set according to the product, might be
> cumbersome.
>
> Most likely something like `ifdef:product-pve` and so forth would be useful.

Depending on the amount of specifics, that's where splitting sections out
into partials (into proxmox-docs-common) and then including them into
the main page (in the product-specific docs) comes into play.

E.g. the installer page is a good example where I tried to apply this
pattern. Some things like the installer flow and advanced options are
sharable, there are still some sections that are specific to e.g. PVE.

As for e.g. the `{pve}` macro, there is now simply a `{product}` (and
`{product-short}`) macro, which already handles most other, trivial
differences.

But overall, that is definitely a point to discuss further and improve
upon incrementally as pages/sections are moved and pain points are
discovered, IMO.

>
> some situations I spotted where we would probably need it in this patched
> marked below:

All of these should be fixed up/adapted in the next patch in the series.
I've split them into two for review sake, where the first is a 1:1 copy
and the next patch then adapts it.

[..]
> > +
> > +ifdef::wiki[]
> > +Boot your Server from the USB Flash Drive
> > +~
> > +
> > +Connect the USB flash drive to your server and make sure that booting from USB
> > +is enabled (check your server's firmware settings). Then follow the steps in the
> > +xref:chapter_installation[installation wizard].
>
> Aligning chapter references will also be some work, especially if we want to
> keep old direct links still working.

That's a very good point, thanks for noticing!
I'll definitely keep note of that, but probably would deal with that as
we come to that.

>
> > +
> > +endif::wiki[]
> > --
> > 2.44.0
> >
> >
> >





[pve-devel] [PATCH qemu-server v6 1/3] add C program to get AMD SEV hardware parameters from CPUID

2024-04-19 Thread Markus Frank
Implement a systemd service that runs a C program that extracts AMD SEV
hardware parameters such as reduced-phys-bits and cbitpos from CPUID at boot
time, checks whether SEV, SEV-ES & SEV-SNP are enabled, and outputs these
details as JSON to /run/amd-sev-params.

Signed-off-by: Markus Frank 
---
 Makefile|  1 +
 amd-sev-support/Makefile| 21 +++
 amd-sev-support/amd-sev-support.c   | 48 +
 amd-sev-support/amd-sev-support.service | 12 +++
 4 files changed, 82 insertions(+)
 create mode 100644 amd-sev-support/Makefile
 create mode 100644 amd-sev-support/amd-sev-support.c
 create mode 100644 amd-sev-support/amd-sev-support.service

diff --git a/Makefile b/Makefile
index 133468d..ccd12a1 100644
--- a/Makefile
+++ b/Makefile
@@ -65,6 +65,7 @@ install: $(PKGSOURCES)
install -m 0644 -D bootsplash.jpg $(DESTDIR)/usr/share/$(PACKAGE)
$(MAKE) -C PVE install
$(MAKE) -C qmeventd install
+   $(MAKE) -C amd-sev-support install
$(MAKE) -C qemu-configs install
$(MAKE) -C vm-network-scripts install
install -m 0755 qm $(DESTDIR)$(SBINDIR)
diff --git a/amd-sev-support/Makefile b/amd-sev-support/Makefile
new file mode 100644
index 000..022ed94
--- /dev/null
+++ b/amd-sev-support/Makefile
@@ -0,0 +1,21 @@
+DESTDIR=
+PREFIX=/usr
+SBINDIR=${PREFIX}/libexec/qemu-server
+SERVICEDIR=/lib/systemd/system
+
+CC ?= gcc
CFLAGS += -O2 -fanalyzer -Werror -Wall -Wextra -Wpedantic -Wtype-limits -Wl,-z,relro -std=gnu11
+
+amd-sev-support: amd-sev-support.c
+   $(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
+
+.PHONY: install
+install: amd-sev-support
+   install -d ${DESTDIR}/${SBINDIR}
+   install -d ${DESTDIR}${SERVICEDIR}
+   install -m 0644 amd-sev-support.service ${DESTDIR}${SERVICEDIR}
+   install -m 0755 amd-sev-support ${DESTDIR}${SBINDIR}
+
+.PHONY: clean
+clean:
+   rm -f amd-sev-support
diff --git a/amd-sev-support/amd-sev-support.c b/amd-sev-support/amd-sev-support.c
new file mode 100644
index 000..73a7bd8
--- /dev/null
+++ b/amd-sev-support/amd-sev-support.c
@@ -0,0 +1,48 @@
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdio.h>
+
+int main() {
+uint32_t eax, ebx, ecx, edx;
+
+// query Encrypted Memory Capabilities, see:
+// https://en.wikipedia.org/wiki/CPUID#EAX=8000001Fh:_Encrypted_Memory_Capabilities
+uint32_t query_function = 0x8000001F;
+asm volatile("cpuid"
+: "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
+: "0"(query_function)
+);
+
+bool sev_support = (eax & (1<<1)) != 0;
+bool sev_es_support = (eax & (1<<3)) != 0;
+bool sev_snp_support = (eax & (1<<4)) != 0;
+
+uint8_t cbitpos = ebx & 0x3f;
+uint8_t reduced_phys_bits = (ebx >> 6) & 0x3f;
+
+FILE *file;
+char *filename = "/run/amd-sev-params";
+
+file = fopen(filename, "w");
+if (file == NULL) {
+   perror("Error opening file");
+   return 1;
+}
+
+fprintf(file, "{"
+   " \"cbitpos\": %u,"
+   " \"reduced-phys-bits\": %u,"
+   " \"sev\": %s,"
+   " \"sev-es\": %s,"
+   " \"sev-snp\": %s"
+   " }\n",
+   cbitpos,
+   reduced_phys_bits,
+   sev_support ? "true" : "false",
+   sev_es_support ? "true" : "false",
+   sev_snp_support ? "true" : "false"
+);
+
+fclose(file);
+return 0;
+}
diff --git a/amd-sev-support/amd-sev-support.service b/amd-sev-support/amd-sev-support.service
new file mode 100644
index 000..466dd0a
--- /dev/null
+++ b/amd-sev-support/amd-sev-support.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Read AMD SEV Parameters
+RequiresMountsFor=/run
+Before=pve-ha-lrm.service
+Before=pve-guests.service
+
+[Service]
+ExecStart=/usr/libexec/qemu-server/amd-sev-support
+Type=forking
+
+[Install]
+WantedBy=multi-user.target
-- 
2.39.2
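The bit layout the C program decodes from CPUID leaf 0x8000001F can be checked in isolation. Here is a Python sketch with invented register values; the field positions match the C code above, the sample values are made up:

```python
def decode_sev_caps(eax, ebx):
    """Decode SEV capability bits and memory-encryption fields.

    Mirrors the masks used in amd-sev-support.c: EAX carries the feature
    bits, EBX carries cbitpos (bits 5:0) and reduced-phys-bits (bits 11:6).
    """
    return {
        "sev": bool(eax & (1 << 1)),
        "sev-es": bool(eax & (1 << 3)),
        "sev-snp": bool(eax & (1 << 4)),
        "cbitpos": ebx & 0x3F,                    # bits 5:0
        "reduced-phys-bits": (ebx >> 6) & 0x3F,   # bits 11:6
    }

# sample: SEV and SEV-ES set, C-bit at position 47, 1 reduced physical bit
caps = decode_sev_caps(eax=0b01010, ebx=(1 << 6) | 47)
```

The resulting dict has exactly the keys the service writes to /run/amd-sev-params, which is what qemu-server later parses.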






[pve-devel] [PATCH qemu-server v6 2/3] config: QEMU AMD SEV enable

2024-04-19 Thread Markus Frank
This patch is for enabling AMD SEV (Secure Encrypted
Virtualization) support in QEMU

VM-Config-Examples:
amd_sev: type=std,nodbg=1,noks=1
amd_sev: es,nodbg=1,kernel-hashes=1

Node-Config-Example (gets generated automatically):
amd_sev: cbitpos=47,reduced-phys-bits=1

kernel-hashes, reduced-phys-bits & cbitpos correspond to the variables
with the same name in qemu.

kernel-hashes=1 adds kernel-hashes to enable measured linux kernel
launch, since it is off by default for backward compatibility.

reduced-phys-bits and cbitpos are system specific and are read out by
a C program started by the amd-sev-support.service on boot and saved
to the /run/amd-sev-params file. This file is parsed and then used by
qemu-server to correctly start an AMD SEV VM.

type=std stands for standard sev to differentiate it from sev-es (es)
or sev-snp (snp) when support is upstream.

QEMU's sev-guest policy gets calculated from the parameters nodbg & noks.
These parameters correspond to policy-bits 0 & 1.
If type is 'es', then policy-bit 2 gets set to 1 to activate SEV-ES.
Policy bit 3 (nosend) is always set to 1, because migration
features for SEV are not upstream yet and are attackable.

SEV-ES is highly experimental since it could not be tested.

see the corresponding doc patch

Signed-off-by: Markus Frank 
---
v6:
* rebase on master
* removed unused $sev_node_fmt object

v5:
* parse /run/amd-sev-params for hardware parameters
* removed NodeConfig dependency
* only disallow live-migration and snapshots with vmstate
  -> allow offline migration and snapshots without vmstate

v4:
* reduced lines of code
* added text that SEV-ES is experimental

 PVE/API2/Qemu.pm   | 10 ++
 PVE/QemuMigrate.pm |  4 +++
 PVE/QemuServer.pm  | 83 ++
 3 files changed, 97 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a349c8..2e8d654 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4512,6 +4512,10 @@ __PACKAGE__->register_method({
push $local_resources->@*, "clipboard=vnc";
}
 
+   if ($res->{running} && $vmconf->{amd_sev}) {
+   push $local_resources->@*, "amd_sev";
+   }
+
# if vm is not running, return target nodes where local storage/mapped 
devices are available
# for offline migration
if (!$res->{running}) {
@@ -5192,6 +5196,12 @@ __PACKAGE__->register_method({
die "unable to use snapshot name 'pending' (reserved name)\n"
if lc($snapname) eq 'pending';
 
+   my $conf = PVE::QemuConfig->load_config($vmid);
+   if ($param->{vmstate} && $conf->{amd_sev}) {
+   die "Snapshots that include memory are not supported while memory"
+   ." is encrypted by AMD SEV.\n"
+   }
+
my $realcmd = sub {
PVE::Cluster::log_msg('info', $authuser, "snapshot VM $vmid: 
$snapname");
PVE::QemuConfig->snapshot_create($vmid, $snapname, 
$param->{vmstate},
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 8d9b35a..7db18b2 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -260,6 +260,10 @@ sub prepare {
die "VMs with 'clipboard' set to 'vnc' are not live migratable!\n";
 }
 
+if ($running && $conf->{'amd_sev'}) {
+   die "VMs with AMD SEV are not live migratable!\n";
+}
+
 my $vollist = PVE::QemuServer::get_vm_volumes($conf);
 
 my $storages = {};
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 28e630d..b6e2713 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -177,6 +177,40 @@ my $agent_fmt = {
 },
 };
 
+my $sev_fmt = {
+type => {
+   description => "Enable standard SEV with type='std' or enable 
experimental SEV-ES"
+   ." with the 'es' option.",
+   type => 'string',
+   default_key => 1,
+   format_description => "qemu-sev-type",
+   enum => ['std', 'es'],
+   maxLength => 3,
+},
+nodbg => {
+   description => "Sets policy bit 0 to 1 to disallow debugging of guest",
+   type => 'boolean',
+   format_description => "qemu-sev-nodbg",
+   default => 0,
+   optional => 1,
+},
+noks => {
+   description => "Sets policy bit 1 to 1 to disallow key sharing with 
other guests",
+   type => 'boolean',
+   format_description => "qemu-sev-noks",
+   default => 0,
+   optional => 1,
+},
+"kernel-hashes" => {
+   description => "Add kernel hashes to guest firmware for measured linux 
kernel launch",
+   type => 'boolean',
+   format_description => "qemu-sev-kernel-hashes",
+   default => 0,
+   optional => 1,
+},
+};
+PVE::JSONSchema::register_format('pve-qemu-sev-fmt', $sev_fmt);
+
 my $vga_fmt = {
 type => {
description => "Select the VGA type.",
@@ -358,6 +392,12 @@ my $confdesc = {
description => "Memory properties.",
format => $PVE::QemuServer::Memory::memory_fmt
 },
+amd_sev => {
+   description => "Secure Encrypted Virtualization (SEV) features 

[pve-devel] [PATCH docs v6 3/3] add AMD SEV documentation

2024-04-19 Thread Markus Frank
add documentation for the "[PATCH qemu-server] config: QEMU AMD SEV enable"
patch.

Signed-off-by: Markus Frank 
---
v5:
* removed NodeConfig part

v4:
* added text that SEV-ES is experimental

 qm.adoc | 103 
 1 file changed, 103 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 45e3a57..6590373 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -715,6 +715,109 @@ systems.
 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
 of RAM available to the host.
 
+[[qm_memory_encryption]]
+Memory Encryption
+~
+
+[[qm_memory_encryption_sev]]
+AMD SEV
+^^^
+
+SEV (Secure Encrypted Virtualization) enables memory encryption per VM using
+AES-128 encryption and the AMD Secure Processor.
+
+SEV-ES (Secure Encrypted Virtualization-Encrypted State) in addition encrypts
+all CPU register contents when a VM stops running, to prevent leakage of
+information to the hypervisor. This feature is very experimental.
+
+*Host Requirements:*
+
+* AMD EPYC CPU
+* SEV-ES is only supported on AMD EPYC 7xx2 and newer
+* configure AMD memory encryption in the BIOS settings of the host machine
+* add "kvm_amd.sev=1" to kernel parameters if not enabled by default
+* add "mem_encrypt=on" to kernel parameters if you want to encrypt memory on 
the
+host (SME) see 
https://www.kernel.org/doc/Documentation/x86/amd-memory-encryption.txt
+* maybe increase SWIOTLB see https://github.com/AMDESE/AMDSEV#faq-4
+
+To check if SEV is enabled on the host, search for `sev` in dmesg and print out
+the SEV kernel parameter of kvm_amd:
+
+
+# dmesg | grep -i sev
+[...] ccp :45:00.1: sev enabled
+[...] ccp :45:00.1: SEV API: 
+[...] SEV supported:  ASIDs
+[...] SEV-ES supported:  ASIDs
+# cat /sys/module/kvm_amd/parameters/sev
+Y
+
+
+*Guest Requirements:*
+
+* edk2-OVMF
+* advisable to use Q35
+* The guest operating system must include SEV support.
+
+*Limitations:*
+
+* Because the memory is encrypted, the memory usage reported on the host is always wrong.
+* Operations that involve saving or restoring memory like snapshots
+& live migration do not work yet or are attackable.
+https://github.com/PSPReverse/amd-sev-migration-attack
+* PCI passthrough is not supported.
+* SEV-ES is very experimental.
+* QEMU & AMD-SEV documentation is very limited.
+
+Example Configuration:
+
+
+# qm set  -amd_sev type=std,nodbg=1,noks=1,kernel-hashes=1
+
+
+The *type* defines the encryption technology ("type=" is not necessary).
+Available options are std & es.
+
+The QEMU *policy* parameter gets calculated with the *nodbg* and *noks*
+parameters. These parameters correspond to policy-bit 0 and 1. If *type* is *es*
+the policy-bit 2 is set to 1 so that SEV-ES is enabled. Policy-bit 3 (nosend) is
+always set to 1 to prevent migration-attacks. For more information on how to
+calculate the policy see:
+https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf[AMD SEV API Specification Chapter 3]
+
+The *kernel-hashes* option is off by default for backward compatibility with older
+OVMF images and guests that do not measure the kernel/initrd.
+See https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg02598.html
+
+*Check if SEV is working on the guest*
+
+Method 1 - dmesg:
+
+Output should look like this.
+
+
+# dmesg | grep -i sev
+AMD Memory Encryption Features active: SEV
+
+
+Method 2 - MSR 0xc0010131 (MSR_AMD64_SEV):
+
+Output should be 1.
+
+
+# apt install msr-tools
+# modprobe msr
+# rdmsr -a 0xc0010131
+1
+
+
+Links:
+
+* https://developer.amd.com/sev/
+* https://github.com/AMDESE/AMDSEV
+* https://www.qemu.org/docs/master/system/i386/amd-memory-encryption.html
+* https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf
+* https://documentation.suse.com/sles/15-SP1/html/SLES-amd-sev/index.html
 
 [[qm_network_device]]
 Network Device
-- 
2.39.2
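The policy computation the docs describe (nodbg sets bit 0, noks sets bit 1, SEV-ES sets bit 2, nosend is always set as bit 3) can be sketched as follows; the function name is made up for illustration:

```python
def sev_policy(nodbg=False, noks=False, es=False):
    """Compute the QEMU sev-guest policy value from the config flags."""
    policy = 1 << 3  # nosend: always set to prevent migration attacks
    if nodbg:
        policy |= 1 << 0  # disallow debugging of the guest
    if noks:
        policy |= 1 << 1  # disallow key sharing with other guests
    if es:
        policy |= 1 << 2  # enable SEV-ES (encrypted register state)
    return policy
```

For example, a config of `type=es,nodbg=1` would yield 0b1101 = 13, and the bare default (no flags, type=std) is 0b1000 = 8.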






Re: [pve-devel] [RFC PATCH docs-common 01/13] installation-media: move page from pve-docs here

2024-04-19 Thread Aaron Lauterer




On  2024-04-19  11:05, Christoph Heiss wrote:

Small adaptions were necessary; mostly a s/{pve}/{product}/g and
replacing the ISO URL with the {iso-url} variable.


except there are still plenty of `{pve}`s in there?

another thing looking at this patch, how do we handle product specifics?
having a ton of variables that are set according to the product, might 
be cumbersome.


Most likely something like `ifdef:product-pve` and so forth would be useful.

some situations I spotted where we would probably need it in this 
patched marked below:




Signed-off-by: Christoph Heiss 
---
  installation-media.adoc | 132 
  1 file changed, 132 insertions(+)
  create mode 100644 installation-media.adoc

diff --git a/installation-media.adoc b/installation-media.adoc
new file mode 100644
index 000..a1c9402
--- /dev/null
+++ b/installation-media.adoc
@@ -0,0 +1,132 @@
+[[installation_prepare_media]]
+Prepare Installation Media
+--
+ifdef::wiki[]
+:pve-toplevel:
+endif::wiki[]
+
+Download the installer ISO image from: 
{website}en/downloads/proxmox-virtual-environment/iso


^actual download link for each product


+
+The {pve} installation media is a hybrid ISO image. It works in two ways:
+
+* An ISO image file ready to burn to a CD or DVD.
+
+* A raw sector (IMG) image file ready to copy to a USB flash drive (USB stick).
+
+Using a USB flash drive to install {pve} is the recommended way because it is
+the faster option.
+
+Prepare a USB Flash Drive as Installation Medium
+
+
+The flash drive needs to have at least 1 GB of storage available.
+
+NOTE: Do not use UNetbootin. It does not work with the {pve} installation 
image.
+
+IMPORTANT: Make sure that the USB flash drive is not mounted and does not
+contain any important data.
+
+
+Instructions for GNU/Linux
+~~
+
+On Unix-like operating system use the `dd` command to copy the ISO image to the
+USB flash drive. First find the correct device name of the USB flash drive (see
+below). Then run the `dd` command.
+
+
+# dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/XYZ


^ useful example, though this one could also be helped with by putting 
the 've_' part into the glob as well

+
+
+NOTE: Be sure to replace /dev/XYZ with the correct device name and adapt the
+input filename ('if') path.
+
+CAUTION: Be very careful, and do not overwrite the wrong disk!
+
+
+Find the Correct USB Device Name
+
+There are two ways to find out the name of the USB flash drive. The first one 
is
+to compare the last lines of the `dmesg` command output before and after
+plugging in the flash drive. The second way is to compare the output of the
+`lsblk` command. Open a terminal and run:
+
+
+# lsblk
+
+
+Then plug in your USB flash drive and run the command again:
+
+
+# lsblk
+
+
+A new device will appear. This is the one you want to use. To be on the extra
+safe side check if the reported size matches your USB flash drive.
+
+
+Instructions for macOS
+~~
+
+Open the terminal (query Terminal in Spotlight).
+
+Convert the `.iso` file to `.dmg` format using the convert option of `hdiutil`,
+for example:
+
+
+# hdiutil convert proxmox-ve_*.iso -format UDRW -o proxmox-ve_*.dmg
+
+
+TIP: macOS tends to automatically add '.dmg' to the output file name.
+
+To get the current list of devices run the command:
+
+
+# diskutil list
+
+
+Now insert the USB flash drive and run this command again to determine which
+device node has been assigned to it. (e.g., /dev/diskX).
+
+
+# diskutil list
+# diskutil unmountDisk /dev/diskX
+
+
+NOTE: replace X with the disk number from the last command.
+
+
+# sudo dd if=proxmox-ve_*.dmg bs=1M of=/dev/rdiskX
+
+
+NOTE: 'rdiskX', instead of 'diskX', in the last command is intended. It will
+increase the write speed.
+
+Instructions for Windows
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Using Etcher
+^^^^^^^^^^^^
+
+Etcher works out of the box. Download Etcher from https://etcher.io. It will
+guide you through the process of selecting the ISO and your USB flash drive.
+
+Using Rufus
+^^^^^^^^^^^
+
+Rufus is a more lightweight alternative, but you need to use the *DD mode* to
+make it work. Download Rufus from https://rufus.ie/. Either install it or use
+the portable version. Select the destination drive and the {pve} ISO file.
+
+IMPORTANT: Once you 'Start' you have to click 'No' on the dialog asking to
+download a different version of GRUB. In the next dialog select the 'DD' mode.
+
+ifdef::wiki[]
+Boot your Server from the USB Flash Drive
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Connect the USB flash drive to your server and make sure that booting from USB
+is enabled (check your server's firmware settings). Then follow the steps in the
+xref:chapter_installation[installation wizard].


Aligning chapter references will also 

[pve-devel] applied: [PATCH manager v5 01/16] api: notifications: add 'smtp' to target index

2024-04-19 Thread Fiona Ebner
On 15.04.24 at 10:25, Lukas Wagner wrote:
> Signed-off-by: Lukas Wagner 
> ---
>  PVE/API2/Cluster/Notifications.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Cluster/Notifications.pm 
> b/PVE/API2/Cluster/Notifications.pm
> index 7047f0b1..68fdda2a 100644
> --- a/PVE/API2/Cluster/Notifications.pm
> +++ b/PVE/API2/Cluster/Notifications.pm
> @@ -107,6 +107,7 @@ __PACKAGE__->register_method ({
>   my $result = [
>   { name => 'gotify' },
>   { name => 'sendmail' },
> + { name => 'smtp' },
>   ];
>  
>   return $result;
> @@ -143,7 +144,7 @@ __PACKAGE__->register_method ({
>   'type' => {
>   description => 'Type of the target.',
>   type  => 'string',
> - enum => [qw(sendmail gotify)],
> + enum => [qw(sendmail gotify smtp)],
>   },
>   'comment' => {
>   description => 'Comment',

applied this fix already, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager 19/19] notifications: use named templates instead of in-code templates

2024-04-19 Thread Lukas Wagner



On  2024-04-19 11:59, Fiona Ebner wrote:
> On 09.04.24 at 15:25, Lukas Wagner wrote:
>> diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
>> index 152eb3e5..2ea626f0 100644
>> --- a/PVE/VZDump.pm
>> +++ b/PVE/VZDump.pm
> 
> The existing $subject_template and $body_template could be removed now
> like for the others
> 
>> diff --git a/templates/Makefile b/templates/Makefile
>> new file mode 100644
>> index ..b0add81e
>> --- /dev/null
>> +++ b/templates/Makefile
>> @@ -0,0 +1,24 @@
>> +NOTIFICATION_TEMPLATES= 
>> \
>> +default/test-subject.txt.hbs\
>> +default/test-body.txt.hbs   \
>> +default/test-body.html.hbs  \
>> +default/vzdump-subject.txt.hbs  \
>> +default/vzdump-body.txt.hbs \
>> +default/vzdump-body.html.hbs\
>> +default/replication-subject.txt.hbs \
>> +default/replication-body.txt.hbs\
>> +default/replication-body.html.hbs   \
>> +default/package-updates-subject.txt.hbs \
>> +default/package-updates-body.txt.hbs\
>> +default/package-updates-body.html.hbs   \
>> +
> 
> Nit: backslashes are not aligned
> 
>> diff --git a/templates/default/replication-body.html.hbs 
>> b/templates/default/replication-body.html.hbs
>> new file mode 100644
>> index ..d1dea6a1
>> --- /dev/null
>> +++ b/templates/default/replication-body.html.hbs
>> @@ -0,0 +1,18 @@
>> +
>> +
>> +Replication job '{{job-id}}' with target '{{job-target}}' and 
>> schedule '{{job-schedule}}' failed!
>> +
>> +Last successful sync: {{timestamp last-sync}}
>> +Next sync try: {{timestamp next-sync}}
>> +Failure count: {{failure-count}}
>> +
>> +{{#if (eq failure-count 3)}}
>> +Note: The system  will now reduce the frequency of error 
>> reports, as the job appears to be stuck.
>> +{{/if}}
>> +
>> +Error:
>> +
>> +{{error}}
> 
> Nit: is there a special reason for not indenting this?

Yes, since it's in a `<pre>` block, the first line of the logs would be displayed
indented otherwise.

-- 
- Lukas





Re: [pve-devel] [PATCH manager v5 04/16] ui: dc: backup: allow to set custom job id in advanced settings

2024-04-19 Thread Fiona Ebner
On 15.04.24 at 10:26, Lukas Wagner wrote:
> This might be useful if somebody wants to match on the new
> 'backup-job' field in a notification match rule.
> 
> Signed-off-by: Lukas Wagner 
> ---
>  www/manager6/Utils.js |  4 
>  www/manager6/dc/Backup.js | 11 +++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 287d651a..d4b5f3e6 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1952,6 +1952,10 @@ Ext.define('PVE.Utils', {
>  singleton: true,
>  constructor: function() {
>   var me = this;
> +
> + // Same regex as 'pve-configid'
> + me.CONFIGID_RE = /^[A-Za-z][A-Za-z0-9_-]+$/;

This already exists (with nice verification errors), by using
vtype: 'ConfigId'
for the field. It's defined in widget-toolkit, Toolkit.js
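For reference, the quoted 'pve-configid' pattern requires a leading letter plus at least one further word character or dash; a quick sketch of its behavior (not the widget-toolkit implementation):

```python
import re

# Same pattern as the quoted CONFIGID_RE; note the '+' quantifier means
# a valid ID must be at least two characters long.
CONFIGID_RE = re.compile(r'^[A-Za-z][A-Za-z0-9_-]+$')

print(bool(CONFIGID_RE.match("backup-job_1")))  # → True
print(bool(CONFIGID_RE.match("1job")))          # → False (must start with a letter)
print(bool(CONFIGID_RE.match("x")))             # → False (needs a second character)
```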

> +
>   Ext.apply(me, me.utilities);
>  
>   Proxmox.Utils.override_task_descriptions({
> diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
> index 4beb84c0..5b6f6688 100644
> --- a/www/manager6/dc/Backup.js
> +++ b/www/manager6/dc/Backup.js
> @@ -397,6 +397,17 @@ Ext.define('PVE.dc.BackupEdit', {
>   },
>   ],
>   advancedColumn1: [
> + {
> + xtype: 'pmxDisplayEditField',
> + fieldLabel: gettext('Job ID'),
> + emptyText: gettext('Autogenerate'),
> + name: 'id',
> + allowBlank: true,
> + regex: PVE.Utils.CONFIGID_RE,
> + cbind: {
> + editable: '{isCreate}',
> + },
> + },
>   {
>   xtype: 'proxmoxcheckbox',
>   fieldLabel: gettext('Repeat missed'),





Re: [pve-devel] [PATCH manager v5 04/16] ui: dc: backup: allow to set custom job id in advanced settings

2024-04-19 Thread Fiona Ebner
On 15.04.24 at 10:26, Lukas Wagner wrote:
> This might be useful if somebody wants to match on the new
> 'backup-job' field in a notification match rule.
> 
> Signed-off-by: Lukas Wagner 

Needs a rebase, because the advanced settings were moved to a new
"Advanced" tab.





Re: [pve-devel] [PATCH manager v3 5/5] fix #4474: ui: guest stop: offer to overrule active shutdown tasks

2024-04-19 Thread Dominik Csapak

some minor nits inline, aside from those

Reviewed-by: Dominik Csapak 


On 4/12/24 16:15, Friedrich Weber wrote:

Implement a new "guest stop" confirmation message box which first
checks if there is an active shutdown task for the same guest that is
visible to the logged-in user. If there is at least one, the dialog
displays an additional default-on checkbox for overruling active
shutdown tasks. If the user confirms and the checkbox is checked, the
UI sends a guest stop API request with the `overrule-shutdown`
parameter set to 1. If there are no active shutdown tasks, or the
checkbox is unchecked, the UI sends a guest stop API request without
`overrule-shutdown`.

To avoid an additional API request for querying active shutdown tasks,
check the UI's current view of cluster tasks instead, which is fetched
from the `pve-cluster-tasks` store.

As the UI might hold an outdated task list, there are some
opportunities for races, e.g., the UI may miss a new shutdown task or
consider a shutdown task active even though it has already terminated.
These races either result in a surviving shutdown task that the user
still needs to abort manually, or a superfluous `overrule-shutdown=1`
parameter that does not actually abort any tasks. Since "stop
overrules shutdown" is merely a convenience feature, both outcomes
seem bearable.

The confirmation message box is now always marked as dangerous (with a
warning sign icon), whereas previously it was only marked dangerous if
the stop was issued from the guest panel, but not when issued from the
resource tree command menu.

Signed-off-by: Friedrich Weber 
---

Notes:
 changes v2 -> v3:
 - aligned permission checks with changed backend logic
 - replace access of `this.vm` with `me.vm`
 
 changes v1 -> v2:

 - instead of a second modal dialog that offers to overrule shutdown
   tasks, display an additional checkbox in the "guest stop"
   confirmation dialog if there is a running matching shutdown task
 - spin out pve-cluster-tasks store fix into its own patch

  www/manager6/Makefile|  1 +
  www/manager6/lxc/CmdMenu.js  |  8 +++-
  www/manager6/lxc/Config.js   |  8 ++--
  www/manager6/qemu/CmdMenu.js |  8 +++-
  www/manager6/qemu/Config.js  |  8 ++--
  www/manager6/window/GuestStop.js | 79 
  6 files changed, 104 insertions(+), 8 deletions(-)
  create mode 100644 www/manager6/window/GuestStop.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index c756cae6..30c56bb9 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -131,6 +131,7 @@ JSSRC=  
\
window/ScheduleSimulator.js \
window/Wizard.js\
window/GuestDiskReassign.js \
+   window/GuestStop.js \
window/TreeSettingsEdit.js  \
window/PCIMapEdit.js\
window/USBMapEdit.js\
diff --git a/www/manager6/lxc/CmdMenu.js b/www/manager6/lxc/CmdMenu.js
index b1403fc6..76b39423 100644
--- a/www/manager6/lxc/CmdMenu.js
+++ b/www/manager6/lxc/CmdMenu.js
@@ -66,7 +66,13 @@ Ext.define('PVE.lxc.CmdMenu', {
iconCls: 'fa fa-fw fa-stop',
disabled: stopped,
tooltip: Ext.String.format(gettext('Stop {0} immediately'), 
'CT'),
-   handler: () => confirmedVMCommand('stop'),
+   handler: () => {
+   Ext.create('PVE.GuestStop', {
+   nodename: info.node,
+   vm: info,
+   autoShow: true,
+   });
+   },
},
{
text: gettext('Reboot'),
diff --git a/www/manager6/lxc/Config.js b/www/manager6/lxc/Config.js
index 4516ee8f..082913e1 100644
--- a/www/manager6/lxc/Config.js
+++ b/www/manager6/lxc/Config.js
@@ -77,11 +77,13 @@ Ext.define('PVE.lxc.Config', {
{
text: gettext('Stop'),
disabled: !caps.vms['VM.PowerMgmt'],
-   confirmMsg: Proxmox.Utils.format_task_description('vzstop', 
vmid),
tooltip: Ext.String.format(gettext('Stop {0} immediately'), 
'CT'),
-   dangerous: true,
handler: function() {
-   vm_command("stop");
+   Ext.create('PVE.GuestStop', {
+   nodename: nodename,
+   vm: vm,
+   autoShow: true,
+   });
},
iconCls: 'fa fa-stop',
}],
diff --git a/www/manager6/qemu/CmdMenu.js b/www/manager6/qemu/CmdMenu.js
index 4f59d5f7..834577e7 100644
--- a/www/manager6/qemu/CmdMenu.js
+++ b/www/manager6/qemu/CmdMenu.js
@@ -93,7 +93,13 @@ 

[pve-devel] [PATCH docs v5 3/3] add AMD SEV documentation

2024-04-19 Thread Markus Frank
add documentation for the "[PATCH qemu-server] config: QEMU AMD SEV enable"
patch.

Signed-off-by: Markus Frank 
---
v5:
* removed NodeConfig part

v4:
* added text that SEV-ES is experimental

 qm.adoc | 103 
 1 file changed, 103 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 45e3a57..6590373 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -715,6 +715,109 @@ systems.
 When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB
 of RAM available to the host.
 
+[[qm_memory_encryption]]
+Memory Encryption
+~
+
+[[qm_memory_encryption_sev]]
+AMD SEV
+^^^
+
+SEV (Secure Encrypted Virtualization) enables memory encryption per VM using
+AES-128 encryption and the AMD Secure Processor.
+
+SEV-ES (Secure Encrypted Virtualization-Encrypted State) in addition encrypts
+all CPU register contents when a VM stops running, to prevent leakage of
+information to the hypervisor. This feature is very experimental.
+
+*Host Requirements:*
+
+* AMD EPYC CPU
+* SEV-ES is only supported on AMD EPYC 7xx2 and newer
+* configure AMD memory encryption in the BIOS settings of the host machine
+* add "kvm_amd.sev=1" to kernel parameters if not enabled by default
+* add "mem_encrypt=on" to kernel parameters if you want to encrypt memory on 
the
+host (SME) see 
https://www.kernel.org/doc/Documentation/x86/amd-memory-encryption.txt
+* maybe increase the SWIOTLB, see https://github.com/AMDESE/AMDSEV#faq-4
+
+To check if SEV is enabled on the host, search for `sev` in dmesg and print out
+the SEV kernel parameter of kvm_amd:
+
+----
+# dmesg | grep -i sev
+[...] ccp 0000:45:00.1: sev enabled
+[...] ccp 0000:45:00.1: SEV API: 
+[...] SEV supported:  ASIDs
+[...] SEV-ES supported:  ASIDs
+# cat /sys/module/kvm_amd/parameters/sev
+Y
+----
+
+*Guest Requirements:*
+
+* edk2-OVMF
+* advisable to use Q35
+* The guest operating system must support SEV.
+
+*Limitations:*
+
+* Because the memory is encrypted, the reported memory usage on the host is always wrong.
+* Operations that involve saving or restoring memory like snapshots
+& live migration do not work yet or are attackable.
+https://github.com/PSPReverse/amd-sev-migration-attack
+* PCI passthrough is not supported.
+* SEV-ES is very experimental.
+* QEMU & AMD-SEV documentation is very limited.
+
+Example Configuration:
+
+----
+# qm set <vmid> -amd_sev type=std,nodbg=1,noks=1,kernel-hashes=1
+----
+
+The *type* defines the encryption technology (the "type=" prefix is not required).
+Available options are `std` and `es`.
+
+The QEMU *policy* parameter gets calculated with the *nodbg* and *noks*
+parameters. These parameters correspond to policy-bits 0 and 1. If *type* is
+*es*, policy-bit 2 is set to 1 so that SEV-ES is enabled. Policy-bit 3
+(nosend) is always set to 1 to prevent migration-attacks. For more information
+on how to calculate the policy see:
+https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf[AMD SEV API Specification Chapter 3]
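The policy-bit arithmetic described in the quoted section can be sketched as follows (an illustration of the documented bit layout only, not the qemu-server code; the function name is made up):

```python
def sev_policy(sev_type="std", nodbg=False, noks=False):
    """Compute the QEMU sev-guest 'policy' value from the documented bits.

    Bit 0: disallow debugging, bit 1: disallow key sharing,
    bit 2: SEV-ES, bit 3 (nosend): always set to block migration attacks.
    """
    policy = 1 << 3  # nosend is always on
    if nodbg:
        policy |= 1 << 0
    if noks:
        policy |= 1 << 1
    if sev_type == "es":
        policy |= 1 << 2
    return policy

print(hex(sev_policy("std", nodbg=True, noks=True)))  # → 0xb
print(hex(sev_policy("es")))                          # → 0xc
```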
+
+The *kernel-hashes* option is off by default, for backward compatibility with
+older OVMF images and guests that do not measure the kernel/initrd.
+See https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg02598.html
+
+*Check if SEV is working on the guest*
+
+Method 1 - dmesg:
+
+Output should look like this.
+
+----
+# dmesg | grep -i sev
+AMD Memory Encryption Features active: SEV
+----
+
+Method 2 - MSR 0xc0010131 (MSR_AMD64_SEV):
+
+Output should be 1.
+
+----
+# apt install msr-tools
+# modprobe msr
+# rdmsr -a 0xc0010131
+1
+----
+
+Links:
+
+* https://developer.amd.com/sev/
+* https://github.com/AMDESE/AMDSEV
+* https://www.qemu.org/docs/master/system/i386/amd-memory-encryption.html
+* https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf
+* https://documentation.suse.com/sles/15-SP1/html/SLES-amd-sev/index.html
 
 [[qm_network_device]]
 Network Device
-- 
2.39.2






[pve-devel] [PATCH qemu-server v5 1/3] add C program to get AMD SEV hardware parameters from CPUID

2024-04-19 Thread Markus Frank
Implement a systemd service that runs a C program that extracts AMD SEV
hardware parameters such as reduced-phys-bits and cbitpos from CPUID at boot
time, checks whether SEV, SEV-ES & SEV-SNP are supported, and outputs these details
as JSON to /run/amd-sev-params.

Signed-off-by: Markus Frank 
---
 Makefile|  1 +
 amd-sev-support/Makefile| 21 +++
 amd-sev-support/amd-sev-support.c   | 48 +
 amd-sev-support/amd-sev-support.service | 12 +++
 4 files changed, 82 insertions(+)
 create mode 100644 amd-sev-support/Makefile
 create mode 100644 amd-sev-support/amd-sev-support.c
 create mode 100644 amd-sev-support/amd-sev-support.service

diff --git a/Makefile b/Makefile
index 133468d..ccd12a1 100644
--- a/Makefile
+++ b/Makefile
@@ -65,6 +65,7 @@ install: $(PKGSOURCES)
install -m 0644 -D bootsplash.jpg $(DESTDIR)/usr/share/$(PACKAGE)
$(MAKE) -C PVE install
$(MAKE) -C qmeventd install
+   $(MAKE) -C amd-sev-support install
$(MAKE) -C qemu-configs install
$(MAKE) -C vm-network-scripts install
install -m 0755 qm $(DESTDIR)$(SBINDIR)
diff --git a/amd-sev-support/Makefile b/amd-sev-support/Makefile
new file mode 100644
index 000..022ed94
--- /dev/null
+++ b/amd-sev-support/Makefile
@@ -0,0 +1,21 @@
+DESTDIR=
+PREFIX=/usr
+SBINDIR=${PREFIX}/libexec/qemu-server
+SERVICEDIR=/lib/systemd/system
+
+CC ?= gcc
+CFLAGS += -O2 -fanalyzer -Werror -Wall -Wextra -Wpedantic -Wtype-limits 
-Wl,-z,relro -std=gnu11
+
+amd-sev-support: amd-sev-support.c
+   $(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)
+
+.PHONY: install
+install: amd-sev-support
+   install -d ${DESTDIR}/${SBINDIR}
+   install -d ${DESTDIR}${SERVICEDIR}
+   install -m 0644 amd-sev-support.service ${DESTDIR}${SERVICEDIR}
+   install -m 0755 amd-sev-support ${DESTDIR}${SBINDIR}
+
+.PHONY: clean
+clean:
+   rm -f amd-sev-support
diff --git a/amd-sev-support/amd-sev-support.c 
b/amd-sev-support/amd-sev-support.c
new file mode 100644
index 000..73a7bd8
--- /dev/null
+++ b/amd-sev-support/amd-sev-support.c
@@ -0,0 +1,48 @@
+#include <stdio.h>
+#include <stdint.h>
+#include <stdbool.h>
+
+int main() {
+uint32_t eax, ebx, ecx, edx;
+
+// query Encrypted Memory Capabilities, see:
+// https://en.wikipedia.org/wiki/CPUID#EAX=8000001Fh:_Encrypted_Memory_Capabilities
+uint32_t query_function = 0x8000001F;
+asm volatile("cpuid"
+: "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
+: "0"(query_function)
+);
+
+bool sev_support = (eax & (1<<1)) != 0;
+bool sev_es_support = (eax & (1<<3)) != 0;
+bool sev_snp_support = (eax & (1<<4)) != 0;
+
+uint8_t cbitpos = ebx & 0x3f;
+uint8_t reduced_phys_bits = (ebx >> 6) & 0x3f;
+
+FILE *file;
+char *filename = "/run/amd-sev-params";
+
+file = fopen(filename, "w");
+if (file == NULL) {
+   perror("Error opening file");
+   return 1;
+}
+
+fprintf(file, "{"
+   " \"cbitpos\": %u,"
+   " \"reduced-phys-bits\": %u,"
+   " \"sev\": %s,"
+   " \"sev-es\": %s,"
+   " \"sev-snp\": %s"
+   " }\n",
+   cbitpos,
+   reduced_phys_bits,
+   sev_support ? "true" : "false",
+   sev_es_support ? "true" : "false",
+   sev_snp_support ? "true" : "false"
+);
+
+fclose(file);
+return 0;
+}
diff --git a/amd-sev-support/amd-sev-support.service 
b/amd-sev-support/amd-sev-support.service
new file mode 100644
index 000..466dd0a
--- /dev/null
+++ b/amd-sev-support/amd-sev-support.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Read AMD SEV Parameters
+RequiresMountsFor=/run
+Before=pve-ha-lrm.service
+Before=pve-guests.service
+
+[Service]
+ExecStart=/usr/libexec/qemu-server/amd-sev-support
+Type=forking
+
+[Install]
+WantedBy=multi-user.target
-- 
2.39.2
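As a cross-check of the bit extraction in the C program above (CPUID leaf 0x8000001F: EBX bits 5:0 hold the C-bit position, bits 11:6 the physical address-space reduction), a small sketch with a hypothetical register value:

```python
def decode_sev_ebx(ebx):
    """Split the EBX register of CPUID leaf 0x8000001F into its two fields."""
    return {
        "cbitpos": ebx & 0x3F,                   # bits 5:0: C-bit position
        "reduced_phys_bits": (ebx >> 6) & 0x3F,  # bits 11:6: address-space reduction
    }

# Hypothetical EBX value encoding cbitpos=51, reduced-phys-bits=1:
print(decode_sev_ebx((1 << 6) | 51))  # → {'cbitpos': 51, 'reduced_phys_bits': 1}
```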






[pve-devel] [PATCH qemu-server v5 2/3] config: QEMU AMD SEV enable

2024-04-19 Thread Markus Frank
This patch is for enabling AMD SEV (Secure Encrypted
Virtualization) support in QEMU

VM-Config-Examples:
amd_sev: type=std,nodbg=1,noks=1
amd_sev: es,nodbg=1,kernel-hashes=1

Node-Config-Example (gets generated automatically):
amd_sev: cbitpos=47,reduced-phys-bits=1

kernel-hashes, reduced-phys-bits & cbitpos correspond to the variables
with the same name in qemu.

kernel-hashes=1 adds kernel-hashes to enable measured linux kernel
launch since it is per default off for backward compatibility.

reduced-phys-bits and cbitpos are system specific and are read out by
the amd-sev-support.service on boot and saved to the /run/amd-sev-params
file. This file is parsed and then used by qemu-server to correctly
start an AMD SEV VM.

type=std stands for standard sev to differentiate it from sev-es (es)
or sev-snp (snp) when support is upstream.

QEMU's sev-guest policy gets calculated with the parameters nodbg & noks.
These parameters correspond to policy-bits 0 & 1.
If type is 'es' then policy-bit 2 gets set to 1 to activate SEV-ES.
Policy bit 3 (nosend) is always set to 1, because migration
features for sev are not upstream yet and are attackable.

SEV-ES is highly experimental since it could not be tested.

see coherent doc patch

Signed-off-by: Markus Frank 
---
v5:
* parse /run/amd-sev-params for hardware parameters
* removed NodeConfig dependency
* only disallow live-migration and snapshots with vmstate
  -> allow offline migration and snapshots without vmstate

v4:
* reduced lines of code
* added text that SEV-ES is experimental

 PVE/API2/Qemu.pm   |  10 +
 PVE/QemuMigrate.pm |   4 ++
 PVE/QemuServer.pm  | 102 +
 3 files changed, 116 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 497987f..b35ab69 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4490,6 +4490,10 @@ __PACKAGE__->register_method({
push $local_resources->@*, "clipboard=vnc";
}
 
+   if ($res->{running} && $vmconf->{amd_sev}) {
+   push $local_resources->@*, "amd_sev";
+   }
+
# if vm is not running, return target nodes where local storage/mapped 
devices are available
# for offline migration
if (!$res->{running}) {
@@ -5170,6 +5174,12 @@ __PACKAGE__->register_method({
die "unable to use snapshot name 'pending' (reserved name)\n"
if lc($snapname) eq 'pending';
 
+   my $conf = PVE::QemuConfig->load_config($vmid);
+   if ($param->{vmstate} && $conf->{amd_sev}) {
+   die "Snapshots that include memory are not supported while memory"
+   ." is encrypted by AMD SEV.\n"
+   }
+
my $realcmd = sub {
PVE::Cluster::log_msg('info', $authuser, "snapshot VM $vmid: 
$snapname");
PVE::QemuConfig->snapshot_create($vmid, $snapname, 
$param->{vmstate},
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 8d9b35a..7db18b2 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -260,6 +260,10 @@ sub prepare {
die "VMs with 'clipboard' set to 'vnc' are not live migratable!\n";
 }
 
+if ($running && $conf->{'amd_sev'}) {
+   die "VMs with AMD SEV are not live migratable!\n";
+}
+
 my $vollist = PVE::QemuServer::get_vm_volumes($conf);
 
 my $storages = {};
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6e2c805..e1ad190 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -185,6 +185,59 @@ my $agent_fmt = {
 },
 };
 
+my $sev_fmt = {
+type => {
+   description => "Enable standard SEV with type='std' or enable 
experimental SEV-ES"
+   ." with the 'es' option.",
+   type => 'string',
+   default_key => 1,
+   format_description => "qemu-sev-type",
+   enum => ['std', 'es'],
+   maxLength => 3,
+},
+nodbg => {
+   description => "Sets policy bit 0 to 1 to disallow debugging of guest",
+   type => 'boolean',
+   format_description => "qemu-sev-nodbg",
+   default => 0,
+   optional => 1,
+},
+noks => {
+   description => "Sets policy bit 1 to 1 to disallow key sharing with 
other guests",
+   type => 'boolean',
+   format_description => "qemu-sev-noks",
+   default => 0,
+   optional => 1,
+},
+"kernel-hashes" => {
+   description => "Add kernel hashes to guest firmware for measured linux 
kernel launch",
+   type => 'boolean',
+   format_description => "qemu-sev-kernel-hashes",
+   default => 0,
+   optional => 1,
+},
+};
+PVE::JSONSchema::register_format('pve-qemu-sev-fmt', $sev_fmt);
+
+my $sev_node_fmt = {
+cbitpos => {
+   description => "C-bit: marks if a memory page is protected. System 
dependent",
+   type => 'integer',
+   default => 47,
+   optional => 1,
+   minimum => 0,
+   maximum => 100,
+},
+'reduced-phys-bits' => {
+   description => "Number of bits the physical address space is reduced 
by. System dependent",
+   type 

Re: [pve-devel] [PATCH many 00/19] notifications: move template strings to template files; PBS preparations

2024-04-19 Thread Fiona Ebner
On 09.04.24 at 15:25, Lukas Wagner wrote:
> The notification system uses handlebar templates to render the subject
> and the body of notifications. Previously, the template strings were
> defined inline at the call site. This patch series extracts the templates
> into template files and installs them at
>   /usr/share/proxmox-ve/templates/default
> 
> where they are stored as -{body,subject}.{txt,html}.hbs
> 
> The 'default' part in the path is a preparation for translated
> notifications and/or overridable notification templates.
> Future work could provide notifications translated to e.g. German
> in `templates/de` or similar. This will be a first for having
> translated strings on the backend-side, so there is need for further
> research.
> 
> The patches for `proxmox` also include some preparatory patches for
> the upcoming integration of the notification system into PBS. They
> are not needed for PVE, but I included them here since Folke and I
> tested the PVE changes with them applied. IMO they should just be
> applied with the same version bump.
> The list of optional, preparatory patches is:
>   notify: give each notification a unique ID
>   notify: api: add get_targets
>   notify: derive `api` for Deleteable*Property
>   notify: derive Deserialize/Serialize for Notification struct
>   notify: pbs context: include nodename in default sendmail author
>   notify: renderer: add relative-percentage helper from PBS
> 
> Folke kindly did some off-list testing before this was posted, hence
> his T-bs were included.
> 
> Bumps/dependencies:
>   - proxmox_notify
>   - libproxmox-rs-perl/libpve-rs-perl (needs bumped proxmox_notify)
>   - libpve-notify-perl  (needs bumped 
> libproxmox-rs-perl/libpve-rs-perl)
>   - pve-ha-manager (needs bumped libpve-notify-perl)
>   - pve-manager (needs bumped libpve-notify-perl)
>   - proxmox-mail-forward (optional. should not be affected by any 
> changes, but I think
> it should be also be bumped for any larger proxmox_notify changes to 
> avoid any
> weird hidden regressions due to mismatches of proxmox_notify
> 

Versioned breaks required:
- new pve-cluster will break old pve-manager and old pve-ha-manager
- new libproxmox-rs-perl/libpve-rs-perl will break old pve-cluster

Still not too sure about putting the templates in
/usr/share/proxmox-ve/, but maybe @Thomas or @Fabian can chime in on that.

While I'm rather unfamiliar with the code and not much into Rust,
everything looked good to me (just a few nits). So, with a grain of salt:

Reviewed-by: Fiona Ebner 





Re: [pve-devel] [PATCH manager 19/19] notifications: use named templates instead of in-code templates

2024-04-19 Thread Fiona Ebner
On 09.04.24 at 15:25, Lukas Wagner wrote:
> diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
> index 152eb3e5..2ea626f0 100644
> --- a/PVE/VZDump.pm
> +++ b/PVE/VZDump.pm

The existing $subject_template and $body_template could be removed now
like for the others

> diff --git a/templates/Makefile b/templates/Makefile
> new file mode 100644
> index ..b0add81e
> --- /dev/null
> +++ b/templates/Makefile
> @@ -0,0 +1,24 @@
> +NOTIFICATION_TEMPLATES=  
> \
> + default/test-subject.txt.hbs\
> + default/test-body.txt.hbs   \
> + default/test-body.html.hbs  \
> + default/vzdump-subject.txt.hbs  \
> + default/vzdump-body.txt.hbs \
> + default/vzdump-body.html.hbs\
> + default/replication-subject.txt.hbs \
> + default/replication-body.txt.hbs\
> + default/replication-body.html.hbs   \
> + default/package-updates-subject.txt.hbs \
> + default/package-updates-body.txt.hbs\
> + default/package-updates-body.html.hbs   \
> +

Nit: backslashes are not aligned

> diff --git a/templates/default/replication-body.html.hbs 
> b/templates/default/replication-body.html.hbs
> new file mode 100644
> index ..d1dea6a1
> --- /dev/null
> +++ b/templates/default/replication-body.html.hbs
> @@ -0,0 +1,18 @@
> +
> +
> +Replication job '{{job-id}}' with target '{{job-target}}' and 
> schedule '{{job-schedule}}' failed!
> +
> +Last successful sync: {{timestamp last-sync}}
> +Next sync try: {{timestamp next-sync}}
> +Failure count: {{failure-count}}
> +
> +{{#if (eq failure-count 3)}}
> +Note: The system  will now reduce the frequency of error 
> reports, as the job appears to be stuck.
> +{{/if}}
> +
> +Error:
> +
> +{{error}}

Nit: is there a special reason for not indenting this?

> +
> +
> +





[pve-devel] [PATCH manager v2 5/6] ui: guest import: add storage selector for ova extraction storage

2024-04-19 Thread Dominik Csapak
but only when we detect the 'ova-needs-extracting' warning.
This can be used to select the storage where the disks contained in an
OVA will be extracted to temporarily.

Signed-off-by: Dominik Csapak 
---
new in v2
 www/manager6/window/GuestImport.js | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/www/manager6/window/GuestImport.js 
b/www/manager6/window/GuestImport.js
index 2c8bc189..3b328f36 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -303,6 +303,7 @@ Ext.define('PVE.window.GuestImport', {
os: 'l26',
maxCdDrives: false,
uniqueMACAdresses: false,
+   isOva: false,
warnings: [],
},
 
@@ -432,6 +433,10 @@ Ext.define('PVE.window.GuestImport', {
}
}
 
+   if (config['import-extraction-storage'] === '') {
+   delete config['import-extraction-storage'];
+   }
+
return config;
},
 
@@ -553,6 +558,22 @@ Ext.define('PVE.window.GuestImport', {
allowBlank: false,
fieldLabel: gettext('Default Bridge'),
},
+   {
+   xtype: 'pveStorageSelector',
+   reference: 'extractionStorage',
+   fieldLabel: gettext('Extraction Storage'),
+   storageContent: 'images',
+   emptyText: gettext('Import Storage'),
+   autoSelect: false,
+   name: 'import-extraction-storage',
+   disabled: true,
+   hidden: true,
+   allowBlank: true,
+   bind: {
+   disabled: '{!isOva}',
+   hidden: '{!isOva}',
+   },
+   },
],
 
columnB: [
@@ -918,6 +939,7 @@ Ext.define('PVE.window.GuestImport', {
 
me.lookup('defaultStorage').setNodename(me.nodename);
me.lookup('defaultBridge').setNodename(me.nodename);
+   me.lookup('extractionStorage').setNodename(me.nodename);
 
let renderWarning = w => {
const warningsCatalogue = {
@@ -999,6 +1021,7 @@ Ext.define('PVE.window.GuestImport', {
}
 
me.getViewModel().set('warnings', data.warnings.map(w => 
renderWarning(w)));
+   me.getViewModel().set('isOva', data.warnings.map(w => 
w.type).indexOf('ova-needs-extracting') !== -1);
 
let osinfo = PVE.Utils.get_kvm_osinfo(me.vmConfig.ostype ?? '');
let prepareForVirtIO = (me.vmConfig.ostype ?? 
'').startsWith('w') && (me.vmConfig.bios ?? '').indexOf('ovmf') !== -1;
-- 
2.39.2






[pve-devel] [PATCH qemu-server v2 4/4] api: create: add 'import-extraction-storage' parameter

2024-04-19 Thread Dominik Csapak
this is to override the target extraction storage for the optional disk
extraction for 'import-from'. This way, if the storage does not
support the content type 'images', one can give an alternative one.

Signed-off-by: Dominik Csapak 
---
new in v2
 PVE/API2/Qemu.pm | 56 +---
 1 file changed, 48 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d32967dc..74d0e240 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -128,7 +128,9 @@ my $check_drive_param = sub {
 };
 
 my $check_storage_access = sub {
-   my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
+   my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage, 
$extraction_storage) = @_;
+
+   my $needs_extraction = 0;
 
$foreach_volume_with_alloc->($settings, sub {
my ($ds, $drive) = @_;
@@ -169,9 +171,13 @@ my $check_storage_access = sub {
if $vtype ne 'images' && $vtype ne 'import';
 
if (PVE::GuestImport::copy_needs_extraction($src_image)) {
-   raise_param_exc({ $ds => "$src_image is not on an storage 
with 'images' content type."})
-   if !$scfg->{content}->{images};
-   $rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
+   $needs_extraction = 1;
+   if (!defined($extraction_storage)) {
+   raise_param_exc({ $ds => "$src_image is not on an 
storage with 'images'"
+   ." content type and no 'import-extraction-storage' 
was given."})
+   if !$scfg->{content}->{images};
+   $rpcenv->check($authuser, "/storage/$storeid", 
['Datastore.AllocateSpace']);
+   }
}
}
 
@@ -183,6 +189,14 @@ my $check_storage_access = sub {
}
 });
 
+if ($needs_extraction && defined($extraction_storage)) {
+   my $scfg = PVE::Storage::storage_config($storecfg, $extraction_storage);
+   raise_param_exc({ 'import-extraction-storage' => "$extraction_storage 
does not support"
+   ." 'images' content type or is not file based."})
+   if !$scfg->{content}->{images} || !$scfg->{path};
+   $rpcenv->check($authuser, "/storage/$extraction_storage", 
['Datastore.AllocateSpace']);
+}
+
$rpcenv->check($authuser, "/storage/$settings->{vmstatestorage}", 
['Datastore.AllocateSpace'])
if defined($settings->{vmstatestorage});
 };
@@ -326,7 +340,7 @@ my $import_from_volid = sub {
 
 # Note: $pool is only needed when creating a VM, because pool permissions
 # are automatically inherited if VM already exists inside a pool.
-my sub create_disks : prototype($$) {
+my sub create_disks : prototype($$$) {
 my (
$rpcenv,
$authuser,
@@ -338,6 +352,7 @@ my sub create_disks : prototype($$) {
$settings,
$default_storage,
$is_live_import,
+   $extraction_storage,
 ) = @_;
 
 my $vollist = [];
@@ -405,7 +420,8 @@ my sub create_disks : prototype($$) {
if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed 
volume
if (PVE::GuestImport::copy_needs_extraction($source)) { # 
needs extraction beforehand
print "extracting $source\n";
-   $source = 
PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
+   $source = 
PVE::GuestImport::extract_disk_from_import_file(
+   $source, $vmid, $extraction_storage);
print "finished extracting to $source\n";
push @$delete_sources, $source;
}
@@ -925,6 +941,12 @@ __PACKAGE__->register_method({
default => 0,
description => "Start VM after it was created 
successfully.",
},
+   'import-extraction-storage' => 
get_standard_option('pve-storage-id', {
+   description => "Storage to put extracted images when using 
'import-from' that"
+   ." needs extraction",
+   optional => 1,
+   completion => \::QemuServer::complete_storage,
+   }),
},
1, # with_disk_alloc
),
@@ -951,6 +973,7 @@ __PACKAGE__->register_method({
my $storage = extract_param($param, 'storage');
my $unique = extract_param($param, 'unique');
my $live_restore = extract_param($param, 'live-restore');
+   my $extraction_storage = extract_param($param, 
'import-extraction-storage');
 
if (defined(my $ssh_keys = $param->{sshkeys})) {
$ssh_keys = URI::Escape::uri_unescape($ssh_keys);
@@ -1010,7 +1033,8 @@ __PACKAGE__->register_method({
if (scalar(keys $param->%*) > 0) {
&$resolve_cdrom_alias($param);
 
-  

[pve-devel] [PATCH storage v2 08/10] api: allow ova upload/download

2024-04-19 Thread Dominik Csapak
introducing a separate regex that only matches ova, since
uploading/downloading ovfs does not make sense (the disks would then be
missing).
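The difference between the two regexes can be sketched like this
(a Python re-implementation of the Perl patterns, for illustration
only):

```python
import re

# $IMPORT_EXT_RE_1 accepts both .ova and .ovf; the new upload-only
# pattern ($UPLOAD_IMPORT_EXT_RE_1) accepts just .ova, since a lone
# .ovf descriptor would be missing its disks.
IMPORT_EXT_RE = re.compile(r"[^/]+\.(ov[af])$")
UPLOAD_IMPORT_EXT_RE = re.compile(r"[^/]+\.(ova)$")

assert IMPORT_EXT_RE.search("appliance.ovf")
assert IMPORT_EXT_RE.search("appliance.ova")
assert UPLOAD_IMPORT_EXT_RE.search("appliance.ova")
assert not UPLOAD_IMPORT_EXT_RE.search("appliance.ovf")
```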

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
changes from v1:
* typo fixes
* added OVA mention to description
 src/PVE/API2/Storage/Status.pm | 18 ++
 src/PVE/Storage.pm | 11 +++
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index acde730..6c0c1e5 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -369,7 +369,7 @@ __PACKAGE__->register_method ({
 name => 'upload',
 path => '{storage}/upload',
 method => 'POST',
-description => "Upload templates and ISO images.",
+description => "Upload templates, ISO images and OVAs.",
 permissions => {
check => ['perm', '/storage/{storage}', ['Datastore.AllocateTemplate']],
 },
@@ -382,7 +382,7 @@ __PACKAGE__->register_method ({
content => {
description => "Content type.",
type => 'string', format => 'pve-storage-content',
-   enum => ['iso', 'vztmpl'],
+   enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This 
will be normalized!",
@@ -448,6 +448,11 @@ __PACKAGE__->register_method ({
raise_param_exc({ filename => "wrong file extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
+   } elsif ($content eq 'import') {
+   if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+   raise_param_exc({ filename => "wrong file extension" });
+   }
+   $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content type '$content' not 
allowed" });
}
@@ -544,7 +549,7 @@ __PACKAGE__->register_method({
 name => 'download_url',
 path => '{storage}/download-url',
 method => 'POST',
-description => "Download templates and ISO images by using an URL.",
+description => "Download templates, ISO images and OVAs by using an URL.",
 proxyto => 'node',
 permissions => {
description => 'Requires allocation access on the storage and as this 
allows one to probe'
@@ -572,7 +577,7 @@ __PACKAGE__->register_method({
content => {
description => "Content type.", # TODO: could be optional & 
detected in most cases
type => 'string', format => 'pve-storage-content',
-   enum => ['iso', 'vztmpl'],
+   enum => ['iso', 'vztmpl', 'import'],
},
filename => {
description => "The name of the file to create. Caution: This 
will be normalized!",
@@ -642,6 +647,11 @@ __PACKAGE__->register_method({
raise_param_exc({ filename => "wrong file extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $storage);
+   } elsif ($content eq 'import') {
+   if ($filename !~ m![^/]+$PVE::Storage::UPLOAD_IMPORT_EXT_RE_1$!) {
+   raise_param_exc({ filename => "wrong file extension" });
+   }
+   $path = PVE::Storage::get_import_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content-type '$content' is not 
allowed" });
}
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 863dbdb..c628ebd 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -116,6 +116,8 @@ our $BACKUP_EXT_RE_2 = 
qr/\.(tgz|(?:tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPR
 
 our $IMPORT_EXT_RE_1 = qr/\.(ov[af])/;
 
+our $UPLOAD_IMPORT_EXT_RE_1 = qr/\.(ova)/;
+
 our $SAFE_CHAR_CLASS_RE = qr/[a-zA-Z0-9\-\.\+\=\_]/;
 
 # FIXME remove with PVE 8.0, add versioned breaks for pve-manager
@@ -464,6 +466,15 @@ sub get_iso_dir {
 return $plugin->get_subdir($scfg, 'iso');
 }
 
+sub get_import_dir {
+my ($cfg, $storeid) = @_;
+
+my $scfg = storage_config($cfg, $storeid);
+my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+return $plugin->get_subdir($scfg, 'import');
+}
+
 sub get_vztmpl_dir {
 my ($cfg, $storeid) = @_;
 
-- 
2.39.2






[pve-devel] [PATCH manager v2 3/6] ui: enable import content type for relevant storages

2024-04-19 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
changes from v1:
* added import to glusterfs
 www/manager6/Utils.js| 1 +
 www/manager6/form/ContentTypeSelector.js | 2 +-
 www/manager6/storage/CephFSEdit.js   | 2 +-
 www/manager6/storage/GlusterFsEdit.js| 2 +-
 4 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 84d5eef1..9a4c3290 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -690,6 +690,7 @@ Ext.define('PVE.Utils', {
'iso': gettext('ISO image'),
'rootdir': gettext('Container'),
'snippets': gettext('Snippets'),
+   'import': gettext('Import'),
 },
 
 volume_is_qemu_backup: function(volid, format) {
diff --git a/www/manager6/form/ContentTypeSelector.js 
b/www/manager6/form/ContentTypeSelector.js
index d0fa0b08..431bd948 100644
--- a/www/manager6/form/ContentTypeSelector.js
+++ b/www/manager6/form/ContentTypeSelector.js
@@ -10,7 +10,7 @@ Ext.define('PVE.form.ContentTypeSelector', {
me.comboItems = [];
 
if (me.cts === undefined) {
-   me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 
'snippets'];
+   me.cts = ['images', 'iso', 'vztmpl', 'backup', 'rootdir', 
'snippets', 'import'];
}
 
Ext.Array.each(me.cts, function(ct) {
diff --git a/www/manager6/storage/CephFSEdit.js 
b/www/manager6/storage/CephFSEdit.js
index 6a95a00a..2cdcf7cd 100644
--- a/www/manager6/storage/CephFSEdit.js
+++ b/www/manager6/storage/CephFSEdit.js
@@ -92,7 +92,7 @@ Ext.define('PVE.storage.CephFSInputPanel', {
me.column2 = [
{
xtype: 'pveContentTypeSelector',
-   cts: ['backup', 'iso', 'vztmpl', 'snippets'],
+   cts: ['backup', 'iso', 'vztmpl', 'snippets', 'import'],
fieldLabel: gettext('Content'),
name: 'content',
value: 'backup',
diff --git a/www/manager6/storage/GlusterFsEdit.js 
b/www/manager6/storage/GlusterFsEdit.js
index 8155d9c2..df7fe23f 100644
--- a/www/manager6/storage/GlusterFsEdit.js
+++ b/www/manager6/storage/GlusterFsEdit.js
@@ -99,7 +99,7 @@ Ext.define('PVE.storage.GlusterFsInputPanel', {
},
{
xtype: 'pveContentTypeSelector',
-   cts: ['images', 'iso', 'backup', 'vztmpl', 'snippets'],
+   cts: ['images', 'iso', 'backup', 'vztmpl', 'snippets', 
'import'],
name: 'content',
value: 'images',
multiSelect: true,
-- 
2.39.2






[pve-devel] [PATCH storage v2 04/10] ovf: implement parsing the ostype

2024-04-19 Thread Dominik Csapak
use the standard's info about the ostypes to map them to our own
(see the comment for a link to the relevant part of the dmtf schema).

every type that is not listed is mapped to 'other', so there is no need
to have it in the list.
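The lookup strategy is a plain table with a default; a Python sketch
(table truncated to a few entries, see the full map in the patch):

```python
# A few entries from the CIM id -> PVE ostype table; any id that is
# not listed falls back to 'other', so 'other' needs no entry.
OSTYPE_IDS = {36: "l26", 77: "w2k8", 103: "win7", 119: "win10"}

def get_ostype(cim_id):
    return OSTYPE_IDS.get(cim_id, "other")

assert get_ostype(103) == "win7"
assert get_ostype(1) == "other"  # unlisted id -> 'other'
```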

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
changes from v1:
* added comment for test cases why it's 'other'

 src/PVE/GuestImport/OVF.pm | 69 ++
 src/test/run_ovf_tests.pl  |  5 +++
 2 files changed, 74 insertions(+)

diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 6b79078..cf08cb6 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -55,6 +55,71 @@ my @resources = (
 { id => 35, dtmf_name => 'Vendor Reserved'}
 );
 
+# see https://schemas.dmtf.org/wbem/cim-html/2.55.0+/CIM_OperatingSystem.html
+my $ostype_ids = {
+18 => 'winxp', # 'WINNT',
+29 => 'solaris', # 'Solaris',
+36 => 'l26', # 'LINUX',
+58 => 'w2k', # 'Windows 2000',
+67 => 'wxp', #'Windows XP',
+69 => 'w2k3', # 'Microsoft Windows Server 2003',
+70 => 'w2k3', # 'Microsoft Windows Server 2003 64-Bit',
+71 => 'wxp', # 'Windows XP 64-Bit',
+72 => 'wxp', # 'Windows XP Embedded',
+73 => 'wvista', # 'Windows Vista',
+74 => 'wvista', # 'Windows Vista 64-Bit',
+75 => 'wxp', # 'Windows Embedded for Point of Service', ??
+76 => 'w2k8', # 'Microsoft Windows Server 2008',
+77 => 'w2k8', # 'Microsoft Windows Server 2008 64-Bit',
+79 => 'l26', # 'RedHat Enterprise Linux',
+80 => 'l26', # 'RedHat Enterprise Linux 64-Bit',
+81 => 'solaris', #'Solaris 64-Bit',
+82 => 'l26', # 'SUSE',
+83 => 'l26', # 'SUSE 64-Bit',
+84 => 'l26', # 'SLES',
+85 => 'l26', # 'SLES 64-Bit',
+87 => 'l26', # 'Novell Linux Desktop',
+89 => 'l26', # 'Mandriva',
+90 => 'l26', # 'Mandriva 64-Bit',
+91 => 'l26', # 'TurboLinux',
+92 => 'l26', # 'TurboLinux 64-Bit',
+93 => 'l26', # 'Ubuntu',
+94 => 'l26', # 'Ubuntu 64-Bit',
+95 => 'l26', # 'Debian',
+96 => 'l26', # 'Debian 64-Bit',
+97 => 'l24', # 'Linux 2.4.x',
+98 => 'l24', # 'Linux 2.4.x 64-Bit',
+99 => 'l26', # 'Linux 2.6.x',
+100 => 'l26', # 'Linux 2.6.x 64-Bit',
+101 => 'l26', # 'Linux 64-Bit',
+103 => 'win7', # 'Microsoft Windows Server 2008 R2',
+105 => 'win7', # 'Microsoft Windows 7',
+106 => 'l26', # 'CentOS 32-bit',
+107 => 'l26', # 'CentOS 64-bit',
+108 => 'l26', # 'Oracle Linux 32-bit',
+109 => 'l26', # 'Oracle Linux 64-bit',
+111 => 'win8', # 'Microsoft Windows Server 2011', ??
+112 => 'win8', # 'Microsoft Windows Server 2012',
+113 => 'win8', # 'Microsoft Windows 8',
+114 => 'win8', # 'Microsoft Windows 8 64-bit',
+115 => 'win8', # 'Microsoft Windows Server 2012 R2',
+116 => 'win10', # 'Microsoft Windows Server 2016',
+117 => 'win8', # 'Microsoft Windows 8.1',
+118 => 'win8', # 'Microsoft Windows 8.1 64-bit',
+119 => 'win10', # 'Microsoft Windows 10',
+120 => 'win10', # 'Microsoft Windows 10 64-bit',
+121 => 'win10', # 'Microsoft Windows Server 2019',
+122 => 'win11', # 'Microsoft Windows 11 64-bit',
+123 => 'win11', # 'Microsoft Windows Server 2022',
+# others => 'other',
+};
+
+sub get_ostype {
+my ($id) = @_;
+
+return $ostype_ids->{$id} // 'other';
+}
+
 sub find_by {
 my ($key, $param) = @_;
 foreach my $resource (@resources) {
@@ -160,6 +225,10 @@ sub parse_ovf {
 my 
$xpath_find_disks="/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${disk_id}]";
 my @disk_items = $xpc->findnodes($xpath_find_disks);
 
+my $xpath_find_ostype_id = 
"/ovf:Envelope/ovf:VirtualSystem/ovf:OperatingSystemSection/\@ovf:id";
+my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
+$qm->{ostype} = get_ostype($ostype_id);
+
 # disks metadata is split in four different xml elements:
 # * as an Item node of type DiskDrive in the VirtualHardwareSection
 # * as an Disk node in the DiskSection
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 5a80ab2..c433c9d 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -59,13 +59,18 @@ print "\ntesting vm.conf extraction\n";
 is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
 is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
 is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
+is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correct');
 
 is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
 is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
 is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
+# older esxi/ovf standard used 'other' for windows10
+is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
 
 is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name 
is correct');
 is($win10noNs->{qm}->{memory}, '6144', 

[pve-devel] [PATCH manager v2 4/6] ui: enable upload/download/remove buttons for 'import' type storages

2024-04-19 Thread Dominik Csapak
but only for non-esxi ones, since that storage type does not allow
uploading/downloading

Signed-off-by: Dominik Csapak 
---
changes from v1:
* show remove button for non-esxi
* order extensions alphabetically
* add missing '.' for ova extension

 www/manager6/storage/Browser.js| 9 +++--
 www/manager6/window/UploadToStorage.js | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 2123141d..934ce706 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -28,7 +28,9 @@ Ext.define('PVE.storage.Browser', {
let res = storageInfo.data;
let plugin = res.plugintype;
 
-   me.items = plugin !== 'esxi' ? [
+   let isEsxi = plugin === 'esxi';
+
+   me.items = !isEsxi ? [
{
title: gettext('Summary'),
xtype: 'pveStorageSummary',
@@ -142,8 +144,11 @@ Ext.define('PVE.storage.Browser', {
iconCls: 'fa fa-desktop',
itemId: 'contentImport',
content: 'import',
-   useCustomRemoveButton: true, // hide default remove button
+   useCustomRemoveButton: isEsxi, // hide default remove 
button for esxi
showColumns: ['name', 'format'],
+   enableUploadButton: enableUpload && !isEsxi,
+   enableDownloadUrlButton: enableDownloadUrl && !isEsxi,
+   useUploadButton: !isEsxi,
itemdblclick: (view, record) => 
createGuestImportWindow(record),
tbar: [
{
diff --git a/www/manager6/window/UploadToStorage.js 
b/www/manager6/window/UploadToStorage.js
index 3c5bba88..cdf548a8 100644
--- a/www/manager6/window/UploadToStorage.js
+++ b/www/manager6/window/UploadToStorage.js
@@ -9,6 +9,7 @@ Ext.define('PVE.window.UploadToStorage', {
 title: gettext('Upload'),
 
 acceptedExtensions: {
+   'import': ['.ova'],
iso: ['.img', '.iso'],
vztmpl: ['.tar.gz', '.tar.xz', '.tar.zst'],
 },
-- 
2.39.2






[pve-devel] [PATCH storage v2 07/10] ovf: implement parsing nics

2024-04-19 Thread Dominik Csapak
by iterating over the relevant parts and trying to parse out the
'ResourceSubType'. The content of that is not standardized, but I only
ever found examples that are compatible with vmware, meaning it's
either 'e1000', 'e1000e' or 'vmxnet3' (in various capitalizations; thus
the `lc()`).

As a fallback I used e1000, since that is our default too, and it should
work for most guest operating systems.
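The normalization described above amounts to a lowercase plus
allow-list check; a Python sketch:

```python
# Mirrors the patch's logic: ResourceSubType is not standardized, so
# lowercase it and fall back to e1000 (the PVE default) when the value
# is not one of the known vmware-compatible models.
ALLOWED_NIC_MODELS = ("e1000", "e1000e", "vmxnet3")

def parse_nic_model(resource_subtype):
    model = (resource_subtype or "").lower()
    return model if model in ALLOWED_NIC_MODELS else "e1000"

assert parse_nic_model("E1000E") == "e1000e"
assert parse_nic_model("PCNet32") == "e1000"  # unknown -> default
assert parse_nic_model(None) == "e1000"
```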

Signed-off-by: Dominik Csapak 
---
changes from v1:
* increase net_count
* fix grep usage
* now fallback to 'e1000' instead of 'vmxnet3'

 src/PVE/GuestImport/OVF.pm   | 23 ++-
 src/PVE/Storage/DirPlugin.pm |  2 +-
 src/test/run_ovf_tests.pl|  5 +
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index f0609de..d7e3ce4 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -120,6 +120,12 @@ sub get_ostype {
 return $ostype_ids->{$id} // 'other';
 }
 
+my $allowed_nic_models = [
+'e1000',
+'e1000e',
+'vmxnet3',
+];
+
 sub find_by {
 my ($key, $param) = @_;
 foreach my $resource (@resources) {
@@ -356,7 +362,22 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", 
$controller_id);
 
 $qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 
0;
 
-return {qm => $qm, disks => \@disks};
+my $nic_id = dtmf_name_to_id('Ethernet Adapter');
+my $xpath_find_nics = 
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/ovf:Item[rasd:ResourceType=${nic_id}]";
+my @nic_items = $xpc->findnodes($xpath_find_nics);
+
+my $net = {};
+
+my $net_count = 0;
+for my $item_node (@nic_items) {
+   my $model = $xpc->findvalue('rasd:ResourceSubType', $item_node);
+   $model = lc($model);
+   $model = 'e1000' if ! grep { $_ eq $model } @$allowed_nic_models;
+   $net->{"net${net_count}"} = { model => $model };
+   $net_count++;
+}
+
+return {qm => $qm, disks => \@disks, net => $net};
 }
 
 1;
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index aeffd44..147f6ca 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -293,7 +293,7 @@ sub get_import_metadata {
'create-args' => $res->{qm},
'disks' => $disks,
warnings => $warnings,
-   net => [],
+   net => $res->{net},
 };
 }
 
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index 3b04100..b8fa4b1 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -54,6 +54,11 @@ is($win10noNs->{disks}->[0]->{disk_address}, 'scsi0', 
'single disk vm (no defaul
 is($win10noNs->{disks}->[0]->{backing_file}, 
"$test_manifests/Win10-Liz-disk1.vmdk", 'single disk vm (no default rasd NS) 
has the correct disk backing device');
 is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single disk vm (no default 
rasd NS) has the correct size');
 
+print "testing nics\n";
+is($win2008->{net}->{net0}->{model}, 'e1000', 'win2008 has correct nic model');
+is($win10->{net}->{net0}->{model}, 'e1000e', 'win10 has correct nic model');
+is($win10noNs->{net}->{net0}->{model}, 'e1000e', 'win10 (no default rasd NS) 
has correct nic model');
+
 print "\ntesting vm.conf extraction\n";
 
 is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
-- 
2.39.2






[pve-devel] [PATCH storage v2 05/10] ovf: implement parsing out firmware type

2024-04-19 Thread Dominik Csapak
it seems there is no part of the ovf standard that specifies which type
of bios is used (at least I could not find it). Every ovf/ova I tested
either has no info about it, or has it in a vmware-specific property,
which we parse here.
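The vmware-specific lookup boils down to reading the `vmw:Config`
element with key `firmware`; a simplified, self-contained Python sketch
(element structure reduced to just the relevant nodes):

```python
import xml.etree.ElementTree as ET

VMW = "http://www.vmware.com/schema/ovf"
OVF = f"""<Envelope xmlns:vmw="{VMW}">
  <VirtualSystem><VirtualHardwareSection>
    <vmw:Config vmw:key="firmware" vmw:value="efi"/>
  </VirtualHardwareSection></VirtualSystem>
</Envelope>"""

root = ET.fromstring(OVF)
firmware = "bios"  # default when the property is absent
for cfg in root.iter(f"{{{VMW}}}Config"):
    if cfg.get(f"{{{VMW}}}key") == "firmware":
        firmware = cfg.get(f"{{{VMW}}}value")
# only an explicit 'efi' switches the guest to OVMF, as in the patch
bios = "ovmf" if firmware == "efi" else "seabios"

assert bios == "ovmf"
```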

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
changes from v1:
* typo fix in commit message
 src/PVE/GuestImport/OVF.pm | 5 +
 src/PVE/Storage/DirPlugin.pm   | 5 +
 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf | 1 +
 src/test/run_ovf_tests.pl  | 1 +
 4 files changed, 12 insertions(+)

diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index cf08cb6..767590e 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -229,6 +229,11 @@ sub parse_ovf {
 my $ostype_id = $xpc->findvalue($xpath_find_ostype_id);
 $qm->{ostype} = get_ostype($ostype_id);
 
+# vmware specific firmware config, seems to not be standardized in ovf ?
+my $xpath_find_firmware = 
"/ovf:Envelope/ovf:VirtualSystem/ovf:VirtualHardwareSection/vmw:Config[\@vmw:key=\"firmware\"]/\@vmw:value";
+my $firmware = $xpc->findvalue($xpath_find_firmware) || 'seabios';
+$qm->{bios} = 'ovmf' if $firmware eq 'efi';
+
 # disks metadata is split in four different xml elements:
 # * as an Item node of type DiskDrive in the VirtualHardwareSection
 # * as an Disk node in the DiskSection
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 79a20bf..aeffd44 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -282,6 +282,11 @@ sub get_import_metadata {
};
 }
 
+if (defined($res->{qm}->{bios}) && $res->{qm}->{bios} eq 'ovmf') {
+   $disks->{efidisk0} = 1;
+   push @$warnings, { type => 'efi-state-lost', key => 'bios', value => 
'ovmf' };
+}
+
 return {
type => 'vm',
source => $volname,
diff --git a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf 
b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
index b93540f..10ccaf1 100755
--- a/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
+++ b/src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
@@ -137,6 +137,7 @@
   
   
   
+  
 
   
 
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index c433c9d..e92258d 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -72,5 +72,6 @@ is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default 
rasd NS) memory is
 is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are 
correct');
 # older esxi/ovf standard used 'other' for windows10
 is($win10noNs->{qm}->{ostype}, 'other', 'win10 VM (no default rasd NS) ostype 
is correct');
+is($win10noNs->{qm}->{bios}, 'ovmf', 'win10 VM (no default rasd NS) bios is 
correct');
 
 done_testing();
-- 
2.39.2






[pve-devel] [PATCH storage v2 06/10] ovf: implement rudimentary boot order

2024-04-19 Thread Dominik Csapak
simply add all parsed disks to the boot order in the order we encounter
them (similar to the esxi plugin).
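The resulting config value is simply the joined list of encountered
disk addresses; a Python sketch of the assembly:

```python
# Mirrors the patch: collect disk addresses in encounter order and only
# emit a 'boot' option when at least one disk was found (v2 change).
def boot_option(disk_addresses):
    if not disk_addresses:
        return None
    return "order=" + ";".join(disk_addresses)

assert boot_option(["scsi0", "scsi1"]) == "order=scsi0;scsi1"
assert boot_option([]) is None
```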

Signed-off-by: Dominik Csapak 
---
changes from v1:
* renamed variable to boot_order
* fixed bracket issue (was a rebase problem)
* only add bootorder if list is not empty

 src/PVE/GuestImport/OVF.pm | 6 +-
 src/test/run_ovf_tests.pl  | 3 +++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
index 767590e..f0609de 100644
--- a/src/PVE/GuestImport/OVF.pm
+++ b/src/PVE/GuestImport/OVF.pm
@@ -245,6 +245,8 @@ sub parse_ovf {
 # when all the nodes has been found out, we copy the relevant information 
to
 # a $pve_disk hash ref, which we push to @disks;
 
+my $boot_order = [];
+
 foreach my $item_node (@disk_items) {
 
my $disk_node;
@@ -349,9 +351,11 @@ ovf:Item[rasd:InstanceID='%s']/rasd:ResourceType", 
$controller_id);
};
$pve_disk->{virtual_size} = $virtual_size if defined($virtual_size);
push @disks, $pve_disk;
-
+   push @$boot_order, $pve_disk_address;
 }
 
+$qm->{boot} = "order=" . join(';', @$boot_order) if scalar(@$boot_order) > 
0;
+
 return {qm => $qm, disks => \@disks};
 }
 
diff --git a/src/test/run_ovf_tests.pl b/src/test/run_ovf_tests.pl
index e92258d..3b04100 100755
--- a/src/test/run_ovf_tests.pl
+++ b/src/test/run_ovf_tests.pl
@@ -56,17 +56,20 @@ is($win10noNs->{disks}->[0]->{virtual_size}, 2048, 'single 
disk vm (no default r
 
 print "\ntesting vm.conf extraction\n";
 
+is($win2008->{qm}->{boot}, 'order=scsi0;scsi1', 'win2008 VM boot is correct');
 is($win2008->{qm}->{name}, 'Win2008-R2x64', 'win2008 VM name is correct');
 is($win2008->{qm}->{memory}, '2048', 'win2008 VM memory is correct');
 is($win2008->{qm}->{cores}, '1', 'win2008 VM cores are correct');
 is($win2008->{qm}->{ostype}, 'win7', 'win2008 VM ostype is correct');
 
+is($win10->{qm}->{boot}, 'order=scsi0', 'win10 VM boot is correct');
 is($win10->{qm}->{name}, 'Win10-Liz', 'win10 VM name is correct');
 is($win10->{qm}->{memory}, '6144', 'win10 VM memory is correct');
 is($win10->{qm}->{cores}, '4', 'win10 VM cores are correct');
 # older esxi/ovf standard used 'other' for windows10
 is($win10->{qm}->{ostype}, 'other', 'win10 VM ostype is correct');
 
+is($win10noNs->{qm}->{boot}, 'order=scsi0', 'win10 VM (no default rasd NS) 
boot is correct');
 is($win10noNs->{qm}->{name}, 'Win10-Liz', 'win10 VM (no default rasd NS) name 
is correct');
 is($win10noNs->{qm}->{memory}, '6144', 'win10 VM (no default rasd NS) memory 
is correct');
 is($win10noNs->{qm}->{cores}, '4', 'win10 VM (no default rasd NS) cores are 
correct');
-- 
2.39.2






[pve-devel] [PATCH storage v2 10/10] add 'import' content type to 'check_volume_access'

2024-04-19 Thread Dominik Csapak
in the same branch as 'vztmpl' and 'iso'
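A sketch of the resulting permission branching (privilege names as in
pve-storage; the function shape is illustrative only):

```python
def required_privs(vtype):
    # 'import' volumes now take the same branch as ISOs and container
    # templates: read-level access to the storage is enough.
    if vtype in ("iso", "vztmpl", "import"):
        return ("any", ["Datastore.AllocateSpace", "Datastore.Audit"])
    return ("all", ["Datastore.Allocate"])

assert required_privs("import") == required_privs("iso")
assert "Datastore.Audit" in required_privs("import")[1]
```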

Signed-off-by: Dominik Csapak 
---
new in v2
 src/PVE/Storage.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index c628ebd..7e70df2 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -540,7 +540,7 @@ sub check_volume_access {
 
return if $rpcenv->check($user, "/storage/$sid", 
['Datastore.Allocate'], 1);
 
-   if ($vtype eq 'iso' || $vtype eq 'vztmpl') {
+   if ($vtype eq 'iso' || $vtype eq 'vztmpl' || $vtype eq 'import') {
# require at least read access to storage, (custom) templates/ISOs 
could be sensitive
$rpcenv->check_any($user, "/storage/$sid", 
['Datastore.AllocateSpace', 'Datastore.Audit']);
} elsif (defined($ownervm) && defined($vmid) && ($ownervm == $vmid)) {
-- 
2.39.2






[pve-devel] [PATCH qemu-server v2 2/4] use OVF from Storage

2024-04-19 Thread Dominik Csapak
and delete it here (incl. tests; they live in pve-storage now).

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
changes from v1:
* use new module location (that makes the list now ordered as well :P )

 PVE/CLI/qm.pm |   4 +-
 PVE/QemuServer/Makefile   |   1 -
 PVE/QemuServer/OVF.pm | 242 --
 test/Makefile |   5 +-
 test/ovf_manifests/Win10-Liz-disk1.vmdk   | Bin 65536 -> 0 bytes
 test/ovf_manifests/Win10-Liz.ovf  | 142 --
 .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 --
 test/ovf_manifests/Win_2008_R2_two-disks.ovf  | 145 ---
 test/ovf_manifests/disk1.vmdk | Bin 65536 -> 0 bytes
 test/ovf_manifests/disk2.vmdk | Bin 65536 -> 0 bytes
 test/run_ovf_tests.pl |  71 -
 11 files changed, 3 insertions(+), 749 deletions(-)
 delete mode 100644 PVE/QemuServer/OVF.pm
 delete mode 100644 test/ovf_manifests/Win10-Liz-disk1.vmdk
 delete mode 100755 test/ovf_manifests/Win10-Liz.ovf
 delete mode 100755 test/ovf_manifests/Win10-Liz_no_default_ns.ovf
 delete mode 100755 test/ovf_manifests/Win_2008_R2_two-disks.ovf
 delete mode 100644 test/ovf_manifests/disk1.vmdk
 delete mode 100644 test/ovf_manifests/disk2.vmdk
 delete mode 100755 test/run_ovf_tests.pl

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b105830f..2b85d072 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -28,13 +28,13 @@ use PVE::Tools qw(extract_param file_get_contents);
 
 use PVE::API2::Qemu::Agent;
 use PVE::API2::Qemu;
+use PVE::GuestImport::OVF;
 use PVE::QemuConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Agent qw(agent_available);
 use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::Monitor qw(mon_cmd);
-use PVE::QemuServer::OVF;
 use PVE::QemuServer;
 
 use PVE::CLIHandler;
@@ -729,7 +729,7 @@ __PACKAGE__->register_method ({
my $storecfg = PVE::Storage::config();
PVE::Storage::storage_check_enabled($storecfg, $storeid);
 
-   my $parsed = PVE::QemuServer::OVF::parse_ovf($ovf_file);
+   my $parsed = PVE::GuestImport::OVF::parse_ovf($ovf_file);
 
if ($dryrun) {
print to_json($parsed, { pretty => 1, canonical => 1});
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index ac26e56f..89d12091 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -2,7 +2,6 @@ SOURCES=PCI.pm  \
USB.pm  \
Memory.pm   \
ImportDisk.pm   \
-   OVF.pm  \
Cloudinit.pm\
Agent.pm\
Helpers.pm  \
diff --git a/PVE/QemuServer/OVF.pm b/PVE/QemuServer/OVF.pm
deleted file mode 100644
index b97b0520..
--- a/PVE/QemuServer/OVF.pm
+++ /dev/null
@@ -1,242 +0,0 @@
-# Open Virtualization Format import routines
-# https://www.dmtf.org/standards/ovf
-package PVE::QemuServer::OVF;
-
-use strict;
-use warnings;
-
-use XML::LibXML;
-use File::Spec;
-use File::Basename;
-use Data::Dumper;
-use Cwd 'realpath';
-
-use PVE::Tools;
-use PVE::Storage;
-
-# map OVF resources types to descriptive strings
-# this will allow us to explore the xml tree without using magic numbers
-# 
http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
-my @resources = (
-{ id => 1, dtmf_name => 'Other' },
-{ id => 2, dtmf_name => 'Computer System' },
-{ id => 3, dtmf_name => 'Processor' },
-{ id => 4, dtmf_name => 'Memory' },
-{ id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
-{ id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
-{ id => 7, dtmf_name => 'FC HBA' },
-{ id => 8, dtmf_name => 'iSCSI HBA' },
-{ id => 9, dtmf_name => 'IB HCA' },
-{ id => 10, dtmf_name => 'Ethernet Adapter' },
-{ id => 11, dtmf_name => 'Other Network Adapter' },
-{ id => 12, dtmf_name => 'I/O Slot' },
-{ id => 13, dtmf_name => 'I/O Device' },
-{ id => 14, dtmf_name => 'Floppy Drive' },
-{ id => 15, dtmf_name => 'CD Drive' },
-{ id => 16, dtmf_name => 'DVD drive' },
-{ id => 17, dtmf_name => 'Disk Drive' },
-{ id => 18, dtmf_name => 'Tape Drive' },
-{ id => 19, dtmf_name => 'Storage Extent' },
-{ id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
-{ id => 21, dtmf_name => 'Serial port' },
-{ id => 22, dtmf_name => 'Parallel port' },
-{ id => 23, dtmf_name => 'USB Controller' },
-{ id => 24, dtmf_name => 'Graphics controller' },
-{ id => 25, dtmf_name => 'IEEE 1394 Controller' },
-{ id => 26, dtmf_name => 'Partitionable Unit' },
-{ id => 27, dtmf_name => 'Base Partitionable Unit' },
-{ id => 28, dtmf_name => 'Power' },
-{ id => 29, dtmf_name => 'Cooling Capacity' },
-{ id => 30, dtmf_name => 'Ethernet Switch Port' },
-{ id => 31, dtmf_name => 'Logical Disk' },
-{ id => 32, dtmf_name => 'Storage 

[pve-devel] [PATCH storage v2 01/10] copy OVF.pm from qemu-server

2024-04-19 Thread Dominik Csapak
copies the OVF.pm and relevant ovf tests from qemu-server.
We need it here, it already uses PVE::Storage, and since there is no
intermediary package/repository we could put it in, this seems a
fitting place.

Put it in a new GuestImport module.

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
changes from v1:
* put into GuestImport instead of Storage
 src/PVE/GuestImport/Makefile  |   3 +
 src/PVE/GuestImport/OVF.pm| 242 ++
 src/PVE/Makefile  |   1 +
 src/PVE/Storage/Makefile  |   1 +
 src/test/Makefile |   5 +-
 src/test/ovf_manifests/Win10-Liz-disk1.vmdk   | Bin 0 -> 65536 bytes
 src/test/ovf_manifests/Win10-Liz.ovf  | 142 ++
 .../ovf_manifests/Win10-Liz_no_default_ns.ovf | 142 ++
 .../ovf_manifests/Win_2008_R2_two-disks.ovf   | 145 +++
 src/test/ovf_manifests/disk1.vmdk | Bin 0 -> 65536 bytes
 src/test/ovf_manifests/disk2.vmdk | Bin 0 -> 65536 bytes
 src/test/run_ovf_tests.pl |  71 +
 12 files changed, 751 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/GuestImport/Makefile
 create mode 100644 src/PVE/GuestImport/OVF.pm
 create mode 100644 src/test/ovf_manifests/Win10-Liz-disk1.vmdk
 create mode 100755 src/test/ovf_manifests/Win10-Liz.ovf
 create mode 100755 src/test/ovf_manifests/Win10-Liz_no_default_ns.ovf
 create mode 100755 src/test/ovf_manifests/Win_2008_R2_two-disks.ovf
 create mode 100644 src/test/ovf_manifests/disk1.vmdk
 create mode 100644 src/test/ovf_manifests/disk2.vmdk
 create mode 100755 src/test/run_ovf_tests.pl

diff --git a/src/PVE/GuestImport/Makefile b/src/PVE/GuestImport/Makefile
new file mode 100644
index 000..5948384
--- /dev/null
+++ b/src/PVE/GuestImport/Makefile
@@ -0,0 +1,3 @@
+.PHONY: install
+install:
+   install -D -m 0644 OVF.pm ${DESTDIR}${PERLDIR}/PVE/GuestImport/OVF.pm
diff --git a/src/PVE/GuestImport/OVF.pm b/src/PVE/GuestImport/OVF.pm
new file mode 100644
index 000..055ebf5
--- /dev/null
+++ b/src/PVE/GuestImport/OVF.pm
@@ -0,0 +1,242 @@
+# Open Virtualization Format import routines
+# https://www.dmtf.org/standards/ovf
+package PVE::GuestImport::OVF;
+
+use strict;
+use warnings;
+
+use XML::LibXML;
+use File::Spec;
+use File::Basename;
+use Data::Dumper;
+use Cwd 'realpath';
+
+use PVE::Tools;
+use PVE::Storage;
+
+# map OVF resources types to descriptive strings
+# this will allow us to explore the xml tree without using magic numbers
+# http://schemas.dmtf.org/wbem/cim-html/2/CIM_ResourceAllocationSettingData.html
+my @resources = (
+{ id => 1, dtmf_name => 'Other' },
+{ id => 2, dtmf_name => 'Computer System' },
+{ id => 3, dtmf_name => 'Processor' },
+{ id => 4, dtmf_name => 'Memory' },
+{ id => 5, dtmf_name => 'IDE Controller', pve_type => 'ide' },
+{ id => 6, dtmf_name => 'Parallel SCSI HBA', pve_type => 'scsi' },
+{ id => 7, dtmf_name => 'FC HBA' },
+{ id => 8, dtmf_name => 'iSCSI HBA' },
+{ id => 9, dtmf_name => 'IB HCA' },
+{ id => 10, dtmf_name => 'Ethernet Adapter' },
+{ id => 11, dtmf_name => 'Other Network Adapter' },
+{ id => 12, dtmf_name => 'I/O Slot' },
+{ id => 13, dtmf_name => 'I/O Device' },
+{ id => 14, dtmf_name => 'Floppy Drive' },
+{ id => 15, dtmf_name => 'CD Drive' },
+{ id => 16, dtmf_name => 'DVD drive' },
+{ id => 17, dtmf_name => 'Disk Drive' },
+{ id => 18, dtmf_name => 'Tape Drive' },
+{ id => 19, dtmf_name => 'Storage Extent' },
+{ id => 20, dtmf_name => 'Other storage device', pve_type => 'sata'},
+{ id => 21, dtmf_name => 'Serial port' },
+{ id => 22, dtmf_name => 'Parallel port' },
+{ id => 23, dtmf_name => 'USB Controller' },
+{ id => 24, dtmf_name => 'Graphics controller' },
+{ id => 25, dtmf_name => 'IEEE 1394 Controller' },
+{ id => 26, dtmf_name => 'Partitionable Unit' },
+{ id => 27, dtmf_name => 'Base Partitionable Unit' },
+{ id => 28, dtmf_name => 'Power' },
+{ id => 29, dtmf_name => 'Cooling Capacity' },
+{ id => 30, dtmf_name => 'Ethernet Switch Port' },
+{ id => 31, dtmf_name => 'Logical Disk' },
+{ id => 32, dtmf_name => 'Storage Volume' },
+{ id => 33, dtmf_name => 'Ethernet Connection' },
+{ id => 34, dtmf_name => 'DMTF reserved' },
+{ id => 35, dtmf_name => 'Vendor Reserved'}
+);
+
+sub find_by {
+my ($key, $param) = @_;
+foreach my $resource (@resources) {
+   if ($resource->{$key} eq $param) {
+   return ($resource);
+   }
+}
+return;
+}
+
+sub dtmf_name_to_id {
+my ($dtmf_name) = @_;
+my $found = find_by('dtmf_name', $dtmf_name);
+if ($found) {
+   return $found->{id};
+} else {
+   return;
+}
+}
+
+sub id_to_pve {
+my ($id) = @_;
+my $resource = find_by('id', $id);
+if ($resource) {
+   return $resource->{pve_type};
+} else {
+   return;
+}
+}
+
+# 

[pve-devel] [PATCH qemu-server v2 3/4] api: create: implement extracting disks when needed for import-from

2024-04-19 Thread Dominik Csapak
when 'import-from' contains a disk image that needs extraction
(currently only from an 'ova' archive), do that in 'create_disks'
and overwrite the '$source' volid.

Collect the names into a 'delete_sources' list, which we use later to
clean them up again (either when we're finished with importing or in an
error case).
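
The collect-then-clean-up flow described above, reduced to a
self-contained sketch (all helper names here are illustrative stand-ins,
not the actual qemu-server API):

```perl
use strict;
use warnings;

# stand-in helpers, purely illustrative
sub needs_extraction { return $_[0] =~ m!\.ova/! }
sub extract { my ($s) = @_; $s =~ s!^.*/!extracted-!; return $s }
sub import_disk { }

my @sources = ('store:import/a.ova/disk-1.vmdk', 'store:100/vm-100-disk-0.qcow2');
my $delete_sources = [];

eval {
    for my $source (@sources) {
        if (needs_extraction($source)) {
            $source = extract($source);     # overwrite $source with the extracted volid
            push @$delete_sources, $source; # remember it for later cleanup
        }
        import_disk($source);
    }
};
my $err = $@;
# cleanup runs both on success and on error, then any error is re-raised
print "cleaning up $_\n" for @$delete_sources;
die $err if $err;
```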

Signed-off-by: Dominik Csapak 
---
changes from v1:
* put extraction into branch for managed volids
* adapt to new module location
* add checks for the import storage if it supports 'images'
  (otherwise we cannot extract there)
* add cleanup to other 'create_disks' method

 PVE/API2/Qemu.pm  | 44 ++-
 PVE/QemuServer.pm |  5 -
 PVE/QemuServer/Helpers.pm | 10 +
 3 files changed, 48 insertions(+), 11 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a349c8c..d32967dc 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -24,6 +24,7 @@ use PVE::JSONSchema qw(get_standard_option);
 use PVE::RESTHandler;
 use PVE::ReplicationConfig;
 use PVE::GuestHelpers qw(assert_tag_permissions);
+use PVE::GuestImport;
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::QemuServer::Cloudinit;
@@ -159,10 +160,19 @@ my $check_storage_access = sub {
 
if (my $src_image = $drive->{'import-from'}) {
my $src_vmid;
-	if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
-	    (my $vtype, undef, $src_vmid) = PVE::Storage::parse_volname($storecfg, $src_image);
-	    raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - not an image" })
-		if $vtype ne 'images';
+	if (my ($storeid, $volname) = PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
+	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+	    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+	    (my $vtype, undef, $src_vmid) = $plugin->parse_volname($volname);
+
+	    raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - needs to be 'images' or 'import'" })
+		if $vtype ne 'images' && $vtype ne 'import';
+
+	    if (PVE::GuestImport::copy_needs_extraction($src_image)) {
+		raise_param_exc({ $ds => "$src_image is not on a storage with 'images' content type."})
+		    if !$scfg->{content}->{images};
+		$rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
+	    }
}
 
	if ($src_vmid) { # might be actively used by VM and will be copied via clone_disk()
@@ -335,6 +345,7 @@ my sub create_disks : prototype($$) {
 my $res = {};
 
 my $live_import_mapping = {};
+my $delete_sources = [];
 
 my $code = sub {
my ($ds, $disk) = @_;
@@ -392,6 +403,12 @@ my sub create_disks : prototype($$) {
$needs_creation = $live_import;
 
	if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed volume
+	    if (PVE::GuestImport::copy_needs_extraction($source)) { # needs extraction beforehand
+		print "extracting $source\n";
+		$source = PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
+   print "finished extracting to $source\n";
+   push @$delete_sources, $source;
+   }
if ($live_import && $ds ne 'efidisk0') {
my $path = PVE::Storage::path($storecfg, $source)
or die "failed to get a path for '$source'\n";
@@ -514,13 +531,14 @@ my sub create_disks : prototype($$) {
eval { PVE::Storage::vdisk_free($storecfg, $volid); };
warn $@ if $@;
}
+   PVE::QemuServer::Helpers::cleanup_extracted_images($delete_sources);
die $err;
 }
 
 # don't return empty import mappings
 $live_import_mapping = undef if !%$live_import_mapping;
 
-return ($vollist, $res, $live_import_mapping);
+return ($vollist, $res, $live_import_mapping, $delete_sources);
 };
 
 my $check_cpu_model_access = sub {
@@ -1079,6 +1097,7 @@ __PACKAGE__->register_method({
 
my $createfn = sub {
my $live_import_mapping = {};
+   my $delete_sources = [];
 
# ensure no old replication state are exists
PVE::ReplicationState::delete_guest_states($vmid);
@@ -1096,7 +1115,7 @@ __PACKAGE__->register_method({
 
my $vollist = [];
eval {
-	    ($vollist, my $created_opts, $live_import_mapping) = create_disks(
+	    ($vollist, my $created_opts, $live_import_mapping, $delete_sources) = create_disks(
$rpcenv,
$authuser,
$conf,
@@ -1149,6 +1168,7 @@ __PACKAGE__->register_method({
eval { 

[pve-devel] [PATCH manager v2 6/6] ui: guest import: change icon/text for non-esxi import storage

2024-04-19 Thread Dominik Csapak
since 'virtual guests' only make sense for a hypervisor, not e.g. a
directory for OVAs

also change the icon from 'desktop' to 'cloud-download' in the
non-esxi case

Signed-off-by: Dominik Csapak 
---
new in v2
 www/manager6/storage/Browser.js | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/www/manager6/storage/Browser.js b/www/manager6/storage/Browser.js
index 934ce706..01643c29 100644
--- a/www/manager6/storage/Browser.js
+++ b/www/manager6/storage/Browser.js
@@ -140,8 +140,10 @@ Ext.define('PVE.storage.Browser', {
};
me.items.push({
xtype: 'pveStorageContentView',
-   title: gettext('Virtual Guests'),
-   iconCls: 'fa fa-desktop',
+   // each gettext needs to be in a separate line
+   title: isEsxi ? gettext('Virtual Guests')
+   : gettext('Import'),
+   iconCls: isEsxi ? 'fa fa-desktop' : 'fa fa-cloud-download',
itemId: 'contentImport',
content: 'import',
		useCustomRemoveButton: isEsxi, // hide default remove button for esxi
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v2 1/4] api: delete unused OVF.pm

2024-04-19 Thread Dominik Csapak
the api part was never in use by anything

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
no changes in v2

 PVE/API2/Qemu/Makefile |  2 +-
 PVE/API2/Qemu/OVF.pm   | 53 --
 2 files changed, 1 insertion(+), 54 deletions(-)
 delete mode 100644 PVE/API2/Qemu/OVF.pm

diff --git a/PVE/API2/Qemu/Makefile b/PVE/API2/Qemu/Makefile
index bdd4762b..5d4abda6 100644
--- a/PVE/API2/Qemu/Makefile
+++ b/PVE/API2/Qemu/Makefile
@@ -1,4 +1,4 @@
-SOURCES=Agent.pm CPU.pm Machine.pm OVF.pm
+SOURCES=Agent.pm CPU.pm Machine.pm
 
 .PHONY: install
 install:
diff --git a/PVE/API2/Qemu/OVF.pm b/PVE/API2/Qemu/OVF.pm
deleted file mode 100644
index cc0ef2da..
--- a/PVE/API2/Qemu/OVF.pm
+++ /dev/null
@@ -1,53 +0,0 @@
-package PVE::API2::Qemu::OVF;
-
-use strict;
-use warnings;
-
-use PVE::JSONSchema qw(get_standard_option);
-use PVE::QemuServer::OVF;
-use PVE::RESTHandler;
-
-use base qw(PVE::RESTHandler);
-
-__PACKAGE__->register_method ({
-name => 'readovf',
-path => '',
-method => 'GET',
-proxyto => 'node',
-description => "Read an .ovf manifest.",
-protected => 1,
-parameters => {
-   additionalProperties => 0,
-   properties => {
-   node => get_standard_option('pve-node'),
-   manifest => {
-   description => "Path to .ovf manifest.",
-   type => 'string',
-   },
-   },
-},
-returns => {
-   type => 'object',
-   additionalProperties => 1,
-   properties => PVE::QemuServer::json_ovf_properties(),
-   description => "VM config according to .ovf manifest.",
-},
-code => sub {
-   my ($param) = @_;
-
-   my $manifest = $param->{manifest};
-   die "check for file $manifest failed - $!\n" if !-f $manifest;
-
-   my $parsed = PVE::QemuServer::OVF::parse_ovf($manifest);
-   my $result;
-   $result->{cores} = $parsed->{qm}->{cores};
-   $result->{name} =  $parsed->{qm}->{name};
-   $result->{memory} = $parsed->{qm}->{memory};
-   my $disks = $parsed->{disks};
-   for my $disk (@$disks) {
-   $result->{$disk->{disk_address}} = $disk->{backing_file};
-   }
-   return $result;
-}});
-
-1;
-- 
2.39.2






[pve-devel] [PATCH storage v2 09/10] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs

2024-04-19 Thread Dominik Csapak
and reuse the DirPlugin implementation

Signed-off-by: Dominik Csapak 
Reviewed-by: Fiona Ebner 
---
changes from v1:
* added glusterfs too
* fixed typo in commit subject

 src/PVE/Storage/BTRFSPlugin.pm | 5 +
 src/PVE/Storage/CIFSPlugin.pm  | 6 +-
 src/PVE/Storage/CephFSPlugin.pm| 6 +-
 src/PVE/Storage/GlusterfsPlugin.pm | 6 +-
 src/PVE/Storage/NFSPlugin.pm   | 6 +-
 5 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 42815cb..b7e3f82 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -40,6 +40,7 @@ sub plugindata {
backup => 1,
snippets => 1,
none => 1,
+   import => 1,
},
{ images => 1, rootdir => 1 },
],
@@ -930,4 +931,8 @@ sub volume_import {
 return "$storeid:$volname";
 }
 
+sub get_import_metadata {
+return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 2184471..475065a 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -99,7 +99,7 @@ sub type {
 sub plugindata {
 return {
content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
-  backup => 1, snippets => 1}, { images => 1 }],
+  backup => 1, snippets => 1, import => 1}, { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
 };
 }
@@ -314,4 +314,8 @@ sub update_volume_attribute {
 return PVE::Storage::DirPlugin::update_volume_attribute(@_);
 }
 
+sub get_import_metadata {
+return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
index 8aad518..36c64ea 100644
--- a/src/PVE/Storage/CephFSPlugin.pm
+++ b/src/PVE/Storage/CephFSPlugin.pm
@@ -116,7 +116,7 @@ sub type {
 
 sub plugindata {
 return {
-   content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+   content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
 { backup => 1 }],
 };
 }
@@ -261,4 +261,8 @@ sub update_volume_attribute {
 return PVE::Storage::DirPlugin::update_volume_attribute(@_);
 }
 
+sub get_import_metadata {
+return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
diff --git a/src/PVE/Storage/GlusterfsPlugin.pm b/src/PVE/Storage/GlusterfsPlugin.pm
index 2b7f9e1..9d17180 100644
--- a/src/PVE/Storage/GlusterfsPlugin.pm
+++ b/src/PVE/Storage/GlusterfsPlugin.pm
@@ -97,7 +97,7 @@ sub type {
 
 sub plugindata {
 return {
-   content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1},
+   content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1},
 { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
 };
@@ -352,4 +352,8 @@ sub check_connection {
 return defined($server) ? 1 : 0;
 }
 
+sub get_import_metadata {
+return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
index f2e4c0d..72e9c6d 100644
--- a/src/PVE/Storage/NFSPlugin.pm
+++ b/src/PVE/Storage/NFSPlugin.pm
@@ -53,7 +53,7 @@ sub type {
 
 sub plugindata {
 return {
-   content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1 },
+   content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
 { images => 1 }],
format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
 };
@@ -223,4 +223,8 @@ sub update_volume_attribute {
 return PVE::Storage::DirPlugin::update_volume_attribute(@_);
 }
 
+sub get_import_metadata {
+return PVE::Storage::DirPlugin::get_import_metadata(@_);
+}
+
 1;
-- 
2.39.2






[pve-devel] [PATCH manager v2 2/6] ui: guest import: add ova-needs-extracting warning text

2024-04-19 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
changes from v1:
* adapted text for the new 'extract-storage'
 www/manager6/window/GuestImport.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/window/GuestImport.js b/www/manager6/window/GuestImport.js
index 944d275b..2c8bc189 100644
--- a/www/manager6/window/GuestImport.js
+++ b/www/manager6/window/GuestImport.js
@@ -930,6 +930,7 @@ Ext.define('PVE.window.GuestImport', {
	    gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
	    '<a href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
	),
+	'ova-needs-extracting': gettext('Importing from an OVA requires extra space while extracting the contained disks into the import or selected storage.'),
};
 let message = warningsCatalogue[w.type];
if (!w.type || !message) {
-- 
2.39.2






[pve-devel] [PATCH storage v2 03/10] plugin: dir: handle ova files for import

2024-04-19 Thread Dominik Csapak
since we want to handle ova files (which are only ovf+images bundled in
a tar file) for import, add code that handles that.

we introduce a valid volname for files contained in ovas like this:

 storage:import/archive.ova/disk-1.vmdk

by basically treating the last part of the path as the name for the
contained disk we want.

in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
(we cannot use something like 'ova+vmdk' without extending the 'format'
parsing to that for all storages/formats. This is because it runs
through a verify format check at least once)
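
The volname scheme above can be shown with a small sketch of the split
the parsing code performs (the regex here is a simplification of the
actual parse_volname logic, which also restricts the inner name to a
safe character class):

```perl
use strict;
use warnings;

# Simplified sketch: split 'import/archive.ova/disk-1.vmdk' into the
# containing archive and the inner disk file we want to extract:
sub split_ova_volname {
    my ($volname) = @_;
    if ($volname =~ m!^(import/.+\.ova)/([^/]+)$!) {
        return ($1, $2); # (archive volname, inner file)
    }
    return; # a plain volume, no extraction needed
}

my ($archive, $inner) = split_ova_volname('import/archive.ova/disk-1.vmdk');
print "$archive $inner\n"; # prints "import/archive.ova disk-1.vmdk"
```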

we then provide 3 functions to use for that:

* copy_needs_extraction: determines from the given volid (like above) if
  that needs extraction to copy it, currently only 'import' vtype + a
  volid with the above format returns true

* extract_disk_from_import_file: this actually extracts the file from
  the archive. Currently only ova is supported, so the extraction with
  'tar' is hardcoded, but again we can easily extend/modify that should
  we need to.

  we currently extract into either the import storage or a given
  target storage in the images directory, so if the cleanup does not
  happen, the user can still see and interact with the image via
  api/cli/gui

* cleanup_extracted_image: intended to clean up the extracted images from
  above

we have to modify the `parse_ovf` a bit to handle the missing disk
images, and we parse the size out of the ovf part (since this is
informal only, it should be no problem if we cannot parse it sometimes)

Signed-off-by: Dominik Csapak 
---
changes from v1:
* put code into GuestImport module
* check for extraction need with regex instead of relying on
  fileformat/vtype
* rework entire extraction code + cleanup, now allows to give a target
  storage, extract into 'images' dir of a file based storage, so
  we can return a valid volid for that storage instead of an absolute
  path

 src/PVE/API2/Storage/Status.pm |  1 +
 src/PVE/GuestImport.pm | 99 ++
 src/PVE/GuestImport/OVF.pm | 53 +++---
 src/PVE/Makefile   |  1 +
 src/PVE/Storage/DirPlugin.pm   | 15 +-
 src/PVE/Storage/Plugin.pm  |  5 ++
 src/test/parse_volname_test.pm | 15 ++
 7 files changed, 180 insertions(+), 9 deletions(-)
 create mode 100644 src/PVE/GuestImport.pm

diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index dc6cc69..acde730 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -749,6 +749,7 @@ __PACKAGE__->register_method({
'efi-state-lost',
'guest-is-running',
'nvme-unsupported',
+   'ova-needs-extracting',
'ovmf-with-lsi-unsupported',
'serial-port-socket-only',
],
diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
new file mode 100644
index 000..1c4065c
--- /dev/null
+++ b/src/PVE/GuestImport.pm
@@ -0,0 +1,99 @@
+package PVE::GuestImport;
+
+use strict;
+use warnings;
+
+use File::Path;
+
+use PVE::Storage;
+use PVE::Tools qw(run_command);
+
+sub copy_needs_extraction {
+my ($volid) = @_;
+my $cfg = PVE::Storage::config();
+my ($vtype, $name) = PVE::Storage::parse_volname($cfg, $volid);
+
+# only volumes inside ovas need extraction
+return $vtype eq 'import' && $name =~ m!\.ova/.+$!;
+}
+
+sub extract_disk_from_import_file {
+my ($volid, $vmid, $target_storeid) = @_;
+
+my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+$target_storeid //= $source_storeid;
+my $cfg = PVE::Storage::config();
+my $source_scfg = PVE::Storage::storage_config($cfg, $source_storeid);
+my $source_plugin = PVE::Storage::Plugin->lookup($source_scfg->{type});
+
+my ($vtype, $name, undef, undef, undef, undef, $fmt) =
+   $source_plugin->parse_volname($volname);
+
+die "only files with content type 'import' can be extracted\n"
+   if $vtype ne 'import' || !defined($fmt);
+
+# extract the inner file from the name
+my $archive;
+my $inner_file;
+if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
+   $archive = "import/$1";
+   $inner_file = $2;
+} else {
+   die "cannot extract $volid - invalid volname $volname\n";
+}
+
+my ($ova_path) = $source_plugin->path($source_scfg, $archive, $source_storeid);
+
+my $target_scfg = PVE::Storage::storage_config($cfg, $target_storeid);
+my $target_plugin = PVE::Storage::Plugin->lookup($target_scfg->{type});
+
+my $destdir = $target_plugin->get_subdir($target_scfg, 'images');
+
+my $pid = $$;
+$destdir .= "/tmp_${pid}_${vmid}";
+mkpath $destdir;
+
+($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
+
+my $source_path = "$destdir/$inner_file";
+my $target_path;
+my 
