[pve-devel] [PATCH container] fix #6573: allow userns creation when nesting is enabled

2025-07-29 Thread Wolfgang Bumiller
This is another difference with the apparmor 4.0 userspace. We need to explicitly enable user namespaces in the generated profile - at least when nesting is enabled. Signed-off-by: Wolfgang Bumiller --- src/PVE/LXC.pm | 3 +++ 1 file changed, 3 insertions(+) diff --git a/src/PVE/LXC.pm b/src/PV

[pve-devel] [PATCH pve-manager v1 2/2] fix #4929: ui: add hidden storage support for iSCSI

2025-07-29 Thread nansen.su
This patch implements the UI components for the hidden storage feature: - Add hidden checkbox to iSCSI storage edit dialog - Filter hidden storages from resource tree display - Include hidden property in storage API response - Show hidden status in storage management view This is part

[pve-devel] [PATCH pve-storage v1 1/2] fix #4929: iscsi: add hidden option to hide storage from UI

2025-07-29 Thread nansen.su
This patch adds a 'hidden' boolean option to the iSCSI storage plugin. When enabled, it allows hiding the storage from the resource tree in the web interface while keeping it functional for actual storage operations. This is part 1 of 2 patches that together implement hidden storage supp

[pve-devel] applied: [PATCH ha-manager] watchdog-mux: Restore if guard for watchdog updates

2025-07-29 Thread Thomas Lamprecht
On Wed, 23 Jul 2025 12:54:37 +0200, Maximiliano Sandoval wrote: > In commit c26b474 the if condition was removed in favor of the if guard > on the higher level. However, this introduced a functional change, the > watchdog now updates immediately after update_watchdog is set to 0, > potentially causin

[pve-devel] applied: [PATCH pve-manager v2 2/2] network-interface-pinning: allow arbitrary names

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 19:16:45 +0200, Stefan Hanreich wrote: > With the changes to physical interface detection in pve-common and > pve-manager, it is now possible to use arbitrary names for physical > interfaces in our network stack. This allows the removal of the > existing, hardcoded, prefixes. >

[pve-devel] applied: [PATCH pve-common v2 1/1] inotify/interfaces: use ip link for detecting physical interfaces

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 19:16:43 +0200, Stefan Hanreich wrote: > The parser for /e/n/i relied on PHYSICAL_NIC_RE for detecting physical > interfaces. In order to allow arbitrary interface names for pinning > physical interfaces, switch over to detecting physical interfaces via > 'ip link' instead. > >

[pve-devel] applied: [PATCH pve-manager v2 1/2] pvestatd: pull metric: use ip link to detect physical interfaces

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 19:16:44 +0200, Stefan Hanreich wrote: > pve-common now allows arbitrary names for physical interfaces, without > being restricted by PHYSICAL_NIC_RE. In order to detect physical > interfaces, pvestatd now needs to query 'ip link' for the type of an > interface instead of relyin

[pve-devel] applied: [PATCH docs 1/1] pveproxy, pvedaemon: document MAX_WORKERS setting

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 17:50:58 +0200, Friedrich Weber wrote: > For pveproxy, add it to the description of settings that can be > adjusted in /etc/default/pveproxy. > > For pvedaemon, this is currently the only setting that can be adjusted > in /etc/default/pvedaemon. > > > [...] Applied, thanks!

[pve-devel] applied: [PATCH http-server 1/1] api server: proxy config: read MAX_WORKERS integer key

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 17:50:55 +0200, Friedrich Weber wrote: > Read the MAX_WORKERS value in /etc/default/. If it is not > an integer between 0 and 128, ignore and warn. > > The lower limit was chosen because at least one worker process must > exist. The upper limit was chosen because more than 127

[pve-devel] applied: [PATCH manager 1/2] partially fix #5392: pveproxy: make number of workers configurable

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 17:50:56 +0200, Friedrich Weber wrote: > The number of pveproxy worker processes is currently hardcoded to 3. > This may not be enough for automation-heavy workloads that trigger a > lot of API requests that are synchronously handled by pveproxy. > > Hence, allow specifying MAX

[pve-devel] applied: [PATCH manager 2/2] partially fix #5392: pvedaemon: make number of workers configurable

2025-07-29 Thread Thomas Lamprecht
On Tue, 29 Jul 2025 17:50:57 +0200, Friedrich Weber wrote: > The number of pvedaemon worker processes is currently hardcoded to 3. > This may not be enough for automation-heavy workloads that trigger a > lot of API requests that are synchronously handled by pvedaemon. > > Hence, read /etc/default/

[pve-devel] [PATCH ha-manager v4 08/19] test: ha tester: add test cases for future node affinity rules

2025-07-29 Thread Daniel Kral
Add test cases to verify that the node affinity rules, which will be added in a following patch, are functionally equivalent to the existing HA groups. These test cases verify the following scenarios for (a) unrestricted and (b) restricted groups (i.e. non-strict and strict node affinity rules):

[pve-devel] [PATCH ha-manager v4 19/19] manager: persistently migrate ha groups to ha rules

2025-07-29 Thread Daniel Kral
Migrate the HA groups config to the HA resources and HA rules config persistently on disk and retry until it succeeds. The HA group config is already migrated in the HA Manager in-memory, but to persistently use them as HA node affinity rules, they must be migrated to the HA rules config. As the n

[pve-devel] [PATCH ha-manager v4 17/19] test: ha tester: replace any reference to groups with node affinity rules

2025-07-29 Thread Daniel Kral
As these test cases do work with node affinity rules now, correctly replace references to unrestricted/restricted groups with non-strict/strict node affinity rules and also replace "nofailback" with "disabled failback". Signed-off-by: Daniel Kral --- src/test/test-crs-static2/README

[pve-devel] [PATCH docs v4 1/2] ha: add documentation about ha rules and ha node affinity rules

2025-07-29 Thread Daniel Kral
Add documentation about HA Node Affinity rules and general documentation what HA rules are for in a format that is extendable with other HA rule types in the future. Signed-off-by: Daniel Kral append to ha intro Signed-off-by: Daniel Kral --- Makefile | 2 + gen-ha

[pve-devel] [PATCH ha-manager v4 16/19] test: ha tester: migrate groups to service and rules config

2025-07-29 Thread Daniel Kral
This is done, because in an upcoming patch, which persistently migrates HA groups to node affinity rules, it would make all these test cases try to migrate the HA groups config to the service and rules config. As this is not the responsibility of these test cases and HA groups become deprecated any

[pve-devel] [PATCH manager v4 4/4] ui: ha: replace ha groups with ha node affinity rules

2025-07-29 Thread Daniel Kral
Introduce HA rules and replace the existing HA groups with the new HA node affinity rules in the web interface. The HA rules components are designed to be extensible for other new rule types and allow users to display the errors of contradictory HA rules, if there are any, in addition to the other

[pve-devel] [PATCH ha-manager v4 11/19] manager: apply node affinity rules when selecting service nodes

2025-07-29 Thread Daniel Kral
Replace the HA group mechanism with the functionally equivalent node affinity rules' get_node_affinity(...), which enforces the node affinity rules defined in the rules config. This allows the $groups parameter to be replaced with the $rules parameter in select_service_node(...) as all behavior of

[pve-devel] [PATCH docs/ha-manager/manager v4 00/25] HA Rules

2025-07-29 Thread Daniel Kral
Here's a quick update on the core HA rules series. This cleans up the series so that all tests are running again and includes the missing ui patch that I didn't see missing last time. The persistent migration path has been tested for at least four full upgrade runs now, always with one node being

[pve-devel] [PATCH docs v4 2/2] ha: crs: add effects of ha node affinity rule on the crs scheduler

2025-07-29 Thread Daniel Kral
Add information about the effects that HA rules and HA Node Affinity rules have on the CRS scheduler and what can be expected by a user if they do changes to them. Signed-off-by: Daniel Kral --- ha-manager.adoc | 10 ++ 1 file changed, 10 insertions(+) diff --git a/ha-manager.adoc b/ha-

[pve-devel] [PATCH manager v4 2/4] ui: ha: remove ha groups from ha resource components

2025-07-29 Thread Daniel Kral
Remove the HA group column from the HA Resources grid view and the HA group selector from the HA Resources edit window, as these will be replaced by semantically equivalent HA node affinity rules in the next patch. Add the field 'failback' that is moved to the HA Resources config as part of the mi

[pve-devel] [PATCH ha-manager v4 09/19] resources: introduce failback property in ha resource config

2025-07-29 Thread Daniel Kral
Add the failback property in the HA resources config, which is functionally equivalent to the negation of the HA group's nofailback property. It will be used to migrate HA groups to HA node affinity rules. The 'failback' flag is set to be enabled by default as the HA group's nofailback property wa
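The mapping described above (the new 'failback' property is the negation of the old group 'nofailback' flag, and defaults to enabled) can be sketched as follows. This is an illustrative Python model only, the actual implementation is Perl in pve-ha-manager, and the function name `migrate_group_flags` is hypothetical:

```python
def migrate_group_flags(group_cfg):
    """Map an HA group's 'nofailback' flag to the new per-resource
    'failback' property: failback is nofailback negated, and since
    nofailback defaulted to disabled, failback defaults to enabled."""
    nofailback = group_cfg.get("nofailback", 0)
    return {"failback": 0 if nofailback else 1}
```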

[pve-devel] [PATCH ha-manager v4 04/19] rules: introduce node affinity rule plugin

2025-07-29 Thread Daniel Kral
Introduce the node affinity rule plugin to allow users to specify node affinity constraints for independent HA resources. Node affinity rules must specify one or more HA resources, one or more nodes with optional priorities (the default is 0), and a strictness, which is either * 0 (non-strict):
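The rule semantics summarized above (one or more resources, nodes with optional priorities defaulting to 0, and a non-strict/strict flag) can be sketched roughly like this. This is a simplified Python model for illustration, not the Perl plugin from the patch; the function name `pick_node` and the dict layout are hypothetical:

```python
def pick_node(rule, online_nodes):
    """Pick a target node under a node affinity rule.

    rule = {"nodes": {name: priority}, "strict": 0 or 1}
    Non-strict (0): prefer the rule's nodes, but fall back to any
    online node if none of them is available.
    Strict (1): only the rule's nodes are allowed at all.
    """
    candidates = [n for n in online_nodes if n in rule["nodes"]]
    if candidates:
        # among available rule nodes, the highest priority wins
        return max(candidates, key=lambda n: rule["nodes"][n])
    if rule["strict"]:
        return None  # no allowed node is online
    return online_nodes[0] if online_nodes else None
```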

[pve-devel] [PATCH ha-manager v4 07/19] manager: read and update rules config

2025-07-29 Thread Daniel Kral
Read the rules configuration in each round and update the canonicalized rules configuration if there were any changes since the last round, to reduce how often the rule set needs to be verified. Signed-off-by: Daniel Kral --- src/PVE/HA/Manager.pm | 20 +++- 1 file changed, 19 i

[pve-devel] [PATCH ha-manager v4 02/19] manager: improve signature of select_service_node

2025-07-29 Thread Daniel Kral
As the signature of select_service_node(...) has become rather long already, make it more compact by retrieving service- and affinity-related data directly from the service state in $sd and introduce a $node_preference parameter to distinguish the behaviors of $try_next and $best_scored, which have

[pve-devel] [PATCH ha-manager v4 12/19] test: add test cases for rules config

2025-07-29 Thread Daniel Kral
Add test cases to verify that the rule checkers correctly identify and remove HA rules from the rules to make the rule set feasible. For now, there only are HA Node Affinity rules, which verify: - Node Affinity rules retrieve the correct optional default values - Node Affinity rules, which specify

[pve-devel] [PATCH ha-manager v4 13/19] api: introduce ha rules api endpoints

2025-07-29 Thread Daniel Kral
Add CRUD API endpoints for HA rules, which assert whether the given properties for the rules are valid and will not make the existing rule set infeasible. Disallowing changes to the rule set via the API, which would make this and other rules infeasible, makes it safer for users of the HA Manager t

[pve-devel] [PATCH manager v4 1/4] api: ha: add ha rules api endpoints

2025-07-29 Thread Daniel Kral
Signed-off-by: Daniel Kral --- PVE/API2/HAConfig.pm | 8 +++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/PVE/API2/HAConfig.pm b/PVE/API2/HAConfig.pm index 35f49cbb..d29211fb 100644 --- a/PVE/API2/HAConfig.pm +++ b/PVE/API2/HAConfig.pm @@ -12,6 +12,7 @@ use PVE::JSONSchema qw

[pve-devel] [PATCH ha-manager v4 03/19] introduce rules base plugin

2025-07-29 Thread Daniel Kral
Add a rules base plugin to allow users to specify different kinds of HA rules in a single configuration file, which put constraints on the HA Manager's behavior. Signed-off-by: Daniel Kral --- debian/pve-ha-manager.install | 1 + src/PVE/HA/Makefile | 2 +- src/PVE/HA/Rules.pm

[pve-devel] [PATCH ha-manager v4 15/19] sim: do not create default groups for test cases

2025-07-29 Thread Daniel Kral
As none of the existing HA test cases rely on the default HA groups created by the simulated hardware anymore, create them only for the ha-simulator hardware. This is done, because in an upcoming patch, which persistently migrates HA groups to node affinity rules, it would unnecessarily fire the m

[pve-devel] [PATCH ha-manager v4 18/19] env: add property delete for update_service_config

2025-07-29 Thread Daniel Kral
Allow callers of update_service_config(...) to provide properties which should be deleted from a HA resource config. This is needed for the migration of HA groups, as the 'group' property must be removed to completely migrate these to the respective HA resource configs. Otherwise, these groups wo

[pve-devel] [PATCH manager v4 3/4] ui: ha: show failback flag in resources status view

2025-07-29 Thread Daniel Kral
As the HA groups' failback flag is now being part of the HA resources config, it should also be shown there instead of the previous HA groups view. Signed-off-by: Daniel Kral --- www/manager6/ha/Resources.js | 6 ++ www/manager6/ha/StatusView.js | 4 2 files changed, 10 insertions(+)

[pve-devel] [PATCH ha-manager v4 10/19] manager: migrate ha groups to node affinity rules in-memory

2025-07-29 Thread Daniel Kral
Migrate the currently configured groups to node affinity rules in-memory, so that they can be applied as such in the next patches and therefore replace HA groups internally. HA node affinity rules in their initial implementation are designed to be as restrictive as HA groups, i.e. only allow a HA

[pve-devel] [PATCH ha-manager v4 14/19] cli: expose ha rules api endpoints to ha-manager cli

2025-07-29 Thread Daniel Kral
Expose the HA rules API endpoints through the CLI in its own subcommand. The names of the subsubcommands are chosen to be consistent with the other commands provided by the ha-manager CLI for HA resources and groups, but grouped into a subcommand. The properties specified for the 'rules config' c

[pve-devel] [PATCH ha-manager v4 05/19] config, env, hw: add rules read and parse methods

2025-07-29 Thread Daniel Kral
Adds methods to the HA environment to read and write the rules configuration file for the different environment implementations. The HA Rules are initialized with property isolation since it is expected that other rule types will use similar property names with different semantic meanings and/or p

[pve-devel] [PATCH ha-manager v4 06/19] config: delete services from rules if services are deleted from config

2025-07-29 Thread Daniel Kral
Remove HA resources from rules, where these HA resources are used, if they are removed by delete_service_from_config(...), which is called by the HA resources' delete API endpoint and possibly external callers, e.g. if the HA resource is removed externally. If all of the rules' HA resources have b

[pve-devel] [PATCH ha-manager v4 01/19] tree-wide: make arguments for select_service_node explicit

2025-07-29 Thread Daniel Kral
Explicitly state all the parameters at all call sites for select_service_node(...) to clarify in which states these are. The call site in next_state_recovery(...) sets $best_scored to 1, as it should find the next best node when recovering from the failed node $current_node. All references to $bes

[pve-devel] applied: [PATCH pve-manager] use new pve-yew-mobile-gui

2025-07-29 Thread Thomas Lamprecht
On Tue, 22 Jul 2025 09:22:56 +0200, Dietmar Maurer wrote: > As replacement for the old sencha-touch based gui. > > Applied, thanks! I downgraded the dependency for now to a recommends and re-added a fallback for the old sencha touch UI in that case, but mostly so that I can move the pve-manager

[pve-devel] superseded: [RFC common/manager 0/3] arbitrary prefixes for pinning network interfaces

2025-07-29 Thread Stefan Hanreich
https://lore.proxmox.com/pve-devel/20250729171649.708219-1-s.hanre...@proxmox.com/T/#t On 7/24/25 4:49 PM, Stefan Hanreich wrote: > This patch series lifts the restriction for naming physical interfaces. > Previously we relied on a regex (PHYSICAL_NIC_RE) for determining whether an > interface was

[pve-devel] [PATCH common/manager v2 0/3] arbitrary prefixes for pinning network interfaces

2025-07-29 Thread Stefan Hanreich
This patch series lifts the restriction for naming physical interfaces. Previously we relied on a regex (PHYSICAL_NIC_RE) for determining whether an interface was physical or not. This patch series changes that, by querying the kernel for the type of the interface and using that to determine whethe

[pve-devel] [PATCH pve-common v2 1/1] inotify/interfaces: use ip link for detecting physical interfaces

2025-07-29 Thread Stefan Hanreich
The parser for /e/n/i relied on PHYSICAL_NIC_RE for detecting physical interfaces. In order to allow arbitrary interface names for pinning physical interfaces, switch over to detecting physical interfaces via 'ip link' instead. Signed-off-by: Stefan Hanreich --- src/PVE/INotify.pm | 25 +
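The switch from a name regex to querying the kernel could look roughly like the sketch below, which parses `ip -details -json link show`. The heuristic shown (an interface with no `linkinfo` section has no virtual link kind and is treated as physical) is an assumption made for illustration, not necessarily the exact check the patch performs:

```python
import json
import subprocess

def physical_interfaces(ip_json=None):
    """Return names of interfaces that look physical.

    ip_json: pre-fetched output of `ip -details -json link show`;
    if None, query the kernel directly. An interface without a
    'linkinfo' entry carries no virtual link kind, which this sketch
    takes as "physical" (illustrative heuristic only).
    """
    if ip_json is None:
        ip_json = subprocess.check_output(
            ["ip", "-details", "-json", "link", "show"], text=True)
    links = json.loads(ip_json)
    return [l["ifname"] for l in links
            if "linkinfo" not in l and l["ifname"] != "lo"]
```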

[pve-devel] [PATCH pve-manager v2 2/2] network-interface-pinning: allow arbitrary names

2025-07-29 Thread Stefan Hanreich
With the changes to physical interface detection in pve-common and pve-manager, it is now possible to use arbitrary names for physical interfaces in our network stack. This allows the removal of the existing, hardcoded, prefixes. Signed-off-by: Stefan Hanreich --- PVE/CLI/proxmox_network_interfa

[pve-devel] [PATCH pve-manager v2 1/2] pvestatd: pull metric: use ip link to detect physical interfaces

2025-07-29 Thread Stefan Hanreich
pve-common now allows arbitrary names for physical interfaces, without being restricted by PHYSICAL_NIC_RE. In order to detect physical interfaces, pvestatd now needs to query 'ip link' for the type of an interface instead of relying on the regular expression. On the receiving end, PullMetric cann

Re: [pve-devel] Feature Request: Add SPAN / RSPAN / ERSPAN Traffic Mirroring Support in PVE WebUI

2025-07-29 Thread Stefan Hanreich
On 7/29/25 2:42 PM, z...@zslab.cn wrote: > Dear Proxmox VE Development Team, > > Greetings! > > First of all, thank you very much for your continued efforts and improvements > to Proxmox VE. It has become an essential tool in our daily virtualization > environment, offering great stability,

[pve-devel] applied: [PATCH manager] fix #6534: ui: keep displaying help button in backup edit dialog

2025-07-29 Thread Thomas Lamprecht
On Fri, 18 Jul 2025 15:38:46 +0200, Shannon Sterz wrote: > previously the help button would disappear once either the > "Notifications" or "Retention" tabs was opened. this removes an > unnecessary extra container and sets the value for all tab so that the > help button stays present. > > Applie

[pve-devel] applied: [PATCH pve-manager v3] metrics add OpenTelemetry support

2025-07-29 Thread Thomas Lamprecht
On Tue, 22 Jul 2025 17:55:27 +0800, nansen.su wrote: > Add OpenTelemetry metric type classification to fix Prometheus compatibility > > Problem > > The OpenTelemetry plugin was exporting all metrics as gauge type, causing > Prometheus/Grafana to show warnings like: > PromQL info: metric

Re: [pve-devel] [PATCH storage 25/26] update tests

2025-07-29 Thread Max R. Carrara
On Tue Jul 29, 2025 at 1:15 PM CEST, Wolfgang Bumiller wrote: > Signed-off-by: Wolfgang Bumiller > --- There are a couple tests for LVM and ZFS that seem to fail; relevant logs are below. I've also added some comments inline further below where possible. == ./run_test_zfspoolplugin.pl

Re: [pve-devel] [PATCH docs v3] pvecm, network: add section on corosync over bonds

2025-07-29 Thread Friedrich Weber
Thanks for taking a look! Discussed with HD off-list: - having the justification for the recommendations in the docs is good - but since the justification somewhat complex, probably not good to have it directly in the beginning of the new section. - It might be better to have the recommendations

Re: [pve-devel] [PATCH storage v2] fix #5181: pbs: store and read passwords as unicode

2025-07-29 Thread Thomas Lamprecht
Am 23.07.25 um 15:00 schrieb Shannon Sterz: > -->8 snip 8<-- >> -PVE::Tools::file_set_contents($pwfile, "$password\n"); >> +PVE::Tools::file_set_contents($pwfile, "$password\n", undef, 1); > i know this is pre-existing, but i'd feel more comfortable forcing the > permissions here rather tha

[pve-devel] applied: [PATCH manager 3/6] pve8to9: backup retention: increase severity of having 'maxfiles' setting configured

2025-07-29 Thread Thomas Lamprecht
On Fri, 18 Jul 2025 14:51:14 +0200, Fiona Ebner wrote: > The 'maxfiles' setting is dropped with Proxmox VE 9, so make having > the setting configured a proper error rather than just a warning. > > Applied to stable-8 branch, thanks! This does not really hurt to have even if we do not follow thr

Re: [pve-devel] superseded: [RFC http-server/manager 0/3] fix #5392: pveproxy, pvedaemon: make number of worker processes configurable

2025-07-29 Thread Friedrich Weber
Superseded by: https://lore.proxmox.com/pve-devel/20250729155227.157120-1-f.we...@proxmox.com/ ___ pve-devel mailing list pve-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

Re: [pve-devel] [PATCH http-server 1/1] api server: proxy config: read MAX_WORKERS integer key

2025-07-29 Thread Friedrich Weber
On 29/07/2025 13:44, Thomas Lamprecht wrote: > Am 29.07.25 um 13:35 schrieb Friedrich Weber: >> Read the MAX_WORKERS value in /etc/default/. If it is not >> an integer, ignore and warn. >> >> Signed-off-by: Friedrich Weber >> --- >> src/PVE/APIServer/Utils.pm | 7 +++ >> 1 file changed, 7 ins

[pve-devel] [PATCH docs/http-server/manager 0/4] fix #5392: pveproxy, pvedaemon: make number of worker processes configurable

2025-07-29 Thread Friedrich Weber
From [1]: For pveproxy and pvedaemon, max_workers is currently hardcoded to 3 in PVE::Service::{pveproxy,pvedaemon}. This may not be enough for automation-heavy workloads that trigger a lot of API requests that are synchronously handled by pveproxy or pvedaemon, see e.g. #5391. This was also encou

[pve-devel] [PATCH manager 2/2] partially fix #5392: pvedaemon: make number of workers configurable

2025-07-29 Thread Friedrich Weber
The number of pvedaemon worker processes is currently hardcoded to 3. This may not be enough for automation-heavy workloads that trigger a lot of API requests that are synchronously handled by pvedaemon. Hence, read /etc/default/pvedaemon when starting pvedaemon and allow overriding the number of

[pve-devel] [PATCH manager 1/2] partially fix #5392: pveproxy: make number of workers configurable

2025-07-29 Thread Friedrich Weber
The number of pveproxy worker processes is currently hardcoded to 3. This may not be enough for automation-heavy workloads that trigger a lot of API requests that are synchronously handled by pveproxy. Hence, allow specifying MAX_WORKERS in /etc/default/pveproxy to override the number of workers.

[pve-devel] [PATCH http-server 1/1] api server: proxy config: read MAX_WORKERS integer key

2025-07-29 Thread Friedrich Weber
Read the MAX_WORKERS value in /etc/default/. If it is not an integer between 0 and 128, ignore and warn. The lower limit was chosen because at least one worker process must exist. The upper limit was chosen because more than 127 worker processes should not be necessary and a positive impact on per
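The validation described above can be sketched as follows. The real implementation is Perl (in the http-server's proxy-config reading code); this is a hedged Python model of the described rule, accepting only an integer strictly between 0 and 128, warning and keeping the default otherwise. The function name `read_max_workers` and the default of 3 workers (the previously hardcoded value) are taken from the surrounding patch descriptions:

```python
import re

def read_max_workers(value, default=3):
    """Validate a MAX_WORKERS setting read from /etc/default/<service>.

    Accept only an integer strictly between 0 and 128: at least one
    worker process must exist, and more than 127 is not expected to
    help. On any other value, warn and keep the default.
    """
    if value is None:
        return default
    if not re.fullmatch(r"\d+", value.strip()) or not 0 < int(value) < 128:
        print(f"warning: ignoring invalid MAX_WORKERS value '{value}'")
        return default
    return int(value)
```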

[pve-devel] [PATCH docs 1/1] pveproxy, pvedaemon: document MAX_WORKERS setting

2025-07-29 Thread Friedrich Weber
For pveproxy, add it to the description of settings that can be adjusted in /etc/default/pveproxy. For pvedaemon, this is currently the only setting that can be adjusted in /etc/default/pvedaemon. Signed-off-by: Friedrich Weber --- pvedaemon.adoc | 17 + pveproxy.adoc | 18

[pve-devel] applied: [PATCH common v3 1/6] schema: support sizes with verbose suffixes {K, M, G, T}iB

2025-07-29 Thread Thomas Lamprecht
On Wed, 23 Jul 2025 13:57:36 +0200, Fiona Ebner wrote: > The single-letter suffixes are ambiguous and especially in the context > of disks, the powers of ten are usually used. Proxmox VE uses > multiples of 1024 however. > > This is in preparation to adapt format_size() to prefer the verbose > suf

[pve-devel] partially-applied: [RFC storage 00/26+10+3] unify vtype and content-type and

2025-07-29 Thread Fiona Ebner
Thanks! Applied the following patches to start out, ordering 05/26 first with a slight fix-up to avoid breaking tests: [pve-devel] [PATCH storage 01/26] btrfs: remove unnecessary mkpath call [pve-devel] [PATCH storage 02/26] parse_volname: remove openvz 'rootdir' case [pve-devel] [PATCH storage 03

Re: [pve-devel] [PATCH storage 06/26] common: use v5.36

2025-07-29 Thread Thomas Lamprecht
Am 29.07.25 um 15:59 schrieb Fiona Ebner: >> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm >> index 746a262..222dc76 100644 >> --- a/src/PVE/Storage/Common.pm >> +++ b/src/PVE/Storage/Common.pm >> @@ -1,7 +1,6 @@ >> package PVE::Storage::Common; >> >> -use strict; >> -use wa

Re: [pve-devel] [PATCH storage 06/26] common: use v5.36

2025-07-29 Thread Fiona Ebner
Am 29.07.25 um 1:16 PM schrieb Wolfgang Bumiller: > Signed-off-by: Wolfgang Bumiller > --- > src/PVE/Storage/Common.pm | 3 +-- > 1 file changed, 1 insertion(+), 2 deletions(-) > > diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm > index 746a262..222dc76 100644 > --- a/src/PVE/

Re: [pve-devel] [PATCH storage 07/26] common: add pve-storage-vtype standard option with new types

2025-07-29 Thread Fiona Ebner
Am 29.07.25 um 1:16 PM schrieb Wolfgang Bumiller: > This adds the vm-vol and ct-vol vtypes introduced in the upcoming > commits. > > Signed-off-by: Wolfgang Bumiller > --- > src/PVE/Storage/Common.pm | 17 + > 1 file changed, 17 insertions(+) > > diff --git a/src/PVE/Storage/Com

[pve-devel] applied: [PATCH pve-storage v3] fix #6561: zfspool: track refquota for subvolumes via user properties

2025-07-29 Thread Fiona Ebner
On Tue, 29 Jul 2025 14:11:51 +0200, Shannon Sterz wrote: > zfs itself does not track the refquota per snapshot so we need to handle > this ourselves. otherwise rolling back a volume that has been resized > since the snapshot will retain the new size. this is problematic, as > it means the value in the

[pve-devel] applied: [PATCH manager stable-8+master] pve8to9: add check and script to ensure that 'keyring' option for external RBD storages is set

2025-07-29 Thread Thomas Lamprecht
On Wed, 16 Jul 2025 14:46:59 +0200, Fiona Ebner wrote: > With the switch from QEMU's -drive to -blockdev, it is not possible > anymore to pass along the Ceph 'keyring' option via the QEMU > commandline anymore, as was previously done for externally managed RBD > storages. For such storages, it is n

[pve-devel] applied-series: [PATCH storage/qemu-server v2 0/5] avoid absolute qcow2 references

2025-07-29 Thread Fiona Ebner
Am 29.07.25 um 1:53 PM schrieb Fabian Grünbichler: > we don't want qcow2 files to reference their backing chains via > absolute paths, as that makes renaming the base dir or VG of the storage > impossible. in most places, qemu already allows simply passing a > filename as backing-file reference, wh

Re: [pve-devel] [PATCH qemu-server v4 2/4] vmstatus: add memhost for host view of vm mem consumption

2025-07-29 Thread Lukas Wagner
On Sat Jul 26, 2025 at 3:06 AM CEST, Aaron Lauterer wrote: > +my $fh = > IO::File->new("/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs", "r"); > +if ($fh) { > +while (my $childPid = <$fh>) { > +chomp($childPid); nit: should be snake_case > +

[pve-devel] [PATCH qemu-server 10/10] tests: regenerate cfg2cmd files

2025-07-29 Thread Wolfgang Bumiller
Only node name hashes change at this point. Signed-off-by: Wolfgang Bumiller --- src/test/cfg2cmd/aio.conf.cmd | 28 +-- src/test/cfg2cmd/bootorder-empty.conf.cmd | 6 ++-- src/test/cfg2cmd/bootorder-legacy.conf.cmd| 6 ++-- src/test/cfg2cmd/bootorder.co

[pve-devel] Feature Request: Add SPAN / RSPAN / ERSPAN Traffic Mirroring Support in PVE WebUI

2025-07-29 Thread z...@zslab.cn
Dear Proxmox VE Development Team, Greetings! First of all, thank you very much for your continued efforts and improvements to Proxmox VE. It has become an essential tool in our daily virtualization environment, offering great stability, usability, and functionality. I'm writing to submit a

[pve-devel] applied: [RFC PATCH 1/2] frr: add networking.service as systemd dependency

2025-07-29 Thread Thomas Lamprecht
On Thu, 26 Jun 2025 15:12:12 +0200, Gabriel Goller wrote: > Add networking.service to the 'After' dependency directive. Guarantees that > the frr.service will start after the networking.service is done. > > We had some issues with data races between FRR and ifupdown [0], mostly > around the dummy

Re: [pve-devel] [PATCH docs v2] package repos: revise Ceph section

2025-07-29 Thread Max R. Carrara
On Fri Jul 25, 2025 at 12:34 PM CEST, Alexander Zeidler wrote: > - Start by mentioning the preconfigured Ceph repository and what options > there are for using Ceph (HCI and external cluster) > - Link to available installation methods (web-based wizard, CLI tool) > - Describe when and how to upgr

Re: [pve-devel] [PATCH many v4 00/31] Expand and migrate RRD data and add/change summary graphs

2025-07-29 Thread Lukas Wagner
On Sat Jul 26, 2025 at 3:05 AM CEST, Aaron Lauterer wrote: > This patch series does a few things. It expands the RRD format for nodes and > VMs. For all types (nodes, VMs, storage) we adjust the aggregation to align > them with the way they are done on the Backup Server. Therefore, we have new >

Re: [pve-devel] [PATCH storage v2 1/5] plugin: fix typo in rebase log message

2025-07-29 Thread Fabian Grünbichler
On July 29, 2025 2:04 pm, Fiona Ebner wrote: > Am 29.07.25 um 1:53 PM schrieb Fabian Grünbichler: >> by directly printing the to-be-executed command, instead of copying it which >> is >> error-prone. >> >> Signed-off-by: Fabian Grünbichler >> Reviewed-by: Fiona Ebner >> --- >> >> Notes: >>

Re: [pve-devel] [PATCH pve-storage v2] fix #6561: zfspool: track refquota for subvolumes via user properties

2025-07-29 Thread Shannon Sterz
Superseded-by: https://lore.proxmox.com/pve-devel/20250729121151.159797-1-s.st...@proxmox.com/T/#u On Tue Jul 29, 2025 at 11:41 AM CEST, Shannon Sterz wrote: > zfs itself does not track the refquota per snapshot so we need to handle > this ourselves. otherwise rolling back a volume that has been re

[pve-devel] [PATCH pve-storage v3] fix #6561: zfspool: track refquota for subvolumes via user properties

2025-07-29 Thread Shannon Sterz
zfs itself does not track the refquota per snapshot so we need to handle this ourselves. otherwise rolling back a volume that has been resized since the snapshot will retain the new size. this is problematic, as it means the value in the guest config no longer matches the size of the disk on the s

Re: [pve-devel] [PATCH manager v4 14/15] d/postinst: run promox-rrd-migration-tool

2025-07-29 Thread Lukas Wagner
On Sat Jul 26, 2025 at 3:06 AM CEST, Aaron Lauterer wrote: > Signed-off-by: Aaron Lauterer > --- > > Notes: > currently it checks for lt 9.0.0~12. should it only be applied to a > later version, don't forget to adapt the version check! > > I tested it by bumping the version to 9.0

Re: [pve-devel] [PATCH storage v2 1/5] plugin: fix typo in rebase log message

2025-07-29 Thread Fiona Ebner
Am 29.07.25 um 1:53 PM schrieb Fabian Grünbichler: > by directly printing the to-be-executed command, instead of copying it which > is > error-prone. > > Signed-off-by: Fabian Grünbichler > Reviewed-by: Fiona Ebner > --- > > Notes: > v2: join command instead of fixing manually copied messa

[pve-devel] superseded: [PATCH storage/qemu-server 0/5] avoid absolute qcow2 references

2025-07-29 Thread Fabian Grünbichler
by https://lore.proxmox.com/pve-devel/20250729115320.579286-1-f.gruenbich...@proxmox.com/T/#t On July 29, 2025 9:38 am, Fabian Grünbichler wrote: > we don't want qcow2 files to reference their backing chains via > absolute paths, as that makes renaming the base dir or VG of the storage > impossib

Re: [pve-devel] missing udev properties in PVE9 beta

2025-07-29 Thread Thomas Lamprecht
Hi Josh, Am 28.07.25 um 16:43 schrieb Joshua Huber: > Thanks for creating a Debian bug & cherry-picked MR. Fingers crossed > the changes flow through into PVE9. :) FYI: We just uploaded a build of sg3-utils with your patch included into the pve-test trixie repo, it's version 1.48-2+pmx1. I could

[pve-devel] [PATCH qemu-server v2 5/5] blockdev-stream/-commit: make backing file relative

2025-07-29 Thread Fabian Grünbichler
to avoid the resulting qcow2 file referencing its backing file via an absolute path, which makes renaming the base of the storage impossible. Signed-off-by: Fabian Grünbichler --- Notes: v2: move logic into its own helper src/PVE/QemuServer/Blockdev.pm | 24 ++-- 1 file

[pve-devel] [PATCH storage v2 2/5] lvm plugin: fix typo in rebase log message

2025-07-29 Thread Fabian Grünbichler
this was copied over from Plugin.pm Signed-off-by: Fabian Grünbichler Reviewed-by: Fiona Ebner --- src/PVE/Storage/LVMPlugin.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm index c1f5474..5a84e82 100644 --- a/src

[pve-devel] [PATCH storage/qemu-server v2 0/5] avoid absolute qcow2 references

2025-07-29 Thread Fabian Grünbichler
we don't want qcow2 files to reference their backing chains via absolute paths, as that makes renaming the base dir or VG of the storage impossible. in most places, qemu already allows simply passing a filename as backing-file reference, which will be interpreted as a reference relative to the back
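The core idea of the series can be illustrated with a small path sketch (hypothetical image paths, not taken from the patches): a backing-file reference stored relative to the image's own directory survives a rename of the common base directory, while an absolute one does not.

```python
import os.path

def backing_reference(image_path, backing_path):
    """Return a backing-file reference relative to the image's directory,
    so renaming the shared base directory does not break the backing chain.
    Illustrative sketch only; the actual plugins pass such relative
    references to qemu-img/QEMU."""
    return os.path.relpath(backing_path, start=os.path.dirname(image_path))

ref = backing_reference(
    "/mnt/store/images/101/vm-101-disk-0.qcow2",
    "/mnt/store/images/100/base-100-disk-0.qcow2",
)
print(ref)  # ../100/base-100-disk-0.qcow2
```

Because the reference is resolved relative to the qcow2 file itself, moving `/mnt/store` to `/mnt/newstore` leaves the chain intact.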

[pve-devel] [PATCH storage v2 4/5] lvm plugin: use relative path for qcow2 rebase command

2025-07-29 Thread Fabian Grünbichler
otherwise the resulting qcow2 file will contain an absolute path, which makes renaming the backing VG of the storage impossible. Signed-off-by: Fabian Grünbichler Reviewed-by: Fiona Ebner --- Notes: v2: drop unused variable src/PVE/Storage/LVMPlugin.pm | 4 ++-- 1 file changed, 2 insertio

[pve-devel] [PATCH storage v2 3/5] plugin: use relative path for qcow2 rebase command

2025-07-29 Thread Fabian Grünbichler
otherwise the resulting qcow2 file will contain an absolute path, which makes changing the backing path of the directory storage impossible. Signed-off-by: Fabian Grünbichler Reviewed-by: Fiona Ebner Tested-by: Fiona Ebner --- src/PVE/Storage/Plugin.pm | 4 ++-- 1 file changed, 2 insertions(+)

[pve-devel] [PATCH storage v2 1/5] plugin: fix typo in rebase log message

2025-07-29 Thread Fabian Grünbichler
by directly printing the to-be-executed command, instead of copying it which is error-prone. Signed-off-by: Fabian Grünbichler Reviewed-by: Fiona Ebner --- Notes: v2: join command instead of fixing manually copied message src/PVE/Storage/Plugin.pm | 2 +- 1 file changed, 1 insertion(+), 1

Re: [pve-devel] [PATCH ha-manager v3 04/13] rules: add global checks between node and resource affinity rules

2025-07-29 Thread Michael Köppl
On 7/4/25 20:20, Daniel Kral wrote: > > diff --git a/src/PVE/HA/Rules.pm b/src/PVE/HA/Rules.pm > index 3121424..892e7aa 100644 > --- a/src/PVE/HA/Rules.pm > +++ b/src/PVE/HA/Rules.pm > @@ -6,6 +6,7 @@ use warnings; > use PVE::JSONSchema qw(get_standard_option); > use PVE::Tools; > > +use PVE::

Re: [pve-devel] [PATCH http-server 1/1] api server: proxy config: read MAX_WORKERS integer key

2025-07-29 Thread Thomas Lamprecht
Am 29.07.25 um 13:35 schrieb Friedrich Weber: > Read the MAX_WORKERS value in /etc/default/. If it is not > an integer, ignore and warn. > > Signed-off-by: Friedrich Weber > --- > src/PVE/APIServer/Utils.pm | 7 +++ > 1 file changed, 7 insertions(+) > > diff --git a/src/PVE/APIServer/Utils.

[pve-devel] [PATCH manager 2/2] partially fix #5392: pvedaemon: make number of workers configurable

2025-07-29 Thread Friedrich Weber
The number of pvedaemon worker processes is currently hardcoded to 3. This may not be enough for automation-heavy workloads that trigger a lot of API requests that are synchronously handled by pvedaemon. Hence, read /etc/default/pvedaemon when starting pvedaemon and allow overriding the number of

[pve-devel] [PATCH http-server 1/1] api server: proxy config: read MAX_WORKERS integer key

2025-07-29 Thread Friedrich Weber
Read the MAX_WORKERS value in /etc/default/. If it is not an integer, ignore and warn. Signed-off-by: Friedrich Weber --- src/PVE/APIServer/Utils.pm | 7 +++ 1 file changed, 7 insertions(+) diff --git a/src/PVE/APIServer/Utils.pm b/src/PVE/APIServer/Utils.pm index 1430c98..f2c4892 100644 --
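The described behavior — read MAX_WORKERS from an /etc/default-style file, warn and fall back if it is not an integer — can be sketched as follows (a Python illustration of the logic; the actual patch implements this in Perl in PVE::APIServer::Utils, and the default of 3 workers comes from the cover letter):

```python
import re
import sys

def read_max_workers(path, default=3):
    """Parse MAX_WORKERS from a key=value file; ignore non-integer
    values with a warning and fall back to the default.
    Sketch of the patch's described behavior, not the Perl code itself."""
    try:
        with open(path) as fh:
            for line in fh:
                m = re.match(r'^\s*MAX_WORKERS\s*=\s*(.*?)\s*$', line)
                if m:
                    value = m.group(1)
                    if re.fullmatch(r'\d+', value):
                        return int(value)
                    # warn and keep scanning / fall back to the default
                    print(f"ignoring non-integer MAX_WORKERS value '{value}'",
                          file=sys.stderr)
    except FileNotFoundError:
        pass  # no override file present: keep the hardcoded default
    return default
```

A missing file or a malformed value thus behaves exactly like having no override at all.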

[pve-devel] [RFC http-server/manager 0/3] fix #5392: pveproxy, pvedaemon: make number of worker processes configurable

2025-07-29 Thread Friedrich Weber
From [1]: For pveproxy and pvedaemon, max_workers is currently hardcoded to 3 in PVE::Service::{pveproxy,pvedaemon}. This may not be enough for automation-heavy workloads that trigger a lot of API requests that are synchronously handled by pveproxy or pvedaemon, see e.g. #5391. This was also encou

[pve-devel] [PATCH manager 1/2] partially fix #5392: pveproxy: make number of workers configurable

2025-07-29 Thread Friedrich Weber
The number of pveproxy worker processes is currently hardcoded to 3. This may not be enough for automation-heavy workloads that trigger a lot of API requests that are synchronously handled by pveproxy. Hence, allow specifying MAX_WORKERS in /etc/default/pveproxy to override the number of workers.
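Per the patch description, the override lives in an /etc/default file. A minimal sketch of what such a file might contain (the key follows the patch; the value 8 is an arbitrary example):

```shell
# /etc/default/pveproxy -- sketch based on the patch description.
# MAX_WORKERS overrides the hardcoded default of 3 worker processes;
# non-integer values are ignored with a warning.
MAX_WORKERS=8
```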

[pve-devel] [PATCH storage 17/26] plugins: add vtype parameter to alloc_image

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/PVE/Storage/BTRFSPlugin.pm | 25 src/PVE/Storage/ESXiPlugin.pm| 2 +- src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +- src/PVE/Storage/ISCSIPlugin.pm | 2 +- src/PVE/Storage/LVMPlugin.pm | 35 - src/P

[pve-devel] [PATCH storage 20/26] plugins: update rename_volumes methods

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/PVE/Storage/BTRFSPlugin.pm | 12 ++-- src/PVE/Storage/LVMPlugin.pm | 11 +-- src/PVE/Storage/Plugin.pm| 26 ++ src/PVE/Storage/RBDPlugin.pm | 11 +-- src/PVE/Storage/ZFSPoolPlugin.pm | 11 ++

[pve-devel] [PATCH qemu-server 04/10] expect 'vm-vol' vtype wherever 'images' was expected

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/PVE/API2/Qemu.pm | 16 +--- src/PVE/QemuMigrate.pm | 3 ++- src/PVE/QemuServer.pm | 6 -- 3 files changed, 15 insertions(+), 10 deletions(-) diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm index 76883a5b..deb2eae8 100644 ---

[pve-devel] [PATCH container 1/3] add vtype to vdisk_alloc and vdisk_clone calls

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/PVE/API2/LXC.pm | 9 +++-- src/PVE/LXC.pm | 20 2 files changed, 19 insertions(+), 10 deletions(-) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index a56c441..aa11fc6 100644 --- a/src/PVE/API2/LXC.pm +++ b/src/PVE/

[pve-devel] [PATCH qemu-server 05/10] tests: update QmMock to support vtypes

2025-07-29 Thread Wolfgang Bumiller
This currently breaks the tests. The accompanying test case fixes are automated via shell commands in the next commit. Signed-off-by: Wolfgang Bumiller --- src/test/MigrationTest/QmMock.pm | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/test/MigrationTest/QmMock.

[pve-devel] [PATCH qemu-server 07/10] make tidy

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/test/run_qemu_img_convert_tests.pl | 53 ++ 1 file changed, 38 insertions(+), 15 deletions(-) diff --git a/src/test/run_qemu_img_convert_tests.pl b/src/test/run_qemu_img_convert_tests.pl index 2acbbef4..cfb1586f 100755 --- a/src/t

[pve-devel] [PATCH storage 25/26] update tests

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/test/list_volumes_test.pm | 73 -- src/test/parse_volname_test.pm | 34 -- src/test/path_to_volume_id_test.pm | 27 +++ src/test/run_test_lvmplugin.pl | 11 +++-- src/test/run_test_zfspoolplugin.

[pve-devel] [PATCH storage 21/26] plugins: update volume_import methods

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/PVE/Storage/BTRFSPlugin.pm | 11 +- src/PVE/Storage/LVMPlugin.pm | 12 ++- src/PVE/Storage/LvmThinPlugin.pm | 62 src/PVE/Storage/Plugin.pm| 12 ++- src/PVE/Storage/RBDPlugin.pm | 13 +-- s

[pve-devel] [PATCH storage 19/26] plugins: update clone_image methods

2025-07-29 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller --- src/PVE/Storage/BTRFSPlugin.pm | 12 ++-- src/PVE/Storage/ESXiPlugin.pm| 2 +- src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +- src/PVE/Storage/ISCSIPlugin.pm | 2 +- src/PVE/Storage/LVMPlugin.pm | 2 +- src/PVE/Storage/LvmT
