This is another difference from the AppArmor 4.0 userspace. We need to
explicitly enable user namespaces in the generated profile, at least
when nesting is enabled.
Signed-off-by: Wolfgang Bumiller
---
src/PVE/LXC.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PV
This patch implements the UI components for the hidden storage feature:
- Add hidden checkbox to iSCSI storage edit dialog
- Filter hidden storages from resource tree display
- Include hidden property in storage API response
- Show hidden status in storage management view
This is part
This patch adds a 'hidden' boolean option to the iSCSI storage plugin.
When enabled, it allows hiding the storage from the resource tree in
the web interface while keeping it functional for actual storage operations.
This is part 1 of 2 patches that together implement hidden storage
supp
On Wed, 23 Jul 2025 12:54:37 +0200, Maximiliano Sandoval wrote:
> In commit c26b474 the if condition was removed in favor of the if guard
> on the higher level. However, this introduced a functional change: the
> watchdog now updates immediately after update_watchdog is set to 0,
> potentially causin
On Tue, 29 Jul 2025 19:16:45 +0200, Stefan Hanreich wrote:
> With the changes to physical interface detection in pve-common and
> pve-manager, it is now possible to use arbitrary names for physical
> interfaces in our network stack. This allows the removal of the
> existing, hardcoded, prefixes.
>
On Tue, 29 Jul 2025 19:16:43 +0200, Stefan Hanreich wrote:
> The parser for /e/n/i relied on PHYSICAL_NIC_RE for detecting physical
> interfaces. In order to allow arbitrary interface names for pinning
> physical interfaces, switch over to detecting physical interfaces via
> 'ip link' instead.
>
>
On Tue, 29 Jul 2025 19:16:44 +0200, Stefan Hanreich wrote:
> pve-common now allows arbitrary names for physical interfaces, without
> being restricted by PHYSICAL_NIC_RE. In order to detect physical
> interfaces, pvestatd now needs to query 'ip link' for the type of an
> interface instead of relyin
On Tue, 29 Jul 2025 17:50:58 +0200, Friedrich Weber wrote:
> For pveproxy, add it to the description of settings that can be
> adjusted in /etc/default/pveproxy.
>
> For pvedaemon, this is currently the only setting that can be adjusted
> in /etc/default/pvedaemon.
>
>
> [...]
Applied, thanks!
On Tue, 29 Jul 2025 17:50:55 +0200, Friedrich Weber wrote:
> Read the MAX_WORKERS value in /etc/default/. If it is not
> an integer between 0 and 128, ignore and warn.
>
> The lower limit was chosen because at least one worker process must
> exist. The upper limit was chosen because more than 127
On Tue, 29 Jul 2025 17:50:56 +0200, Friedrich Weber wrote:
> The number of pveproxy worker processes is currently hardcoded to 3.
> This may not be enough for automation-heavy workloads that trigger a
> lot of API requests that are synchronously handled by pveproxy.
>
> Hence, allow specifying MAX
On Tue, 29 Jul 2025 17:50:57 +0200, Friedrich Weber wrote:
> The number of pvedaemon worker processes is currently hardcoded to 3.
> This may not be enough for automation-heavy workloads that trigger a
> lot of API requests that are synchronously handled by pvedaemon.
>
> Hence, read /etc/default/
Add test cases to verify that the node affinity rules, which will be
added in a following patch, are functionally equivalent to the
existing HA groups.
These test cases verify the following scenarios for (a) unrestricted and
(b) restricted groups (i.e. non-strict and strict node affinity rules):
Migrate the HA groups config to the HA resources and HA rules config
persistently on disk and retry until it succeeds. The HA group config is
already migrated in the HA Manager in-memory, but to persistently use
them as HA node affinity rules, they must be migrated to the HA rules
config.
As the n
As these test cases now work with node affinity rules, correctly
replace references to unrestricted/restricted groups with
non-strict/strict node affinity rules and also replace "nofailback" with
"disabled failback".
Signed-off-by: Daniel Kral
---
src/test/test-crs-static2/README
Add documentation about HA Node Affinity rules and general documentation
on what HA rules are for, in a format that is extendable with other HA
rule types in the future.
Signed-off-by: Daniel Kral
append to ha intro
Signed-off-by: Daniel Kral
---
Makefile | 2 +
gen-ha
This is done because an upcoming patch, which persistently migrates HA
groups to node affinity rules, would make all these test cases try to
migrate the HA groups config to the service and rules config. As this
is not the responsibility of these test cases and HA groups become
deprecated any
Introduce HA rules and replace the existing HA groups with the new HA
node affinity rules in the web interface.
The HA rules components are designed to be extensible to other new rule
types and allow users to display the errors of contradictory HA rules,
if there are any, in addition to the other
Replace the HA group mechanism with the functionally equivalent node
affinity rules' get_node_affinity(...), which enforces the node affinity
rules defined in the rules config.
This allows the $groups parameter to be replaced with the $rules
parameter in select_service_node(...) as all behavior of
Here's a quick update on the core HA rules series. This cleans up the
series so that all tests are running again and includes the missing ui
patch that I didn't see missing last time.
The persistent migration path has been tested for at least four full
upgrade runs now, always with one node being
Add information about the effects that HA rules and HA Node Affinity
rules have on the CRS scheduler and what a user can expect if they make
changes to them.
Signed-off-by: Daniel Kral
---
ha-manager.adoc | 10 ++
1 file changed, 10 insertions(+)
diff --git a/ha-manager.adoc b/ha-
Remove the HA group column from the HA Resources grid view and the HA
group selector from the HA Resources edit window, as these will be
replaced by semantically equivalent HA node affinity rules in the next
patch.
Add the field 'failback' that is moved to the HA Resources config as
part of the mi
Add the failback property in the HA resources config, which is
functionally equivalent to the negation of the HA group's nofailback
property. It will be used to migrate HA groups to HA node affinity
rules.
The 'failback' flag is set to be enabled by default as the HA group's
nofailback property wa
Introduce the node affinity rule plugin to allow users to specify node
affinity constraints for independent HA resources.
Node affinity rules must specify one or more HA resources, one or more
nodes with optional priorities (the default is 0), and a strictness,
which is either
* 0 (non-strict):
Read the rules configuration in each round and update the canonicalized
rules configuration if there were any changes since the last round, to
reduce the number of times the rule set is verified.
Signed-off-by: Daniel Kral
---
src/PVE/HA/Manager.pm | 20 +++-
1 file changed, 19 i
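The change-detection idea described above can be sketched as a small digest-based cache. This is an illustrative model only, not the actual Manager.pm code; the class name, the use of SHA-1, and the stand-in canonicalization are all assumptions.

```python
import hashlib

# Illustrative sketch (not the actual PVE::HA::Manager code):
# re-canonicalize the rules only when the config changed since the
# last round, tracked via a content digest.
class RulesCache:
    def __init__(self):
        self._digest = None
        self._canonical = None

    def get(self, raw_config):
        digest = hashlib.sha1(raw_config.encode()).hexdigest()
        if digest != self._digest:
            # config changed since the last round: verify again
            self._digest = digest
            self._canonical = self._canonicalize(raw_config)
        return self._canonical

    def _canonicalize(self, raw_config):
        # stand-in for the real verification: strip blanks, sort lines
        return sorted(l for l in raw_config.splitlines() if l.strip())

cache = RulesCache()
first = cache.get("rule: b\nrule: a\n")
again = cache.get("rule: b\nrule: a\n")  # unchanged: no re-verification
print(first == again)  # -> True
```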
As the signature of select_service_node(...) has already become rather
long, make it more compact by retrieving service- and affinity-related
data directly from the service state in $sd, and introduce a
$node_preference parameter to distinguish the behaviors of $try_next
and $best_scored, which have
Add test cases to verify that the rule checkers correctly identify and
remove HA rules from the rules to make the rule set feasible. For now,
there only are HA Node Affinity rules, which verify:
- Node Affinity rules retrieve the correct optional default values
- Node Affinity rules, which specify
Add CRUD API endpoints for HA rules, which assert whether the given
properties for the rules are valid and will not make the existing rule
set infeasible.
Disallowing changes to the rule set via the API that would make this
and other rules infeasible makes it safer for users of the HA Manager
t
Signed-off-by: Daniel Kral
---
PVE/API2/HAConfig.pm | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/HAConfig.pm b/PVE/API2/HAConfig.pm
index 35f49cbb..d29211fb 100644
--- a/PVE/API2/HAConfig.pm
+++ b/PVE/API2/HAConfig.pm
@@ -12,6 +12,7 @@ use PVE::JSONSchema qw
Add a rules base plugin to allow users to specify different kinds of HA
rules in a single configuration file, which put constraints on the HA
Manager's behavior.
Signed-off-by: Daniel Kral
---
debian/pve-ha-manager.install | 1 +
src/PVE/HA/Makefile | 2 +-
src/PVE/HA/Rules.pm
As none of the existing HA test cases rely on the default HA groups
created by the simulated hardware anymore, create them only for the
ha-simulator hardware.
This is done because an upcoming patch, which persistently migrates
HA groups to node affinity rules, would otherwise unnecessarily fire
the m
Allow callers of update_service_config(...) to provide properties which
should be deleted from a HA resource config.
This is needed for the migration of HA groups, as the 'group' property
must be removed to completely migrate these to the respective HA
resource configs. Otherwise, these groups wo
As the HA groups' failback flag is now part of the HA resources
config, it should also be shown there instead of in the previous HA
groups view.
Signed-off-by: Daniel Kral
---
www/manager6/ha/Resources.js | 6 ++
www/manager6/ha/StatusView.js | 4
2 files changed, 10 insertions(+)
Migrate the currently configured groups to node affinity rules
in-memory, so that they can be applied as such in the next patches and
therefore replace HA groups internally.
HA node affinity rules in their initial implementation are designed to
be as restrictive as HA groups, i.e. only allow a HA
Expose the HA rules API endpoints through the CLI in its own subcommand.
The names of the subsubcommands are chosen to be consistent with the
other commands provided by the ha-manager CLI for HA resources and
groups, but grouped into a subcommand.
The properties specified for the 'rules config' c
Add methods to the HA environment to read and write the rules
configuration file for the different environment implementations.
The HA Rules are initialized with property isolation since it is
expected that other rule types will use similar property names with
different semantic meanings and/or p
Remove HA resources from rules, where these HA resources are used, if
they are removed by delete_service_from_config(...), which is called by
the HA resources' delete API endpoint and possibly external callers,
e.g. if the HA resource is removed externally.
If all of the rules' HA resources have b
Explicitly state all the parameters at all call sites of
select_service_node(...) to clarify which state each of these is in.
The call site in next_state_recovery(...) sets $best_scored to 1, as it
should find the next best node when recovering from the failed node
$current_node. All references to $bes
On Tue, 22 Jul 2025 09:22:56 +0200, Dietmar Maurer wrote:
> As replacement for the old sencha-touch based gui.
>
>
Applied, thanks! I downgraded the dependency for now to a recommends and
re-added a fallback for the old sencha touch UI in that case, but mostly so
that I can move the pve-manager
https://lore.proxmox.com/pve-devel/20250729171649.708219-1-s.hanre...@proxmox.com/T/#t
On 7/24/25 4:49 PM, Stefan Hanreich wrote:
> This patch series lifts the restriction for naming physical interfaces.
> Previously we relied on a regex (PHYSICAL_NIC_RE) for determining whether an
> interface was
This patch series lifts the restriction for naming physical interfaces.
Previously we relied on a regex (PHYSICAL_NIC_RE) for determining whether an
interface was physical or not. This patch series changes that, by querying the
kernel for the type of the interface and using that to determine whethe
The parser for /e/n/i relied on PHYSICAL_NIC_RE for detecting physical
interfaces. In order to allow arbitrary interface names for pinning
physical interfaces, switch over to detecting physical interfaces via
'ip link' instead.
Signed-off-by: Stefan Hanreich
---
src/PVE/INotify.pm | 25 +
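The detection approach the patch describes can be sketched as follows. This is a hedged illustration, not the INotify.pm code: the JSON sample is a trimmed, made-up excerpt of 'ip -details -json link show' output, and the heuristic (virtual devices carry a 'linkinfo' kind, physical NICs do not) is the assumed classification rule; real code would also need to skip special devices like the loopback interface.

```python
import json

# Hypothetical, trimmed sample of 'ip -details -json link show' output.
sample = json.loads("""
[
  {"ifname": "enp1s0"},
  {"ifname": "myuplink"},
  {"ifname": "vmbr0", "linkinfo": {"info_kind": "bridge"}},
  {"ifname": "bond0", "linkinfo": {"info_kind": "bond"}}
]
""")

def physical_interfaces(links):
    # virtual devices (bridge, bond, vlan, ...) report a 'linkinfo'
    # kind in the detailed output; physical NICs do not
    return [l["ifname"] for l in links if "linkinfo" not in l]

print(physical_interfaces(sample))  # -> ['enp1s0', 'myuplink']
```

Note that with this approach the arbitrarily named "myuplink" is classified as physical, which a name regex like PHYSICAL_NIC_RE would have missed.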
With the changes to physical interface detection in pve-common and
pve-manager, it is now possible to use arbitrary names for physical
interfaces in our network stack. This allows the removal of the
existing, hardcoded, prefixes.
Signed-off-by: Stefan Hanreich
---
PVE/CLI/proxmox_network_interfa
pve-common now allows arbitrary names for physical interfaces, without
being restricted by PHYSICAL_NIC_RE. In order to detect physical
interfaces, pvestatd now needs to query 'ip link' for the type of an
interface instead of relying on the regular expression.
On the receiving end, PullMetric cann
On 7/29/25 2:42 PM, z...@zslab.cn wrote:
> Dear Proxmox VE Development Team,
>
> Greetings!
>
> First of all, thank you very much for your continued efforts and improvements
> to Proxmox VE. It has become an essential tool in our daily virtualization
> environment, offering great stability,
On Fri, 18 Jul 2025 15:38:46 +0200, Shannon Sterz wrote:
> previously the help button would disappear once either the
> "Notifications" or "Retention" tabs was opened. this removes an
> unnecessary extra container and sets the value for all tab so that the
> help button stays present.
>
>
Applie
On Tue, 22 Jul 2025 17:55:27 +0800, nansen.su wrote:
> Add OpenTelemetry metric type classification to fix Prometheus compatibility
>
> Problem
>
> The OpenTelemetry plugin was exporting all metrics as gauge type, causing
> Prometheus/Grafana to show warnings like:
> PromQL info: metric
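The fix being described amounts to emitting an explicit Prometheus type annotation per metric instead of exporting everything as a gauge. The sketch below is illustrative only; the metric names and the counter/gauge split are invented examples, not the plugin's actual mapping.

```python
# Illustrative sketch: annotate exported metrics with an explicit
# Prometheus type ("# TYPE ..." line) instead of leaving them all
# implicitly typed as gauges. Names below are made-up examples.
METRIC_TYPES = {
    "node_uptime_seconds": "counter",
    "node_memory_used_bytes": "gauge",
}

def render(samples):
    lines = []
    for name, value in samples:
        mtype = METRIC_TYPES.get(name, "untyped")
        lines.append(f"# TYPE {name} {mtype}")
        lines.append(f"{name} {value}")
    return "\n".join(lines)

out = render([("node_uptime_seconds", 12345), ("node_memory_used_bytes", 2048)])
print(out)
```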
On Tue Jul 29, 2025 at 1:15 PM CEST, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller
> ---
There are a couple of tests for LVM and ZFS that seem to fail; the
relevant logs are below.
I've also added some comments inline further below where possible.
==
./run_test_zfspoolplugin.pl
Thanks for taking a look!
Discussed with HD off-list:
- having the justification for the recommendations in the docs is good
- but since the justification is somewhat complex, it is probably not
good to have it directly at the beginning of the new section.
- It might be better to have the recommendations
Am 23.07.25 um 15:00 schrieb Shannon Sterz:
> -->8 snip 8<--
>> -PVE::Tools::file_set_contents($pwfile, "$password\n");
>> +PVE::Tools::file_set_contents($pwfile, "$password\n", undef, 1);
> i know this is pre-existing, but i'd feel more comfortable forcing the
> permissions here rather tha
On Fri, 18 Jul 2025 14:51:14 +0200, Fiona Ebner wrote:
> The 'maxfiles' setting is dropped with Proxmox VE 9, so make having
> the setting configured a proper error rather than just a warning.
>
>
Applied to stable-8 branch, thanks!
This does not really hurt to have even if we do not follow thr
Superseded by:
https://lore.proxmox.com/pve-devel/20250729155227.157120-1-f.we...@proxmox.com/
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On 29/07/2025 13:44, Thomas Lamprecht wrote:
> Am 29.07.25 um 13:35 schrieb Friedrich Weber:
>> Read the MAX_WORKERS value in /etc/default/. If it is not
>> an integer, ignore and warn.
>>
>> Signed-off-by: Friedrich Weber
>> ---
>> src/PVE/APIServer/Utils.pm | 7 +++
>> 1 file changed, 7 ins
From [1]: For pveproxy and pvedaemon, max_workers is currently hardcoded to 3
in PVE::Service::{pveproxy,pvedaemon}. This may not be enough for
automation-heavy workloads that trigger a lot of API requests that are
synchronously handled by pveproxy or pvedaemon, see e.g. #5391. This was also
encou
The number of pvedaemon worker processes is currently hardcoded to 3.
This may not be enough for automation-heavy workloads that trigger a
lot of API requests that are synchronously handled by pvedaemon.
Hence, read /etc/default/pvedaemon when starting pvedaemon and allow
overriding the number of
The number of pveproxy worker processes is currently hardcoded to 3.
This may not be enough for automation-heavy workloads that trigger a
lot of API requests that are synchronously handled by pveproxy.
Hence, allow specifying MAX_WORKERS in /etc/default/pveproxy to
override the number of workers.
Read the MAX_WORKERS value in /etc/default/. If it is not
an integer between 0 and 128, ignore and warn.
The lower limit was chosen because at least one worker process must
exist. The upper limit was chosen because more than 127 worker
processes should not be necessary and a positive impact on per
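A minimal sketch of the described validation, under stated assumptions: the file format (simple KEY=VALUE lines), the function name, and the exact accepted range (interpreting "between 0 and 128" as 1 to 127 inclusive) are guesses for illustration, not the actual Utils.pm logic.

```python
import re

DEFAULT_WORKERS = 3  # assumed fallback, matching the hardcoded default

def read_max_workers(text):
    # Hypothetical parser for an /etc/default/<service> style file:
    # accept MAX_WORKERS only if it is an integer strictly between
    # 0 and 128; otherwise warn and keep the default.
    for line in text.splitlines():
        m = re.match(r'^\s*MAX_WORKERS\s*=\s*(\S+)\s*$', line)
        if not m:
            continue
        value = m.group(1)
        if value.isdigit() and 0 < int(value) < 128:
            return int(value)
        print(f"warning: ignoring invalid MAX_WORKERS value '{value}'")
    return DEFAULT_WORKERS

print(read_max_workers("MAX_WORKERS=16"))  # -> 16
print(read_max_workers("MAX_WORKERS=0"))   # -> 3 (warns)
print(read_max_workers(""))                # -> 3
```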
For pveproxy, add it to the description of settings that can be
adjusted in /etc/default/pveproxy.
For pvedaemon, this is currently the only setting that can be adjusted
in /etc/default/pvedaemon.
Signed-off-by: Friedrich Weber
---
pvedaemon.adoc | 17 +
pveproxy.adoc | 18
On Wed, 23 Jul 2025 13:57:36 +0200, Fiona Ebner wrote:
> The single-letter suffixes are ambiguous and especially in the context
> of disks, the powers of ten are usually used. Proxmox VE uses
> multiples of 1024 however.
>
> This is in preparation to adapt format_size() to prefer the verbose
> suf
Thanks! Applied the following patches to start out, ordering 05/26 first
with a slight fix-up to avoid breaking tests:
[pve-devel] [PATCH storage 01/26] btrfs: remove unnecessary mkpath call
[pve-devel] [PATCH storage 02/26] parse_volname: remove openvz 'rootdir'
case
[pve-devel] [PATCH storage 03
Am 29.07.25 um 15:59 schrieb Fiona Ebner:
>> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
>> index 746a262..222dc76 100644
>> --- a/src/PVE/Storage/Common.pm
>> +++ b/src/PVE/Storage/Common.pm
>> @@ -1,7 +1,6 @@
>> package PVE::Storage::Common;
>>
>> -use strict;
>> -use wa
Am 29.07.25 um 1:16 PM schrieb Wolfgang Bumiller:
> Signed-off-by: Wolfgang Bumiller
> ---
> src/PVE/Storage/Common.pm | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
> index 746a262..222dc76 100644
> --- a/src/PVE/
Am 29.07.25 um 1:16 PM schrieb Wolfgang Bumiller:
> This adds the vm-vol and ct-vol vtypes introduced in the upcoming
> commits.
>
> Signed-off-by: Wolfgang Bumiller
> ---
> src/PVE/Storage/Common.pm | 17 +
> 1 file changed, 17 insertions(+)
>
> diff --git a/src/PVE/Storage/Com
On Tue, 29 Jul 2025 14:11:51 +0200, Shannon Sterz wrote:
> zfs itself does not track the refquota per snapshot so we need to
> handle this ourselves. otherwise rolling back a volume that has been
> resized since the snapshot will retain the new size. this is
> problematic, as it means the value in the
On Wed, 16 Jul 2025 14:46:59 +0200, Fiona Ebner wrote:
> With the switch from QEMU's -drive to -blockdev, it is no longer
> possible to pass along the Ceph 'keyring' option via the QEMU
> command line, as was previously done for externally managed RBD
> storages. For such storages, it is n
Am 29.07.25 um 1:53 PM schrieb Fabian Grünbichler:
> we don't want qcow2 files to reference their backing chains via
> absolute paths, as that makes renaming the base dir or VG of the storage
> impossible. in most places, qemu already allows simply passing a
> filename as backing-file reference, wh
On Sat Jul 26, 2025 at 3:06 AM CEST, Aaron Lauterer wrote:
> +my $fh =
> IO::File->new("/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs", "r");
> +if ($fh) {
> +while (my $childPid = <$fh>) {
> +chomp($childPid);
nit: should be snake_case
> +
Only node name hashes change at this point.
Signed-off-by: Wolfgang Bumiller
---
src/test/cfg2cmd/aio.conf.cmd | 28 +--
src/test/cfg2cmd/bootorder-empty.conf.cmd | 6 ++--
src/test/cfg2cmd/bootorder-legacy.conf.cmd| 6 ++--
src/test/cfg2cmd/bootorder.co
Dear Proxmox VE Development Team,
Greetings!
First of all, thank you very much for your continued efforts and improvements
to Proxmox VE. It has become an essential tool in our daily virtualization
environment, offering great stability, usability, and functionality.
I'm writing to submit a
On Thu, 26 Jun 2025 15:12:12 +0200, Gabriel Goller wrote:
> Add networking.service to the 'After' dependency directive. Guarantees that
> the frr.service will start after the networking.service is done.
>
> We had some issues with data races between FRR and ifupdown [0], mostly
> around the dummy
On Fri Jul 25, 2025 at 12:34 PM CEST, Alexander Zeidler wrote:
> - Start by mentioning the preconfigured Ceph repository and what options
> there are for using Ceph (HCI and external cluster)
> - Link to available installation methods (web-based wizard, CLI tool)
> - Describe when and how to upgr
On Sat Jul 26, 2025 at 3:05 AM CEST, Aaron Lauterer wrote:
> This patch series does a few things. It expands the RRD format for nodes and
> VMs. For all types (nodes, VMs, storage) we adjust the aggregation to align
> them with the way they are done on the Backup Server. Therefore, we have new
>
On July 29, 2025 2:04 pm, Fiona Ebner wrote:
> Am 29.07.25 um 1:53 PM schrieb Fabian Grünbichler:
>> by directly printing the to-be-executed command, instead of copying it,
>> which is error-prone.
>>
>> Signed-off-by: Fabian Grünbichler
>> Reviewed-by: Fiona Ebner
>> ---
>>
>> Notes:
>>
Superseded-by:
https://lore.proxmox.com/pve-devel/20250729121151.159797-1-s.st...@proxmox.com/T/#u
On Tue Jul 29, 2025 at 11:41 AM CEST, Shannon Sterz wrote:
> zfs itself does not track the refquota per snapshot so we need to
> handle this ourselves. otherwise rolling back a volume that has been re
zfs itself does not track the refquota per snapshot, so we need to
handle this ourselves. otherwise rolling back a volume that has been
resized since the snapshot will retain the new size. this is
problematic, as it means the value in the guest config no longer
matches the size of the disk on the s
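The bookkeeping this implies can be modeled as follows. This is an illustrative toy model, not the actual ZFSPoolPlugin code: since ZFS records no refquota per snapshot, the storage layer remembers the value itself at snapshot time and re-applies it after a rollback. The class and method names are invented.

```python
# Toy model of the described bookkeeping (not real ZFS calls):
# remember refquota per snapshot ourselves and restore it on rollback,
# because 'zfs rollback' alone would keep the enlarged refquota.
class Dataset:
    def __init__(self, refquota):
        self.refquota = refquota  # current refquota in bytes
        self.snapshots = {}       # snapshot name -> refquota at that time

    def snapshot(self, name):
        self.snapshots[name] = self.refquota

    def resize(self, new_refquota):
        self.refquota = new_refquota

    def rollback(self, name):
        # restore the remembered value so the guest config and the
        # disk size match again after the rollback
        self.refquota = self.snapshots[name]

vol = Dataset(refquota=10 * 2**30)   # 10 GiB
vol.snapshot("snap1")
vol.resize(20 * 2**30)               # resized after the snapshot
vol.rollback("snap1")
print(vol.refquota // 2**30)         # -> 10
```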
On Sat Jul 26, 2025 at 3:06 AM CEST, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer
> ---
>
> Notes:
> currently it checks for lt 9.0.0~12. should it only be applied to a
> later version, don't forget to adapt the version check!
>
> I tested it by bumping the version to 9.0
Am 29.07.25 um 1:53 PM schrieb Fabian Grünbichler:
> by directly printing the to-be-executed command, instead of copying it,
> which is error-prone.
>
> Signed-off-by: Fabian Grünbichler
> Reviewed-by: Fiona Ebner
> ---
>
> Notes:
> v2: join command instead of fixing manually copied messa
by
https://lore.proxmox.com/pve-devel/20250729115320.579286-1-f.gruenbich...@proxmox.com/T/#t
On July 29, 2025 9:38 am, Fabian Grünbichler wrote:
> we don't want qcow2 files to reference their backing chains via
> absolute paths, as that makes renaming the base dir or VG of the storage
> impossib
Hi Josh,
Am 28.07.25 um 16:43 schrieb Joshua Huber:
> Thanks for creating a Debian bug & cherry-picked MR. Fingers crossed
> the changes flow through into PVE9. :)
FYI: We just uploaded a build of sg3-utils with your patch included
into the pve-test trixie repo, it's version 1.48-2+pmx1.
I could
to avoid the resulting qcow2 file referencing its backing file via an absolute
path, which makes renaming the base of the storage impossible.
Signed-off-by: Fabian Grünbichler
---
Notes:
v2: move logic into its own helper
src/PVE/QemuServer/Blockdev.pm | 24 ++--
1 file
this was copied over from Plugin.pm
Signed-off-by: Fabian Grünbichler
Reviewed-by: Fiona Ebner
---
src/PVE/Storage/LVMPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index c1f5474..5a84e82 100644
--- a/src
we don't want qcow2 files to reference their backing chains via
absolute paths, as that makes renaming the base dir or VG of the storage
impossible. in most places, qemu already allows simply passing a
filename as backing-file reference, which will be interpreted as a
reference relative to the back
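The path manipulation at the heart of this can be sketched in a few lines. This is a hedged illustration of the idea only (the helper name and example paths are made up): store the backing reference relative to the overlay's directory, so renaming the storage's base directory or VG does not break the chain.

```python
import os.path

def backing_file_reference(overlay_path, backing_path):
    # Hypothetical helper: compute the backing-file reference to embed
    # in a qcow2 overlay, relative to the overlay's own directory.
    overlay_dir = os.path.dirname(overlay_path)
    if os.path.dirname(backing_path) == overlay_dir:
        # same directory: a bare filename is enough
        return os.path.basename(backing_path)
    return os.path.relpath(backing_path, overlay_dir)

ref = backing_file_reference(
    "/mnt/store/images/100/vm-100-disk-1.qcow2",
    "/mnt/store/images/100/base-100-disk-0.qcow2",
)
print(ref)  # -> base-100-disk-0.qcow2
```

The relative reference stays valid if "/mnt/store" is later renamed, which is exactly what an absolute path would break.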
otherwise the resulting qcow2 file will contain an absolute path, which makes
renaming the backing VG of the storage impossible.
Signed-off-by: Fabian Grünbichler
Reviewed-by: Fiona Ebner
---
Notes:
v2: drop unused variable
src/PVE/Storage/LVMPlugin.pm | 4 ++--
1 file changed, 2 insertio
otherwise the resulting qcow2 file will contain an absolute path, which makes
changing the backing path of the directory storage impossible.
Signed-off-by: Fabian Grünbichler
Reviewed-by: Fiona Ebner
Tested-by: Fiona Ebner
---
src/PVE/Storage/Plugin.pm | 4 ++--
1 file changed, 2 insertions(+)
by directly printing the to-be-executed command, instead of copying it,
which is error-prone.
Signed-off-by: Fabian Grünbichler
Reviewed-by: Fiona Ebner
---
Notes:
v2: join command instead of fixing manually copied message
src/PVE/Storage/Plugin.pm | 2 +-
1 file changed, 1 insertion(+), 1
On 7/4/25 20:20, Daniel Kral wrote:
>
> diff --git a/src/PVE/HA/Rules.pm b/src/PVE/HA/Rules.pm
> index 3121424..892e7aa 100644
> --- a/src/PVE/HA/Rules.pm
> +++ b/src/PVE/HA/Rules.pm
> @@ -6,6 +6,7 @@ use warnings;
> use PVE::JSONSchema qw(get_standard_option);
> use PVE::Tools;
>
> +use PVE::
Am 29.07.25 um 13:35 schrieb Friedrich Weber:
> Read the MAX_WORKERS value in /etc/default/. If it is not
> an integer, ignore and warn.
>
> Signed-off-by: Friedrich Weber
> ---
> src/PVE/APIServer/Utils.pm | 7 +++
> 1 file changed, 7 insertions(+)
>
> diff --git a/src/PVE/APIServer/Utils.
Read the MAX_WORKERS value in /etc/default/. If it is not
an integer, ignore and warn.
Signed-off-by: Friedrich Weber
---
src/PVE/APIServer/Utils.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/src/PVE/APIServer/Utils.pm b/src/PVE/APIServer/Utils.pm
index 1430c98..f2c4892 100644
--
Signed-off-by: Wolfgang Bumiller
---
src/PVE/Storage/BTRFSPlugin.pm | 25
src/PVE/Storage/ESXiPlugin.pm| 2 +-
src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
src/PVE/Storage/ISCSIPlugin.pm | 2 +-
src/PVE/Storage/LVMPlugin.pm | 35 -
src/P
Signed-off-by: Wolfgang Bumiller
---
src/PVE/Storage/BTRFSPlugin.pm | 12 ++--
src/PVE/Storage/LVMPlugin.pm | 11 +--
src/PVE/Storage/Plugin.pm| 26 ++
src/PVE/Storage/RBDPlugin.pm | 11 +--
src/PVE/Storage/ZFSPoolPlugin.pm | 11 ++
Signed-off-by: Wolfgang Bumiller
---
src/PVE/API2/Qemu.pm | 16 +---
src/PVE/QemuMigrate.pm | 3 ++-
src/PVE/QemuServer.pm | 6 --
3 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 76883a5b..deb2eae8 100644
---
Signed-off-by: Wolfgang Bumiller
---
src/PVE/API2/LXC.pm | 9 +++--
src/PVE/LXC.pm | 20
2 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index a56c441..aa11fc6 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/
This currently breaks the tests.
The accompanying test case fixes are applied automatically via shell
commands in the next commit.
Signed-off-by: Wolfgang Bumiller
---
src/test/MigrationTest/QmMock.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/test/MigrationTest/QmMock.
Signed-off-by: Wolfgang Bumiller
---
src/test/run_qemu_img_convert_tests.pl | 53 ++
1 file changed, 38 insertions(+), 15 deletions(-)
diff --git a/src/test/run_qemu_img_convert_tests.pl
b/src/test/run_qemu_img_convert_tests.pl
index 2acbbef4..cfb1586f 100755
--- a/src/t
Signed-off-by: Wolfgang Bumiller
---
src/test/list_volumes_test.pm | 73 --
src/test/parse_volname_test.pm | 34 --
src/test/path_to_volume_id_test.pm | 27 +++
src/test/run_test_lvmplugin.pl | 11 +++--
src/test/run_test_zfspoolplugin.
Signed-off-by: Wolfgang Bumiller
---
src/PVE/Storage/BTRFSPlugin.pm | 11 +-
src/PVE/Storage/LVMPlugin.pm | 12 ++-
src/PVE/Storage/LvmThinPlugin.pm | 62
src/PVE/Storage/Plugin.pm| 12 ++-
src/PVE/Storage/RBDPlugin.pm | 13 +--
s
Signed-off-by: Wolfgang Bumiller
---
src/PVE/Storage/BTRFSPlugin.pm | 12 ++--
src/PVE/Storage/ESXiPlugin.pm| 2 +-
src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
src/PVE/Storage/ISCSIPlugin.pm | 2 +-
src/PVE/Storage/LVMPlugin.pm | 2 +-
src/PVE/Storage/LvmT