Re: [pve-devel] [RFC container/firewall/manager/proxmox-firewall/qemu-server 00/37] proxmox firewall nftables implementation

2024-04-10 Thread Stefan Hanreich
On 4/10/24 12:25, Lukas Wagner wrote: > Did a relatively shallow review of the Rust parts, digging deeper only into > a smaller subset of the code. > Some aspects where I see room for improvement are mostly documentation, > as Max already mentioned, and some more automated testing. I think it

[pve-devel] applied-series: [PATCH stable-7 qemu 1/2] update patches and submodule to QEMU stable 7.2.10

2024-04-10 Thread Thomas Lamprecht
Am 10/04/2024 um 15:13 schrieb Fiona Ebner: > Many stable fixes came in since the last bump, a few of which were > actually already present. Notable ones not yet present include a few > guest-triggerable assert fixes, some AHCI/IDE fixes (including the fix > for bug #2784), TCG fixes for i386 and

[pve-devel] applied: [PATCH qemu-server v2 1/2] fix #5363: cloudinit: make creation of scsi cloudinit discs possible again

2024-04-10 Thread Thomas Lamprecht
Am 10/04/2024 um 13:17 schrieb Hannes Duerr: > Upon obtaining the device type, a check is performed to determine if it > is a CD drive. It is important to note that Cloudinit drives are always > assigned as CD drives. If the drive has not yet been allocated, the test > will fail due to the unset

[pve-devel] applied: [PATCH kernel] add apparmor patch to fix recvmsg returning EINVAL

2024-04-10 Thread Thomas Lamprecht
Am 10/04/2024 um 14:17 schrieb Wolfgang Bumiller: > With apparmor 4, when recvmsg() calls are checked by the apparmor LSM > they will always return EINVAL. > This causes very weird issues when apparmor profiles are in use, and a > lot of networking issues in containers (which are always using >

[pve-devel] [PATCH stable-7 qemu 2/2] pick up some extra fixes from upcoming 7.2.11

2024-04-10 Thread Fiona Ebner
In particular, the i386 patches fix an issue that was newly introduced in 7.2.10 and the LSI patches improve the reentrancy fix. The others also sounded relevant and nice to have. Signed-off-by: Fiona Ebner --- ...lign-exposed-ID-registers-with-Linux.patch | 273 ++

[pve-devel] [PATCH qemu-server 5/6] start: handle pool limits

2024-04-10 Thread Fabian Grünbichler
if the start is not part of an incoming migration, check the VM against its pool's run limit. Signed-off-by: Fabian Grünbichler --- PVE/QemuServer.pm | 13 + 1 file changed, 13 insertions(+) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 4acf2fe1..de566502 100644 ---

[pve-devel] [PATCH qemu-server 4/6] update/hotplug: handle pool limits

2024-04-10 Thread Fabian Grünbichler
if the new value is higher than the old one, check against limits. if the old one is higher, then the change is always okay, to support reducing the usage in steps spread over multiple guests. Signed-off-by: Fabian Grünbichler --- PVE/API2/Qemu.pm | 22 ++
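The rule described above (only enforce the pool limit when a value increases, never when it decreases) can be sketched as follows. This is an illustrative Python sketch of the logic, not the actual Perl implementation; the function name and parameters are made up for this example.

```python
def check_update_allowed(old, new, used, limit):
    """Enforce the pool limit only when a value increases; decreases are
    always allowed, so usage can be reduced step-wise across guests.

    old/new: the guest's old and new configured value
    used:    current total usage across the pool (includes `old`)
    limit:   the pool's configured limit
    """
    if new <= old:
        # reductions never violate the limit
        return True
    # only the delta counts against the remaining headroom
    return used - old + new <= limit
```

For example, shrinking one guest is always allowed even when the pool is currently at its limit, which is what makes multi-guest reductions possible.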

[pve-devel] [PATCH qemu-server 1/6] config: add pool usage helper

2024-04-10 Thread Fabian Grünbichler
determining the usage values for the current config. pending values are taken into account only if they are higher than the current value, else it would be possible to easily circumvent config limits by setting non-hotpluggable pending values. Signed-off-by: Fabian Grünbichler ---
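The "pending counts only when higher" rule above boils down to taking the maximum of the current and pending value per resource. A minimal sketch, assuming a hypothetical helper name (the real code is a Perl helper in the guest config module):

```python
def effective_value(current, pending):
    """Return the value that counts against the pool limit.

    Pending values are considered only when they exceed the current value,
    so a non-hotpluggable lower pending value cannot be used to dodge the
    limit while the guest still runs with the higher current value.
    """
    if pending is None:
        return current
    return max(current, pending)
```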

[pve-devel] [PATCH manager 1/4] api: pools: add limits management

2024-04-10 Thread Fabian Grünbichler
allow setting/updating limits, and return them when querying individual pools. Signed-off-by: Fabian Grünbichler --- Notes: requires bumped pve-access-control PVE/API2/Pool.pm | 36 1 file changed, 32 insertions(+), 4 deletions(-) diff --git

[pve-devel] [PATCH guest-common 1/1] helpers: add pool limit/usage helpers

2024-04-10 Thread Fabian Grünbichler
one for combining the per-node broadcasted values, one for checking a pool's limit, and one specific helper for checking guest-related actions such as starting a VM. Signed-off-by: Fabian Grünbichler --- src/PVE/GuestHelpers.pm | 190 1 file changed, 190

[pve-devel] [PATCH container 6/7] rollback: handle pool limits

2024-04-10 Thread Fabian Grünbichler
by checking the snapshot conf values as if the CT was newly created. Signed-off-by: Fabian Grünbichler --- src/PVE/API2/LXC/Snapshot.pm | 7 +++ 1 file changed, 7 insertions(+) diff --git a/src/PVE/API2/LXC/Snapshot.pm b/src/PVE/API2/LXC/Snapshot.pm index 0999fbc..37a02a6 100644 ---

[pve-devel] [PATCH container 2/7] status: add pool usage fields

2024-04-10 Thread Fabian Grünbichler
these are similar to existing ones, but with slightly different semantics. Signed-off-by: Fabian Grünbichler --- src/PVE/LXC.pm | 29 + 1 file changed, 29 insertions(+) diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm index 88a9d6f..78c0e18 100644 --- a/src/PVE/LXC.pm

[pve-devel] [PATCH access-control 1/1] pools: define resource limits

2024-04-10 Thread Fabian Grünbichler
and handle them when parsing/writing user.cfg Signed-off-by: Fabian Grünbichler --- src/PVE/AccessControl.pm | 42 +-- src/test/parser_writer.pl | 14 ++--- 2 files changed, 47 insertions(+), 9 deletions(-) diff --git a/src/PVE/AccessControl.pm

[pve-devel] [RFC qemu-server/pve-container/.. 0/19] pool resource limits

2024-04-10 Thread Fabian Grünbichler
high level description: VM/CT vmstatus returns new fields for configured and running "usage" values, these are then broadcast by pvestatd on each node via KV. helpers in guest-common check those limits. the pool API returns limits and usage, and allows setting the limits
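The flow described in this cover letter (per-node usage values broadcast via KV, combined, then checked against the pool's limits) can be sketched like this. This is a hypothetical Python illustration of the data flow only; the field names and layout are assumptions, not the actual Perl helpers from guest-common.

```python
def pool_usage(broadcast):
    """Combine per-node KV-broadcast usage values into one pool total.

    broadcast: {node: {vmid: {"mem": ..., "cpu": ...}}} (assumed layout)
    """
    total = {"mem": 0, "cpu": 0}
    for node_data in broadcast.values():
        for guest in node_data.values():
            for key in total:
                total[key] += guest.get(key, 0)
    return total

def check_pool_limit(usage, limits):
    """Return the list of limit keys that are exceeded (empty if ok)."""
    return [k for k, lim in limits.items() if usage.get(k, 0) > lim]
```

A guest-related action (e.g. starting a VM) would then be rejected if adding its values to the combined usage makes `check_pool_limit` non-empty.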

[pve-devel] [PATCH qemu-server 6/6] rollback: handle pool limits

2024-04-10 Thread Fabian Grünbichler
by checking the snapshot conf values as if the VM was newly created. Signed-off-by: Fabian Grünbichler --- PVE/API2/Qemu.pm | 8 1 file changed, 8 insertions(+) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index 6f104faa..657c9cb8 100644 --- a/PVE/API2/Qemu.pm +++

[pve-devel] [PATCH qemu-server 3/6] create/restore/clone: handle pool limits

2024-04-10 Thread Fabian Grünbichler
as early as possible, to avoid having to undo expensive work or allowing a window for limit exhaustion. Signed-off-by: Fabian Grünbichler --- PVE/API2/Qemu.pm | 24 1 file changed, 24 insertions(+) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index

[pve-devel] [PATCH qemu-server 2/6] vmstatus: add usage values for pool limits

2024-04-10 Thread Fabian Grünbichler
these are separate from the existing ones to allow changes on either end without side-effects, since the semantics are not quite the same. the conf values incorporate pending values (if higher than the current config value), and avoid clamping. the run values are currently identical to the

[pve-devel] [PATCH manager 4/4] ui: add pool limits and usage

2024-04-10 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler --- Notes: this is very "bare", obviously we'd want - a nicer grid/.. display of usage - a way to edit the limits I am not yet sure how to integrate this nicely, and wanted to get feedback on the rest first.

[pve-devel] [PATCH manager 3/4] api: return pool usage when queried

2024-04-10 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler --- PVE/API2/Pool.pm | 19 +-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/PVE/API2/Pool.pm b/PVE/API2/Pool.pm index 26ff7742e..9c232a971 100644 --- a/PVE/API2/Pool.pm +++ b/PVE/API2/Pool.pm @@ -6,6 +6,7 @@ use warnings; use

[pve-devel] [PATCH manager 2/4] pvestatd: collect and broadcast pool usage

2024-04-10 Thread Fabian Grünbichler
so that other nodes can query it, both to block changes that would violate the limits and to mark pools which currently violate them accordingly. Signed-off-by: Fabian Grünbichler --- PVE/Service/pvestatd.pm | 59 ++--- 1 file changed, 55 insertions(+), 4

[pve-devel] [PATCH container 7/7] update: handle pool limits

2024-04-10 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler --- src/PVE/API2/LXC/Config.pm | 21 + 1 file changed, 21 insertions(+) diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm index e6c0980..3fb3885 100644 --- a/src/PVE/API2/LXC/Config.pm +++ b/src/PVE/API2/LXC/Config.pm @@

[pve-devel] [PATCH container 5/7] hotplug: handle pool limits

2024-04-10 Thread Fabian Grünbichler
by checking the new values against the running limits. Signed-off-by: Fabian Grünbichler --- src/PVE/LXC/Config.pm | 13 + 1 file changed, 13 insertions(+) diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm index 908d64a..787bcf2 100644 --- a/src/PVE/LXC/Config.pm +++

[pve-devel] [PATCH container 4/7] start: handle pool limits

2024-04-10 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler --- src/PVE/LXC.pm | 8 1 file changed, 8 insertions(+) diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm index 78c0e18..08f4425 100644 --- a/src/PVE/LXC.pm +++ b/src/PVE/LXC.pm @@ -2586,6 +2586,14 @@ sub vm_start { update_lxc_config($vmid, $conf);

[pve-devel] [PATCH container 3/7] create/restore/clone: handle pool limits

2024-04-10 Thread Fabian Grünbichler
as early as possible, to avoid big cleanups because of limit exhaustion. Signed-off-by: Fabian Grünbichler --- src/PVE/API2/LXC.pm | 25 + 1 file changed, 25 insertions(+) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index fd42ccf..80bac3d 100644 ---

[pve-devel] [PATCH container 1/7] config: add pool usage helper

2024-04-10 Thread Fabian Grünbichler
to avoid repeating those calculations all over the place. Signed-off-by: Fabian Grünbichler --- src/PVE/LXC/Config.pm | 35 +++ 1 file changed, 35 insertions(+) diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm index 5ac1446..908d64a 100644 ---

[pve-devel] [PATCH kernel] add apparmor patch to fix recvmsg returning EINVAL

2024-04-10 Thread Wolfgang Bumiller
With apparmor 4, when recvmsg() calls are checked by the apparmor LSM they will always return EINVAL. This causes very weird issues when apparmor profiles are in use, and a lot of networking issues in containers (which are always using apparmor). When coming from sys_recvmsg, msg->msg_namelen is

Re: [pve-devel] [PATCH v5 pve-storage, pve-manager 00/11] Fix #4759: Configure Permissions for ceph-crash.service

2024-04-10 Thread Friedrich Weber
On 02/04/2024 16:55, Max Carrara wrote: > Fix #4759: Configure Permissions for ceph-crash.service - Version 5 > === Thanks for the v4! Consider this Tested-by: Friedrich Weber Details: - like Maximiliano, removed the version

Re: [pve-devel] [PATCH qemu v2 07/21] PVE backup: add fleecing option

2024-04-10 Thread Wolfgang Bumiller
On Wed, Apr 10, 2024 at 11:30:59AM +0200, Fiona Ebner wrote: > Am 08.04.24 um 14:45 schrieb Wolfgang Bumiller: > > On Fri, Mar 15, 2024 at 11:24:48AM +0100, Fiona Ebner wrote: > >> @@ -581,6 +682,14 @@ static void create_backup_jobs_bh(void *opaque) { > >> aio_co_enter(data->ctx, data->co); >

Re: [pve-devel] [PATCH manager v2 13/21] api: backup/vzdump: add permission check for fleecing storage

2024-04-10 Thread Wolfgang Bumiller
On Wed, Apr 10, 2024 at 11:57:37AM +0200, Fiona Ebner wrote: > Am 08.04.24 um 10:47 schrieb Wolfgang Bumiller: > > On Fri, Mar 15, 2024 at 11:24:54AM +0100, Fiona Ebner wrote: > >> @@ -52,6 +52,12 @@ sub assert_param_permission_common { > >> if (grep { defined($param->{$_}) } qw(bwlimit

Re: [pve-devel] [PATCH qemu-server 1/1] fix #5365: drive: add drive_is_cloudinit check to get_scsi_devicetype

2024-04-10 Thread Hannes Dürr
On 4/10/24 11:34, Thomas Lamprecht wrote: This is not bug #5365 [0] (which is about a ceph device class UX improvement) but #5363 [1]. [0]: https://bugzilla.proxmox.com/show_bug.cgi?id=5365 [1]: https://bugzilla.proxmox.com/show_bug.cgi?id=5363 Good catch, thank you! I mostly noticed

[pve-devel] [PATCH qemu-server v2 2/2] drive: improve readability to get_scsi_device_type

2024-04-10 Thread Hannes Duerr
Signed-off-by: Hannes Duerr --- PVE/API2/Qemu.pm| 2 +- PVE/QemuServer.pm | 2 +- PVE/QemuServer/Drive.pm | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index 497987f..dc44dee 100644 --- a/PVE/API2/Qemu.pm +++

[pve-devel] [PATCH qemu-server v2 1/2] fix #5363: cloudinit: make creation of scsi cloudinit discs possible again

2024-04-10 Thread Hannes Duerr
Upon obtaining the device type, a check is performed to determine if it is a CD drive. It is important to note that Cloudinit drives are always assigned as CD drives. If the drive has not yet been allocated, the test will fail due to the unset cd attribute. To avoid this, an explicit check is now

[pve-devel] [PATCH docs v2 1/2] qm: resource mapping: add description for `mdev` option

2024-04-10 Thread Dominik Csapak
in a new section about additional options Signed-off-by: Dominik Csapak --- qm.adoc | 13 + 1 file changed, 13 insertions(+) diff --git a/qm.adoc b/qm.adoc index 1170dd1..c146ce9 100644 --- a/qm.adoc +++ b/qm.adoc @@ -1734,6 +1734,19 @@ To create mappings `Mapping.Modify` on

[pve-devel] [PATCH manager v2 5/5] fix #5175: ui: allow configuring and live migration of mapped pci resources

2024-04-10 Thread Dominik Csapak
if the hardware/driver is capable, the admin can now mark a pci device as 'live-migration-capable', which then tries enabling live migration for such devices. mark it as experimental when configuring and in the migrate window Signed-off-by: Dominik Csapak --- www/manager6/window/Migrate.js

[pve-devel] [PATCH qemu-server v2 08/10] check_local_resources: add more info per mapped device and return as hash

2024-04-10 Thread Dominik Csapak
such as the mapping name and if it's marked for live-migration (pci only) Signed-off-by: Dominik Csapak --- PVE/API2/Qemu.pm | 2 +- PVE/QemuMigrate.pm | 7 --- PVE/QemuServer.pm | 17 ++--- 3 files changed, 15 insertions(+), 11 deletions(-) diff --git a/PVE/API2/Qemu.pm

[pve-devel] [PATCH qemu-server v2 09/10] api: enable live migration for marked mapped pci devices

2024-04-10 Thread Dominik Csapak
They have to be marked as 'live-migration-capable' in the mapping config, and the driver and qemu must support it. For the gui checks, we now return the whole object of the mapped resources, which includes info like the name and if it's marked as live-migration capable. (while deprecating the old

[pve-devel] [PATCH manager v2 1/5] mapping: pci: include mdev in config checks

2024-04-10 Thread Dominik Csapak
by also providing the global config in assert_valid, and by also adding the mdev config in the 'toCheck' object in the gui Signed-off-by: Dominik Csapak --- PVE/API2/Cluster/Mapping/PCI.pm | 2 +- www/manager6/dc/PCIMapView.js | 5 + 2 files changed, 6 insertions(+), 1 deletion(-) diff

[pve-devel] [PATCH docs v2 2/2] qm: resource mapping: document `live-migration-capable` setting

2024-04-10 Thread Dominik Csapak
Signed-off-by: Dominik Csapak --- qm.adoc | 6 ++ 1 file changed, 6 insertions(+) diff --git a/qm.adoc b/qm.adoc index c146ce9..c77cb7b 100644 --- a/qm.adoc +++ b/qm.adoc @@ -1746,6 +1746,12 @@ Currently there are the following options: the mapping, the mediated device will be created on

[pve-devel] [PATCH manager v2 2/5] bulk migrate: improve precondition checks

2024-04-10 Thread Dominik Csapak
this now takes into account the 'not_allowed_nodes' hash we get from the api call. With that, we can now limit the 'local_resources' check for online vms only, as for offline guests, the 'unavailable-resources' hash already includes mapped devices that don't exist on the target node. This now

[pve-devel] [PATCH qemu-server v2 02/10] pci: mapping: move implementation of find_on_current_node here

2024-04-10 Thread Dominik Csapak
this was the only user, and it's easy enough Signed-off-by: Dominik Csapak --- PVE/QemuServer/PCI.pm | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm index 1673041b..7ff9cad7 100644 --- a/PVE/QemuServer/PCI.pm +++

[pve-devel] [PATCH qemu-server v2 04/10] stop cleanup: remove unnecessary tpmstate cleanup

2024-04-10 Thread Dominik Csapak
tpmstate0 is already included in `get_vm_volumes`, and our only storage plugin that has unmap_volume implemented is the RBDPlugin, where we call unmap in `deactivate_volume`. So it's already unmapped by the `deactivate_volumes` calls above. Signed-off-by: Dominik Csapak --- PVE/QemuServer.pm |

[pve-devel] [PATCH qemu-server v2 03/10] pci: mapping: check mdev config against hardware

2024-04-10 Thread Dominik Csapak
by giving the mapping config to assert_valid, not only the specific mapping Signed-off-by: Dominik Csapak --- PVE/QemuServer/PCI.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/QemuServer/PCI.pm b/PVE/QemuServer/PCI.pm index 7ff9cad7..6ba43ee8 100644 ---

[pve-devel] [PATCH qemu-server v2 01/10] usb: mapping: move implementation of find_on_current_node here

2024-04-10 Thread Dominik Csapak
this was the only user, and it's easy enough Signed-off-by: Dominik Csapak --- PVE/QemuServer/USB.pm | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/PVE/QemuServer/USB.pm b/PVE/QemuServer/USB.pm index 49957444..ecd0361d 100644 --- a/PVE/QemuServer/USB.pm +++

[pve-devel] [PATCH guest-common v2 4/5] mapping: pci: add 'live-migration-capable' flag to mappings

2024-04-10 Thread Dominik Csapak
so that we can decide in qemu-server to allow live-migration. The driver and QEMU must be capable of that, and it's the admin's responsibility to know and configure that. Mark the option as experimental in the description. Signed-off-by: Dominik Csapak --- src/PVE/Mapping/PCI.pm | 8 1

[pve-devel] [PATCH qemu-server v2 10/10] api: include not mapped resources for running vms in migrate preconditions

2024-04-10 Thread Dominik Csapak
so that we can show a proper warning in the migrate dialog and check it in the bulk migrate precondition check. the unavailable_storages should be the same as before, but we now always return allowed_nodes too. also add a note that we want to redesign the return values here, to make * the api

[pve-devel] [PATCH manager v2 4/5] ui: adapt migration window to precondition api change

2024-04-10 Thread Dominik Csapak
we now return the 'allowed_nodes'/'not_allowed_nodes' also if the vm is running, when it has mapped resources. So do that checks independently so that the user has instant feedback where those resources exist. Signed-off-by: Dominik Csapak --- www/manager6/window/Migrate.js | 26

[pve-devel] [PATCH qemu-server v2 06/10] migrate: call vm_stop_cleanup after stopping in phase3_cleanup

2024-04-10 Thread Dominik Csapak
we currently only call deactivate_volumes, but we actually want to call the whole vm_stop_cleanup, since that is not invoked by the vm_stop above (we cannot parse the config anymore) and might do other cleanups we also want to do (like mdev cleanup). For this to work properly we have to clone the

[pve-devel] [PATCH manager v2 3/5] bulk migrate: include checks for live-migratable local resources

2024-04-10 Thread Dominik Csapak
those should be able to migrate even for online vms. If the mapping does not exist on the target node, that will be caught further down anyway. Signed-off-by: Dominik Csapak --- PVE/API2/Nodes.pm | 13 +++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git

[pve-devel] [PATCH qemu-server v2 05/10] vm_stop_cleanup: add noerr parameter

2024-04-10 Thread Dominik Csapak
and set it on all current users Signed-off-by: Dominik Csapak --- PVE/CLI/qm.pm | 2 +- PVE/QemuServer.pm | 13 - 2 files changed, 9 insertions(+), 6 deletions(-) diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm index b105830f..fbc590f5 100755 --- a/PVE/CLI/qm.pm +++ b/PVE/CLI/qm.pm

[pve-devel] [PATCH qemu-server v2 07/10] pci: set 'enable-migration' to on for live-migration marked mapped devices

2024-04-10 Thread Dominik Csapak
the default is 'auto', but for those which are marked as capable for live migration, we want to explicitly enable that, so we get an early error on start if the driver does not support that. Signed-off-by: Dominik Csapak --- PVE/QemuServer/PCI.pm | 9 - 1 file changed, 8 insertions(+),

[pve-devel] [PATCH guest-common/qemu-server/manager/docs v2] implement experimental vgpu live migration

2024-04-10 Thread Dominik Csapak
and some useful cleanups this series replaces both the initial pci live migration and the fixup series[0][1] This is implemented for mapped resources. This requires driver and hardware support, but aside from nvidia vgpus there don't seem to be many drivers (if any) that do support that. qemu

[pve-devel] [PATCH guest-common v2 5/5] mapping: remove find_on_current_node

2024-04-10 Thread Dominik Csapak
they only have one user each (where we can inline the implementation). It's easy enough to recreate should we need to. Signed-off-by: Dominik Csapak --- src/PVE/Mapping/PCI.pm | 10 -- src/PVE/Mapping/USB.pm | 9 - 2 files changed, 19 deletions(-) diff --git

[pve-devel] [PATCH guest-common v2 3/5] mapping: pci: check the mdev configuration on the device too

2024-04-10 Thread Dominik Csapak
but that lives in the 'global' part of the mapping config, not in a specific mapping. To check that, add a new (optional) parameter to assert_valid that includes said global config. by making that check optional, we don't break older users of that function. Signed-off-by: Dominik Csapak ---

[pve-devel] [PATCH guest-common v2 2/5] mapping: pci: rework properties check

2024-04-10 Thread Dominik Csapak
refactors the actual checking out to its own sub, so we can reuse it later Signed-off-by: Dominik Csapak --- src/PVE/Mapping/PCI.pm | 43 +- 1 file changed, 26 insertions(+), 17 deletions(-) diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm

[pve-devel] [PATCH guest-common v2 1/5] mapping: pci: fix missing description/default for mdev

2024-04-10 Thread Dominik Csapak
Signed-off-by: Dominik Csapak --- src/PVE/Mapping/PCI.pm | 2 ++ 1 file changed, 2 insertions(+) diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm index 19ace98..725e106 100644 --- a/src/PVE/Mapping/PCI.pm +++ b/src/PVE/Mapping/PCI.pm @@ -100,8 +100,10 @@ my $defaultData = {

Re: [pve-devel] [PATCH qemu-server 3/3] api: include not mapped resources for running vms in migrate preconditions

2024-04-10 Thread Fiona Ebner
Am 02.04.24 um 11:39 schrieb Dominik Csapak: > On 3/22/24 17:19, Fiona Ebner wrote: >> Am 20.03.24 um 13:51 schrieb Dominik Csapak: >>> so that we can show a proper warning in the migrate dialog and check it >>> in the bulk migrate precondition check >>> >>> the unavailable_storages and

Re: [pve-devel] [RFC container/firewall/manager/proxmox-firewall/qemu-server 00/37] proxmox firewall nftables implementation

2024-04-10 Thread Lukas Wagner
On 2024-04-02 19:15, Stefan Hanreich wrote: > ## Introduction > This RFC provides a drop-in replacement for the current pve-firewall package > that is based on Rust and nftables. > > It consists of three crates: > * proxmox-ve-config > for parsing firewall and guest configuration files, as

[pve-devel] applied-series: [PATCH widget-toolkit 0/2] notification: set 'Remove' button text to 'Reset to default' for built-ins

2024-04-10 Thread Thomas Lamprecht
Am 14/12/2023 um 10:48 schrieb Lukas Wagner: > Deleting a built-in target/matcher does not remove it, but resets it > to its default settings. This was not really obvious from the UI. > This patch changes the 'Remove' button text based on the > selected target/matcher. If it is a built-in, the

[pve-devel] applied: [PATCH widget-toolkit v2] i18n: mark strings as translatable

2024-04-10 Thread Thomas Lamprecht
Am 07/12/2023 um 09:18 schrieb Maximiliano Sandoval: > Note that N/A is already translatable in other places. > > Signed-off-by: Maximiliano Sandoval > --- > Differences from v2: > - Translate the invalid subscription key message, this string is also in > two more places in pve-manager. This

Re: [pve-devel] [PATCH guest-common 1/2] mapping: pci: add 'live-migration-capable' flag to mappings

2024-04-10 Thread Fiona Ebner
Am 02.04.24 um 11:30 schrieb Dominik Csapak: >>> diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm >>> index 19ace98..0866175 100644 >>> --- a/src/PVE/Mapping/PCI.pm >>> +++ b/src/PVE/Mapping/PCI.pm >>> @@ -100,8 +100,16 @@ my $defaultData = { >>>   maxLength => 4096, >>>  

Re: [pve-devel] [PATCH manager v2 13/21] api: backup/vzdump: add permission check for fleecing storage

2024-04-10 Thread Fiona Ebner
Am 08.04.24 um 10:47 schrieb Wolfgang Bumiller: > On Fri, Mar 15, 2024 at 11:24:54AM +0100, Fiona Ebner wrote: >> @@ -52,6 +52,12 @@ sub assert_param_permission_common { >> if (grep { defined($param->{$_}) } qw(bwlimit ionice performance)) { >> $rpcenv->check($user, "/", [ 'Sys.Modify'

Re: [pve-devel] [PATCH qemu-server 1/1] fix #5365: drive: add drive_is_cloudinit check to get_scsi_devicetype

2024-04-10 Thread Thomas Lamprecht
This is not bug #5365 [0] (which is about a ceph device class UX improvement) but #5363 [1]. [0]: https://bugzilla.proxmox.com/show_bug.cgi?id=5365 [1]: https://bugzilla.proxmox.com/show_bug.cgi?id=5363 I mostly noticed because I had to look at what this is actually about, IMO the subject could be

Re: [pve-devel] [PATCH qemu v2 07/21] PVE backup: add fleecing option

2024-04-10 Thread Fiona Ebner
Am 08.04.24 um 14:45 schrieb Wolfgang Bumiller: > On Fri, Mar 15, 2024 at 11:24:48AM +0100, Fiona Ebner wrote: >> @@ -581,6 +682,14 @@ static void create_backup_jobs_bh(void *opaque) { >> aio_co_enter(data->ctx, data->co); >> } >> >> +/* >> + * EFI disk and TPM state are small and it's

Re: [pve-devel] [PATCH pve-storage v4 2/2] fix #1611: implement import of base-images for LVM-thin Storage

2024-04-10 Thread Hannes Dürr
On 1/30/24 11:00, Fabian Grünbichler wrote: On December 19, 2023 3:03 pm, Hannes Duerr wrote: for base images we call the volume_import of the parent plugin and pass it as vm-image instead of base-image, then convert it back as base-image Signed-off-by: Hannes Duerr ---

[pve-devel] applied: [PATCH widget-toolkit v4] window: edit: avoid sharing custom config objects between subclasses

2024-04-10 Thread Thomas Lamprecht
Am 09/04/2024 um 10:16 schrieb Friedrich Weber: > Currently, `Proxmox.window.Edit` initializes `extraRequestParams` and > `submitOptions` to two objects that, if not overwritten, are shared > between all instances of subclasses. This bears the danger of > modifying the shared object in a subclass

[pve-devel] applied: [PATCH widget-toolkit] dark-mode: set intentionally black icons to `$icon-color`

2024-04-10 Thread Thomas Lamprecht
Am 16/10/2023 um 18:28 schrieb Stefan Sterz: > some icons intentionally use black as their color in the light theme. > this includes the little pencil and check mark icon in the acme > overview. change their color to the regular dark-mode icon-color. for > this to work the filter inversion needed

Re: [pve-devel] [PATCH pve-storage] esxi: add mapping for windows server 2016/2019

2024-04-10 Thread Thomas Lamprecht
Am 09/04/2024 um 12:56 schrieb Stefan Sterz: > previously these were mapped to the linux 2.6 default > > Signed-off-by: Stefan Sterz > --- > src/PVE/Storage/ESXiPlugin.pm | 2 ++ > 1 file changed, 2 insertions(+) > > diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm >

Re: [pve-devel] [PATCH widget-toolkit] dark-mode: set intentionally black icons to `$icon-color`

2024-04-10 Thread Stefan Sterz
On Mon Oct 16, 2023 at 6:28 PM CEST, Stefan Sterz wrote: > some icons intentionally use black as their color in the light theme. > this includes the little pencil and check mark icon in the acme > overview. change their color to the regular dark-mode icon-color. for > this to work the filter