[pve-devel] applied-series: partially-applied: [PATCH many v9 00/13] notifications: notification metadata matching improvements

2024-09-23 Thread Thomas Lamprecht
On 23/09/2024 at 13:27, Lukas Wagner wrote:
> pve-manager has been bumped in the meantime, I guess we could now merge the
> remaining patches for pve-docs and proxmox-widget-toolkit?
> They still apply cleanly and a quick test also showed that everything still
> works as expected.

thanks for the reminder, applied the remaining docs and wtk patches now.





Bug#1082398: libtemplate-perl: please upgrade to newer upstream release

2024-09-20 Thread Thomas Lamprecht
I only saw after reporting this that another user was faster, so
this is a duplicate of #1081785 [0] and can be closed, sorry for
the noise!

[0]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1081785

- Thomas



Bug#1082398: libtemplate-perl: please upgrade to newer upstream release

2024-09-20 Thread Thomas Lamprecht
Package: libtemplate-perl
Version: 2.27-1
Severity: wishlist
X-Debbugs-Cc: Benj. Mako Hill 

Dear Maintainer,

As discussed a bit over a week ago, I'm filing a wishlist request for a
newer Template Toolkit package. Some background for the record:

Currently 2.27-1 is in the archive for all releases from old-old-stable
up to unstable; upstream released that version in 2016. While it seems
that this has not caused any compatibility fallout for an impressive
amount of time, showing how stable this ecosystem has become, that seems
to be slowly changing now.
E.g., the recently bumped Bugzilla version 5.2 requires at least version
3.008, and it's likely that other dependents that are still actively
developed will also require a newer version sooner or later.
If the update were done now, it would quite likely make the cut for
Debian Trixie next year, and it would also make backporting it to
bookworm slightly easier (e.g. for us).

thanks for your consideration,
Thomas



[pve-devel] applied-series: [PATCH manager 1/2] vzdump jobs: make job ID a standard option

2024-09-20 Thread Thomas Lamprecht
On 20/09/2024 at 11:05, Fabian Grünbichler wrote:
> and put it into PVE::VZDump because there is a cycle between
> 
> PVE::Jobs::VZDump, PVE::API2::VZDump and PVE::API2::Backups
> 
> that prevents any of those containing it for now.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
>  PVE/API2/Backup.pm | 18 ++
>  PVE/VZDump.pm  | 10 ++
>  2 files changed, 16 insertions(+), 12 deletions(-)
> 
>

applied both patches, thanks!
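
For readers not familiar with the pattern: a "standard option" is registered
once with PVE::JSONSchema and then referenced by name wherever it is needed,
roughly like the following sketch (the option name and description here are
illustrative, not necessarily what the applied patch uses):

    PVE::JSONSchema::register_standard_option('pve-backup-jobid', {
        type => 'string',
        format => 'pve-configid',
        description => "The vzdump backup job ID.",
    });

    # later, in an API method's parameter schema:
    id => get_standard_option('pve-backup-jobid'),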




Re: [pve-devel] [PATCH manager] ui: backup job overview: add filter field

2024-09-20 Thread Thomas Lamprecht
On 20/09/2024 at 08:09, Dominik Csapak wrote:
>> For example, all VMIDs on a node, all VMIDs except the selected, VMIDs in
>> resource pools, ...
> Fair, as that would "really" fix the bug. Just for the record, I decided on
> a 'front-end'-only approach because:
> 
> * fewer API calls/parameters to maintain
> * it probably solves most use cases (and searching inside pools/etc. could
>   also be done here)
> 
> If we'd want to have a more comprehensive vmid <-> backup job lookup,
> I'd probably put it into the VM backup panel itself, or maybe as an
> addition to this filter, as an explicit way to search for a vmid.
> 
> no hard feelings either way though 🙂

FWIW, with the general job system it could also be an option to make a
generic job filter API, where each job plugin can implement a method to
decide what/how to search for (see the sketch below).

That way we could also create a "show backup jobs that guest is included
in" at the virtual guest's backup panel, or even a notice that it isn't
included in any (this could also be done as a separate guest "health check"
window, to avoid the cost of querying this every time the guest is visited
in the UI).
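
A rough sketch of what such a per-plugin hook could look like; note that all
names here are hypothetical, nothing like this exists yet:

    package PVE::Jobs::VZDump; # assumption: plugin layout stays as today

    # hypothetical hook: each job plugin decides itself whether a given
    # guest is covered by one of its job configurations
    sub match_guest {
        my ($class, $job_cfg, $vmid) = @_;
        my $included = $class->included_guests($job_cfg); # hypothetical helper
        return grep { $_ == $vmid } $included->@*;
    }

A generic API endpoint could then iterate over all registered job plugins and
return the jobs for which match_guest is true.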

Anyhow, while I understand why you went for this approach (it surely is
simpler and quicker to implement), I think that users could be confused by
not seeing a job for a VMID because it's not explicitly listed.
Especially once we get a tag-based selection, which would be great to have
soonish, I think that manual VMID selection will become even less popular
than it is now, as it's easier to re-use existing groupings compared to
having to update jobs on every guest creation/destruction.

One possibility could be to add this without VMIDs for now, albeit that's
naturally not ideal as the prime use case is checking in which job, if at
all, a VMID is included.

In summary, I think it's better to go for a "complete" solution here, even
if it means that we have to add some new API endpoints. If an evaluation
shows that the generic job-filter API is feasible, adding that endpoint
might give us quite a good ROI. If it doesn't seem to be easily possible,
we could still go for a purely frontend-based solution.




Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-19 Thread Thomas Lamprecht
On 19/09/2024 at 14:45, Dominik Csapak wrote:
> On 9/19/24 14:01, Thomas Lamprecht wrote:
>> On 19/09/2024 at 11:52, Dominik Csapak wrote:
>>> by default libfuse2 limits writes to 4k size, which means that on writes
>>> bigger than that, we do a whole write cycle for each 4k block that comes
>>> in. To avoid that, add the option 'big_writes' to allow writes bigger
>>> than 4k at once.
>>>
>>> This should improve pmxcfs performance for situations where we often
>>> write large files (e.g. big ha status) and maybe reduce writes to disk.
>>
>> Should? Something like before/after for benchmark numbers, flamegraphs
>> would be really good to have, without those it's rather hard to discuss
>> this, and I'd like to avoid having to do those, or check the inner workings
>> of the affected fuse userspace/kernel code paths here myself.
> 
> well I mean the code change is relatively small and the result is rather clear:

Well, sure, the code change is just setting an option... But the actual change
is abstracted away and would benefit from actually being looked into.

> in the current case we have the following calls from pmxcfs (shortened for e-mail)
> when writing a single 128k block:
> (dd if=... of=/etc/pve/test bs=128k count=1)

Better than nothing, but still no actual numbers (reduced time, reduced write
amplification in combination with sqlite, ...), some basic analysis over
file/write size distribution on a single node and (e.g. three node) cluster, ...
If that's all obvious for you then great, but as already mentioned in the past,
I want actual data in commit messages for such stuff, and I cannot really see a
downside of having such numbers.

Again, as is, I'm not really seeing what there is to discuss; you sent it as
an RFC after all.

> [...] 
> so a factor of 32 less calls to cfs_fuse_write (including memdb_pwrite)

That can be huge or not so big at all, i.e., as mentioned above, it would be
good to measure the impact through some other metrics.

And FWIW, I used bpftrace to count [0] with an unpatched pmxcfs, where I get
the 32 calls to cfs_fuse_write only for a new file; overwriting the existing
file again with the same amount of data (128k) just causes a single call.
I tried using more data (e.g. from 128k initially to 256k or 512k) and it's
always the data size divided by 128k (even if the first file has a different
size).

We do not overwrite existing files often, but rather write to a new file and
then rename, but this is still quite interesting and IMO really shows that
just because something is a +1/-1 line change it doesn't necessarily have to
be trivial and obvious in its effects.

[0]: bpftrace -e 'u:cfs_fuse_write /str(args->path) == "/test"/ {@ = count();} END { print(@) }' -p "$(pidof pmxcfs)"


>>> If we'd change to libfuse3, this would be a non-issue, since that option
>>> got removed and is the default there.
>>
>> I'd prefer that. At least if done with the future PVE 9.0, as I do not think
>> it's a good idea in the middle of a stable release cycle.
> 
> why not this change now, and the rewrite to libfuse3 later? that way we can
> have some improvements now too...

Because I want some actual data and reasoning first, even if it's quite likely
that this improves things Somehow™, I'd like to actually know in what metrics
and by how much (even if just an upper bound due to the benchmark or some
measurement being rather artificial).

I mean, you name the big HA status, so why not measure that for real? Like,
probably among other things, in terms of bytes hitting the block layer, i.e.
the actual backing disk, from those requests, as then we'd know for real if
this can reduce the write load there, not just that it maybe should.




Re: [pve-devel] [RFC PATCH pve-cluster] fix #5728: pmxcfs: allow bigger writes than 4k for fuse

2024-09-19 Thread Thomas Lamprecht
On 19/09/2024 at 11:52, Dominik Csapak wrote:
> by default libfuse2 limits writes to 4k size, which means that on writes
> bigger than that, we do a whole write cycle for each 4k block that comes
> in. To avoid that, add the option 'big_writes' to allow writes bigger
> than 4k at once.
> 
> This should improve pmxcfs performance for situations where we often
> write large files (e.g. big ha status) and maybe reduce writes to disk.

Should? Something like before/after for benchmark numbers, flamegraphs
would be really good to have, without those it's rather hard to discuss
this, and I'd like to avoid having to do those, or check the inner workings
of the affected fuse userspace/kernel code paths here myself.

> 
> If we'd change to libfuse3, this would be a non-issue, since that option
> got removed and is the default there.

I'd prefer that. At least if done with the future PVE 9.0, as I do not think
it's a good idea in the middle of a stable release cycle.

FWIW, some Debian Devs also want to push the fuse 2 to 3 migration forward
[0]. So, sooner or later we have to switch over pmxcfs anyway, and there's
already a POC from Fabian [1].

[0]: https://lists.debian.org/debian-devel/2024/08/msg00177.html
[1]: https://lore.proxmox.com/pve-devel/20220715074742.3387301-1-f.gruenbich...@proxmox.com/







Re: [pve-devel] [RFC qemu/storage/qemu-server/container/manager v2 00/25] backup provider API

2024-09-12 Thread Thomas Lamprecht
On 12/09/2024 at 14:43, Fabian Grünbichler wrote:
> I do wonder whether we want to support the Borg and Example plugins
> though? If not, it might make sense to not ship them (but maybe just
> test them?).

FWIW, we could move them to a separate repository named something like
"pve-storage-plugin-examples" which could also host one or two examples
for the storage plugins (like SSHFS). We can then still decide to actually
support it as, e.g., opt-in package(s) built from that repo.

> there's a pretty tight coupling between storage plugin and backup
> provider plugin - that might lead to some complaints (e.g., I can
> imagine quite a few backup providers that just require some local file
> system for temp storage, and users wondering why they can't just enable
> that for an existing dir storage). it does make some things easier
> though, so I am not sure we need to change that, just wanted to draw
> attention to it.

FWIW, if we get feedback and can extract a base for a common use case, we
should be able to do something like adding a new Perl module implementing
most of that use case, on which such providers can then base their
implementation; see the sketch below.
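
Purely illustrative, under the assumption that the RFC's plugin base class
keeps roughly its current shape; the module and method names here are made up:

    package PVE::BackupProvider::Plugin::LocalDirBase; # hypothetical name

    use strict;
    use warnings;

    use base qw(PVE::BackupProvider::Plugin::Base); # assumed base from the RFC series

    # common case: the provider only needs a local scratch directory for
    # staging the backup data before its own upload step takes over
    sub staging_dir {
        my ($self, $vmid) = @_;
        my $dir = "/var/tmp/backup-provider-$vmid"; # assumption: any local path
        mkdir($dir) or die "mkdir '$dir' failed - $!\n" if !-d $dir;
        return $dir;
    }

    1;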




[pve-devel] Announcing a Public Inbox instance for the Proxmox mailing lists

2024-09-12 Thread Thomas Lamprecht
Hello everyone,

For a few weeks now, we have been running https://lore.proxmox.com/

See the new section in our developer docs wiki for a little bit of
additional detail:
https://pve.proxmox.com/wiki/Developer_Documentation#Public_Inbox

Best regards,
 Thomas





[PVE-User] Announcing a Public Inbox instance for the Proxmox mailing lists

2024-09-12 Thread Thomas Lamprecht
Hello everyone,

For a few weeks now, we have been running https://lore.proxmox.com/

See the new section in our developer docs wiki for a little bit of
additional detail:
https://pve.proxmox.com/wiki/Developer_Documentation#Public_Inbox

Best regards,
 Thomas





[pve-devel] applied: [PATCH storage] api: upload: correctly test for result of unlink

2024-09-12 Thread Thomas Lamprecht
On 29/07/2024 at 16:29, Fiona Ebner wrote:
> It's not enough to check whether $! is set. From "perldoc perlvar":
> 
>> Many system or library calls set "errno" if they fail, to
>> indicate the cause of failure. They usually do not set "errno"
>> to zero if they succeed and may set "errno" to a non-zero value
>> on success. This means "errno", hence $!, is meaningful only
>> *immediately* after a failure:
> 
> To protect against potential issues, check the return value of unlink
> and only check $! if it failed.
> 
> Signed-off-by: Fiona Ebner 
> ---
>  src/PVE/API2/Storage/Status.pm | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
>

applied, thanks!
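
For reference, the pattern being applied boils down to the following (a
minimal sketch, not a verbatim excerpt of Status.pm):

    # wrong: $! may still hold a stale error from an earlier, unrelated call
    # unlink($tmpfile);
    # die "unable to clean up '$tmpfile' - $!\n" if $!;

    # correct: only consult $! when unlink itself reported a failure
    if (!unlink($tmpfile)) {
        die "unable to clean up '$tmpfile' - $!\n";
    }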





[pve-devel] applied: [PATCH container] api: autocomplete rootdir storages for create, move-volume and clone

2024-09-10 Thread Thomas Lamprecht
On 10/09/2024 at 13:23, Daniel Kral wrote:
> Adds a helper subroutine for enumerating storages that are enabled and
> have the content type `rootdir` set, therefore supporting container
> directories.
> 
> The autocompletion is added to the clone command and changed for the
> create and move-volume commands, which previously suggested any storage
> device. This is misleading as these commands will fail for any storage
> that is not configured to store container directories.
> 
> Signed-off-by: Daniel Kral 
> ---
> I have already noticed that the create command has the default value
> 'local' for the storage parameter, which could fail on any installation
> where the local storage is not configured to store rootdirs. This should
> be fixed in a separate patch in my opinion.

good call

>  src/PVE/API2/LXC.pm |  5 +++--
>  src/PVE/LXC.pm  | 14 ++
>  2 files changed, 17 insertions(+), 2 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH container] backup: warn that tar does not honor exclusion pattern with a trailing slash

2024-09-10 Thread Thomas Lamprecht
On 31/05/2024 at 12:07, Fiona Ebner wrote:
> As reported in the community forum [0], for tar, an exclusion pattern
> with a trailing slash will not match a folder with that name. For
> rsync and proxmox-backup-client however, such a pattern will exclude
> a directory with that name, but not a file.
> 
> rsync is used for 'suspend' mode backup and tar is used for all
> non-PBS backups to create the archive. So currently, in the presence
> of an exclusion pattern with a trailing slash, there is inconsistency
> between different backup modes (because for 'suspend' mode, rsync will
> already do the exclusion too) as well as between PBS and non-PBS
> backups.
> 
> There doesn't seem to be a straight-forward way to align the behavior
> for tar with command-line options exactly. The trailing slash can't be
> removed from the pattern, because that would also match files.
> Matching with
>> some/pattern/*
>> some/pattern/.*
> rather than
>> some/pattern/
> gets pretty close, which was suggested by Dominik. Just the empty
> directory is still included.
> 
> In any case, modifying current behavior would be a breaking change, so
> actually aligning the exclusion (more closely) is better done in the
> next major release.
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> One could argue such a change is a bug-fix and so does not need to
> wait until the next major release. I opted for the safer variant for
> now, but happy to go with the aligning already if that is preferred.

A major release may indeed be better for conveying such a change, and it
seems not _that_ frequent an issue, so fine by me to wait.

> 
>  src/PVE/VZDump/LXC.pm | 11 ++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH container v3 1/2] add deny-write option for device passthrough

2024-09-10 Thread Thomas Lamprecht
On 09/09/2024 at 14:50, Filip Schauer wrote:
> Add the deny-write options for device passthrough, to restrict container
> access to devices.
> 
> Signed-off-by: Filip Schauer 
> ---
>  src/PVE/LXC.pm| 7 ++-
>  src/PVE/LXC/Config.pm | 6 ++
>  2 files changed, 12 insertions(+), 1 deletion(-)
> 
>

applied, thanks!
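
With this applied, a passthrough entry in a container config can mark a
device as read-only, roughly like the following sketch (assuming the option
syntax follows the existing passthrough options; details may differ from the
applied patch):

    # hypothetical /etc/pve/lxc/<vmid>.conf snippet
    dev0: /dev/sda,deny-write=1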





Re: [pve-devel] [RFC qemu-server] apt hook: warn against using 'upgrade' command

2024-09-09 Thread Thomas Lamprecht
On 09/09/2024 at 09:52, Fiona Ebner wrote:
> On 09.09.24 at 09:48, Fabian Grünbichler wrote:
>> given that I also use `apt upgrade` from time to time (habit from being an 
>> unstable user ;)), and that it might alienate power users coming from 
>> Debian, I'd prefer this to be a non-interactive warning with the text 
>> "disarmed" a bit?
>>
>> something like
>>
>> !! WARNING !!
>> Since Proxmox VE follows a rolling release model, using 'upgrade' can lead 
>> to a system being stuck on outdated versions, or in rare cases, break upon 
>> upgrading. Use 'dist-upgrade' or 'full-upgrade' instead.
>> !! WARNING !!

All right, albeit I find the "!! WARNING !!" still a bit too scary then.

Maybe just an extra newline above and below and then something like your 
wording:

NOTE: Proxmox VE follows a rolling release model, so using the 'upgrade' 
command [...]

FWIW, you could also make this more generic for easier re-use, like:

NOTE: Proxmox projects follow [...]

and ship it in a common package, maybe even an extra one that is only
recommended, not a hard dependency, so power users can remove the warning
if they don't care (or are daring).




[pve-devel] applied: [PATCH docs v2] docs: fix typos

2024-09-06 Thread Thomas Lamprecht
On 25/07/2024 at 10:29, Maximiliano Sandoval wrote:
> Signed-off-by: Maximiliano Sandoval 
> ---
> Differences from v1:
>  - Remove fixes in automatically generated files
> 
>  ha-manager.adoc|  2 +-
>  notifications.adoc | 12 ++--
>  pve-faq.adoc   |  2 +-
>  pve-firewall.adoc  |  2 +-
>  pve-network.adoc   |  4 ++--
>  pvescheduler.adoc  |  2 +-
>  pvesdn.adoc|  2 +-
>  pveum.adoc |  2 +-
>  qm.adoc|  2 +-
>  vzdump.adoc|  2 +-
>  10 files changed, 16 insertions(+), 16 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH novnc] fix #5639: prevent browser from trying to save novnc password

2024-09-06 Thread Thomas Lamprecht
On 27/08/2024 at 15:15, Dominik Csapak wrote:
> by not using the password input at all, but passing the password
> to the connect function manually
> 
> this changes the first patch instead of adding another one, since
> it only touches code from that.
> 
> Signed-off-by: Dominik Csapak 
> ---
>  .../0001-add-PVE-specific-JS-code.patch   | 37 +++
>  ...002-add-custom-fbresize-event-on-rfb.patch |  6 +--
>  ...nge-scaling-when-toggling-fullscreen.patch |  6 +--
>  ...rectory-for-fetching-images-js-files.patch |  4 +-
>  .../0011-add-localCursor-setting-to-rfb.patch |  6 +--
>  .../0012-pass-custom-command-to-vnc.patch |  2 +-
>  ...passing-deprecated-upgrade-parameter.patch |  2 +-
>  ...-create-own-class-for-hidden-buttons.patch |  2 +-
>  ...-button-on-isFullscreen-get-variable.patch |  2 +-
>  ...ow-start-button-on-not-running-vm-ct.patch |  4 +-
>  .../patches/0019-show-clipboard-button.patch  |  8 ++--
>  11 files changed, 33 insertions(+), 46 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] orphaned cfs lock when interrupting qm disk import

2024-09-06 Thread Thomas Lamprecht
Hi,

On 26/08/2024 at 22:46, Joshua Huber wrote:
> Say you've just kicked off a long-running "qm disk import ..." command and
> notice that an incorrect flag was specified. Ok, cancel the operation with
> control-C, fix the flag and re-execute the import command...
> 
> However, when using shared storage you'll (at least for the next 120
> seconds) bump up against an orphaned cfs lock directory. One could manually
> remove the directory from /etc/pve/priv/locks, but it seems a little risky.
> (and a bad habit to get into)
> 
> So this got me thinking... could we trap SIGINT and more gracefully fail
> the operation? It seems like this would allow the cfs_lock function to
> clean up after itself. This seems like it'd be a nice CLI QOL improvement.

Yeah, that sounds sensible, albeit I'd have to look more closely into the
code, because it might not be that trivial if we execute another command,
like e.g. qemu-img, that then controls the terminal and would receive
the SIGINT directly. There are options for that too, but they are not so
nice. Anyhow, as long as this all happens in a worker it probably should
not be an issue and one could just install a handler with
`$SIG{INT} = sub { ... cleanup ...; die "interrupted" };`
and be done.
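
Spelled out a little more, such a handler could look like the following
sketch, under the assumption that the cleanup only needs to drop the
just-acquired lock again (the helper name is made up):

    local $SIG{INT} = sub {
        # best-effort cleanup, e.g. removing the lock dir below /etc/pve/priv
        eval { cleanup_cfs_lock($lockid) }; # hypothetical helper
        warn $@ if $@;
        die "interrupted by user\n";
    };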

> However, I'm not familiar with the history of the cfs-lock mechanism, why
> it's used for shared storage backends, and what other invalid PVE states
> might be avoided as a side-effect of serializing storage operations.
> Allowing concurrent operations could result in disk numbering collisions,
> but I'm not sure what else. (apart from storage-specific limitations.)

In general, the shared lock is there to avoid concurrent access to the same
volume, but it's currently much coarser than it needs to be, i.e., it could
be on just the volume, or at least the VMID for volumes that are owned by
guests. But that's a rather different topic.

Anyhow, once interrupted, keeping the lock active won't protect us from
anything, especially as it will become free after 120s anyway, as you noticed
yourself, so actively removing it immediately should not cause any (more)
problems, FWICT.

Are you willing to look into this? Otherwise, a bugzilla entry would be fine
too.

cheers
 Thomas





[pve-devel] applied: [PATCH container] fix #5666: fix quota regression due to device passthrough

2024-09-06 Thread Thomas Lamprecht
On 27/08/2024 at 13:46, Filip Schauer wrote:
> This commit fixes a regression introduced by
> commit ce1976b85361 ("Add device passthrough")
> 
> Prior to the addition of device passthrough, the `lxc-pve-autodev-hook`
> would invoke `PVE::LXC::Tools::for_current_devices` only once. If the
> device list was empty, `exit 0` would be called and the
> `lxc-pve-autodev-hook` would exit.
> 
> However, with the new device passthrough logic, when no devices were
> passed through, the `exit` call would be encountered prematurely.
> This would prevent the subsequent iteration over passthrough mounts from
> occurring.
> 
> This commit resolves the issue by replacing the premature `exit` call
> with a `return` statement, ensuring the `lxc-pve-autodev-hook` continues
> executing and processes the passthrough mounts as expected.
> 
> Signed-off-by: Filip Schauer 
> ---
>  src/PVE/LXC/Tools.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!
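
For reference, the control-flow difference described above boils down to
this (a rough sketch with made-up names, not the actual Tools.pm code):

    sub for_devices {
        my ($devices, $fn) = @_;
        # before the fix: `exit 0` here terminated the whole hook script,
        # so a later invocation for passthrough mounts never ran
        return if !scalar($devices->@*);
        $fn->($_) for $devices->@*;
    }

    for_devices($autodev_entries, \&setup_device);   # may be empty
    for_devices($passthrough_mounts, \&setup_mount); # must still run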





[pve-devel] applied: [PATCH container] fix #5674: add missing 'proxyto' for LXC interfaces API

2024-09-06 Thread Thomas Lamprecht
On 02/09/2024 at 14:25, Fabian Grünbichler wrote:
> else this API endpoint would only work when connected to the node where the
> container is currently running.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
>  src/PVE/API2/LXC.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied, thanks!




[pve-devel] applied: [PATCH storage] base plugin: do not decode the empty string

2024-09-06 Thread Thomas Lamprecht
On 02/09/2024 at 14:47, Maximiliano Sandoval wrote:
> If the json was empty, for example if the qemu-img command times out, a
> message
> 
> warn "could not parse qemu-img info command output for '$filename' - $err\n";
> 
> would have been printed.
> 
> This message could lead one to think the issue lies in the contents of
> the json, even if the previous warning said that there was a timeout.
> 
> Signed-off-by: Maximiliano Sandoval 
> ---
>  src/PVE/Storage/Plugin.pm | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 6444390f..8cc693c7 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -974,6 +974,10 @@ sub file_size_info {
>   # otherwise we warn about it and try to parse the json
>   warn $err_output;
>  }
> +if (!$json) {
> + # skip decoding if there was no output, e.g. if there was a timeout.
> + return wantarray ? (undef, undef, undef, undef, $st->ctime) : undef;
> +}
>  

FWIW, one could have avoided a tiny bit of duplication (trading against being
explicit) by changing the call to warn in the error path to something like:

warn "..." if !!$json;

but OTOH trying to parse json when one knows that it cannot work makes no sense
either, so your variant is probably better...

applied, thanks!





[pve-devel] applied: [PATCH storage v4] fix #4272: btrfs: add rename feature

2024-09-06 Thread Thomas Lamprecht
On 05/07/2024 at 15:10, Maximiliano Sandoval wrote:
> Adds the ability to change the owner of a guest image.
> 
> Btrfs does not need special commands to rename a subvolume, and this can
> be achieved the same as in Storage/plugin.pm's rename_volume, taking
> special care of the directory structure used by Btrfs.
> 
> Signed-off-by: Maximiliano Sandoval 
> ---
> Differences from v3:
>  - add qcow2 and vmdk support for rename
>  - remove unused $ppath variable
> 
> Differences from v2:
>  - use indices instead of assigning to undef 5 times
> 
> Differences from v1:
>  - avoid assigning unused values of returned list to variables
> 
>  src/PVE/Storage/BTRFSPlugin.pm | 33 +
>  1 file changed, 33 insertions(+)
> 
>

applied, with Aaron's T-b and R-b, thanks!





[pve-devel] applied: [PATCH v3 qemu-server] remote migration: fix online migration via API clients

2024-09-06 Thread Thomas Lamprecht
On 04/09/2024 at 13:12, Fiona Ebner wrote:
> As reported in the community forum [0], when a remote migration
> request comes in via an API client, the -T flag for Perl is set, so an
> insecure dependency in a call like unlink() in forward_unix_socket()
> will fail with:
> 
>> failed to write forwarding command - Insecure dependency in unlink while running with -T switch
> 
> To fix it, untaint the problematic socket addresses coming from the
> remote side. Require that all sockets are below '/run/qemu-server/'
> and end with '.migrate' with the main socket being matched more
> strictly. This allows extensions in the future while still being quite
> strict.
> 
> [0]: https://forum.proxmox.com/threads/123048/post-691958
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Changes in v3:
> * Match main socket address more strictly as suggested by Fabian.
> 
>  PVE/QemuMigrate.pm | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
>

applied, thanks!
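
The untainting itself follows the usual Perl pattern of validating and then
extracting the value through a regex capture, roughly like this sketch
(stricter in form than, but in the spirit of, the applied patch):

    # assumption: $addr comes from the remote side and is thus tainted
    # when running under perl -T; a successful capture untaints it
    my ($socket) = $addr =~ m!^(/run/qemu-server/\d+\.migrate)$!
        or die "unexpected socket address '$addr'\n";
    # $socket is now untainted and safe to pass to unlink()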





Re: [pve-devel] [PATCH container/manager v2 0/2] add deny read/write options for device passthrough

2024-09-06 Thread Thomas Lamprecht
On 06/09/2024 at 14:14, Fiona Ebner wrote:
> On 24.07.24 at 19:18, Filip Schauer wrote:
>> Add the deny_read and deny_write options for device passthrough, to
>> restrict container access to devices.
>>
>> This allows for passing through a device in read-only mode without
>> giving the container full access to it.
>>
>> Up until now a container with a device passed through to it was granted
>> full access to that device without an option to restrict that access as
>> pointed out by @Fiona.
>>
>> Changes since v1:
>> * set default values for deny_read & deny_write
>> * remove the deny_read checkbox from the UI, since it is expected to
>>   only have a very niche use case.
>>
> 
> We could also use dashes instead of underscores, i.e.
> "deny-read"/"deny-write" as we often do for new properties.
> 
> Still not fully sure we need deny_read in the backend until somebody
> complains with a sensible use case, but I guess it doesn't hurt if it's
> already there.

+1 to all above, I think going just with a deny-write option should
be enough for now, let's wait for an actual deny-read use-case before
adding it.





Re: [pve-devel] [RFC qemu-server] apt hook: warn against using 'upgrade' command

2024-09-06 Thread Thomas Lamprecht
On 06/09/2024 at 12:40, Fiona Ebner wrote:
> Many people will use 'upgrade' instead of 'full-upgrade' or
> 'dist-upgrade' (e.g. [0][1]) despite the documentation explicitly
> mentioning 'dist-upgrade' [3]. Proxmox VE uses different packaging
> guarantees than Debian and using 'upgrade' can lead to a broken
> system [2].
> 
> The match is kept simple, to not accidentally catch things like
>> -o 'foo=bar upgrade baz'
> and trip up advanced users.
> 
> It does not catch invocations with '-y' either, making it less likely
> to break automated user scripts. Although they should not use
> 'upgrade' either, it still would be bad to break them. If the risk is
> still considered too high, this change should wait until a major or
> at least point release.
> 
> To avoid false positives, it would be necessary to properly parse
> options, which is likely not worth the effort.
> 
> A downside is that the hook is only invoked after the user confirms
> the upgrade, but there doesn't seem to be an early enough hook entry
> (DPkg::Pre-Invoke is also too late). Since this is just an additional
> safety warning to guide new users, it should still be good enough.
> 
> [0]: https://forum.proxmox.com/threads/150217/post-680158
> [1]: https://forum.proxmox.com/threads/140580/post-630419
> [2]: https://www.reddit.com/r/Proxmox/comments/ujqig9/use_apt_distupgrade_or_the_gui_not_apt_upgrade/
> [3]: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#system_software_updates
> 

Yeah, it's something I considered now and then but never pulled through,
as it just somehow doesn't feel right...

But it's definitely a real problem, and so I surely won't block this on
the basis of some gut feeling; I'd rather like to hear Fabian's opinion
on it.





Re: [pve-devel] [RFC qemu-server] apt hook: warn against using 'upgrade' command

2024-09-06 Thread Thomas Lamprecht
On 06/09/2024 at 14:16, Alexander Zeidler wrote:
> On Fri Sep 6, 2024 at 12:40 PM CEST, Fiona Ebner wrote:
>> (DPkg::Pre-Invoke is also too late). Since this is just an additional
>> safety warning to guide new users, it should still be good enough.
> 
>> +  if ($line && $line =~ m/^CommandLine::AsString=apt(-get)?%20upgrade$/) {
>> +$log->("!! WARNING !!\n");
>> +$log->("Using 'upgrade' can violate packaging assumptions made by 
>> Proxmox VE. Use 'dist-upgrade' instead.\n");
>> +$log->("\n");
> 
> As "an additional safety warning to guide new users", we may want to
> rephrase this line:
>> +$log->("Press enter to continue, or C^c to abort.\n");
> to something like:
> "Press CTRL+C to abort, or ENTER to continue anyway."

FWIW, if we do this then I would probably keep enter spelled lower case,
simply to avoid it sticking out so much that it triggers users' reflexes
to press enter without fully reading the message.





Re: [pve-devel] [RFC qemu-server] apt hook: warn against using 'upgrade' command

2024-09-06 Thread Thomas Lamprecht
On 06/09/2024 at 13:01, Dominik Csapak wrote:
> just to mention it (i think it's not widely known):
> 
> 'apt upgrade' behaves slightly different than 'apt-get upgrade'
> 
> while the former also installs new packages if necessary, the latter
> doesn't, so an 'apt upgrade' will maybe work ok in most situations.
> I guess we want to filter it out nonetheless?
> 
> (none of the commands above will remove packages)

that's all true, but I also agree with your guess: we rely on removals
happening, as we have semi-frequent transitions during a stable release
cycle, so while `apt upgrade` is thankfully way more sensible already,
it still isn't enough for our use case.






Re: [pve-devel] [PATCH qemu-server] fix #5284: disallow moving vm disks to storages not meant for images

2024-09-06 Thread Thomas Lamprecht
On 29/08/2024 at 16:26, Daniel Kral wrote:
> Adds a check if the target storage of a VM disk move between two
> different storages supports the content type 'images'. Without the
> check, it will move the disk image to storages which do not support VM
> images, which causes the VM to fail at startup (for any non-unused
> disks).
> 
> Signed-off-by: Daniel Kral 
> ---
> There is a content type check for any used volume (e.g. 'scsi0', ...) at
> PVE/QemuServer.pm:3964 in the subroutine `config_to_command` which will
> (at least) make the VM fail to start unless the volumes are moved to a
> storage that supports images again.
> 
> Also, just as a note, moving the disk to storages that do not support
> the `vdisk_alloc` method in the storage plugin (like PBS) also
> rightfully fail before and after this change.
> 
>  PVE/API2/Qemu.pm | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index d25a79fe..b6ba9d21 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4409,6 +4409,9 @@ __PACKAGE__->register_method({
>   );
>   } elsif ($storeid) {
> 	$rpcenv->check($authuser, "/storage/$storeid", ['Datastore.AllocateSpace']);
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + raise_param_exc({ storage => "storage '$storeid' does not support vm images" })
> + if !$scfg->{content}->{images};

Hmm, code-wise it looks OK, but I'm not so sure this is the best place to
check. I'd rather look into either moving this into the $load_and_check_move
closure or into the PVE::QemuServer::clone_disk method, avoiding such an issue
for all other call sites too, albeit one would need to evaluate those call
sites to check that disallowing this everywhere does not break an existing
use case.

btw. did you check if containers are also affected by this bug?

>  
>   $load_and_check_move->(); # early checks before forking/locking
>  






[pve-devel] applied: [PATCH proxmox-offline-mirror] docs: fix command arguments for syncing a medium

2024-09-06 Thread Thomas Lamprecht
On 26/08/2024 at 17:11, Daniel Kral wrote:
> Fixes a minor error in the documentation about syncing a medium, where
> the id for a medium is wrongly provided with a `--id ` argument.
> 
> Signed-off-by: Daniel Kral 
> ---
>  docs/offline-media.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!





[pve-devel] applied-series: [PATCH storage/esxi-import-tools 0/2] fix #5587: compatibility for flatcased 'filename' property in vmx file

2024-09-06 Thread Thomas Lamprecht
On 21/08/2024 at 15:57, Daniel Kral wrote:
> Compatibility for an older naming convention of the "fileName" property
> of mounted storage device images in vmx configuration files was
> requested at [1].
> 
> Previously, it was only possible to import ESXi VMs, where the mentioned
> property name was camelcased (e.g. "scsi0:0.fileName"). This patch
> allows this property name to also be flatcased for compatibility with
> older vmx versions (e.g. "scsi0:0.filename").
> 
> ===
> 
> I could reproduce the issue by creating an ESXi VM in ESXi 8.0.2 with
> the dialog and _manually_ renaming the property name to "filename". This
> caused the disk to not show up in PVE's Import Guest wizard.
> 
> I could not reproduce the flatcased property name mentioned above by
> using the VMWare creation dialog alone, even when I tried to create a
> ESXi 4.x-compatible .vmx file (the oldest option available in VMvisor
> ESXi 8.0).

Such information can be fine to have in the commit message too, but no
biggie, especially as you referenced the bug report where the info is
present.

> 
> ===
> 
> I tested the patch on two different PVE nodes (1 patched & 1 unpatched):
> 
> 1. Creating two different ESXi VMs (Debian 6 and 12),
> 2. I imported them with the camelcased "fileName" successfully.
> 3. I changed the property name to "filename" in the vmx config files for
>both ESXi VMs and imported them on the patched PVE node successfully
>and could not import the disk image on the unpatched PVE node.
> 4. pve-storage passed all previous tests.
> 
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=5587
> 
> storage:
> 
> Daniel Kral (1):
>   esxi: fix #5587: add support for older version of vmx storage

I only noticed it now, but we normally always put the `fix #ID` part at
the very start.

Albeit I'm pondering whether it would be better to move the bug reference
into the commit message body, which avoids having to decide between placing
the subsystem tag or the fix reference first... anyhow, I digress.

> filepaths
> 
>  src/PVE/Storage/ESXiPlugin.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> esxi-import-tools:
> 
> Daniel Kral (1):
>   fix #5587: add support for older version of vmx storage filepaths
> 
>  src/vmx.rs | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 


applied both patches, thanks!





[pve-devel] applied-series: [PATCH qemu 1/2] pick up fix for VirtIO PCI regressions

2024-09-06 Thread Thomas Lamprecht
On 05/09/2024 at 11:49, Fiona Ebner wrote:
> Commit f06b222 ("fixes for QEMU 9.0") included a revert for the QEMU
> commit 2ce6cff94d ("virtio-pci: fix use of a released vector"). That
> commit caused some regressions which sounded just as bad as the fix.
> Those regressions have now been addressed upstream, so pick up the fix
> and drop the revert. Dropping the revert fixes the original issue that
> commit 2ce6cff94d ("virtio-pci: fix use of a released vector")
> addressed.
> 
> Signed-off-by: Fiona Ebner 
> ---
>  ...tio-pci-fix-use-of-a-released-vector.patch | 87 ---
>  ...ck-copy-before-write-fix-permission.patch} |  0
>  ...-write-support-unligned-snapshot-di.patch} |  0
>  ...-write-create-block_copy-bitmap-in-.patch} |  0
>  ...backup-add-discard-source-parameter.patch} |  0
>  ...-de-initialization-of-vhost-user-de.patch} |  0
>  ...se-float_status-copy-in-sme_fmopa_s.patch} |  0
>  ...Use-FPST_F16-for-SME-FMOPA-widening.patch} |  0
>  ...on-and-honor-bootindex-again-for-le.patch} |  0
>  ...a-bump-instruction-limit-in-scripts.patch} |  0
>  ...5-block-copy-Fix-missing-graph-lock.patch} |  0
>  ...do-not-operate-on-sources-from-fina.patch} |  0
>  ...ix-the-use-of-an-uninitialized-irqfd.patch | 77 
>  debian/patches/series | 24 ++---
>  14 files changed, 89 insertions(+), 99 deletions(-)
>  delete mode 100644 
> debian/patches/extra/0006-Revert-virtio-pci-fix-use-of-a-released-vector.patch
>  rename 
> debian/patches/extra/{0007-block-copy-before-write-fix-permission.patch => 
> 0006-block-copy-before-write-fix-permission.patch} (100%)
>  rename 
> debian/patches/extra/{0008-block-copy-before-write-support-unligned-snapshot-di.patch
>  => 0007-block-copy-before-write-support-unligned-snapshot-di.patch} (100%)
>  rename 
> debian/patches/extra/{0009-block-copy-before-write-create-block_copy-bitmap-in-.patch
>  => 0008-block-copy-before-write-create-block_copy-bitmap-in-.patch} (100%)
>  rename 
> debian/patches/extra/{0010-qapi-blockdev-backup-add-discard-source-parameter.patch
>  => 0009-qapi-blockdev-backup-add-discard-source-parameter.patch} (100%)
>  rename 
> debian/patches/extra/{0011-hw-virtio-Fix-the-de-initialization-of-vhost-user-de.patch
>  => 0010-hw-virtio-Fix-the-de-initialization-of-vhost-user-de.patch} (100%)
>  rename 
> debian/patches/extra/{0012-target-arm-Use-float_status-copy-in-sme_fmopa_s.patch
>  => 0011-target-arm-Use-float_status-copy-in-sme_fmopa_s.patch} (100%)
>  rename 
> debian/patches/extra/{0013-target-arm-Use-FPST_F16-for-SME-FMOPA-widening.patch
>  => 0012-target-arm-Use-FPST_F16-for-SME-FMOPA-widening.patch} (100%)
>  rename 
> debian/patches/extra/{0014-scsi-fix-regression-and-honor-bootindex-again-for-le.patch
>  => 0013-scsi-fix-regression-and-honor-bootindex-again-for-le.patch} (100%)
>  rename 
> debian/patches/extra/{0015-hw-scsi-lsi53c895a-bump-instruction-limit-in-scripts.patch
>  => 0014-hw-scsi-lsi53c895a-bump-instruction-limit-in-scripts.patch} (100%)
>  rename debian/patches/extra/{0016-block-copy-Fix-missing-graph-lock.patch => 
> 0015-block-copy-Fix-missing-graph-lock.patch} (100%)
>  rename 
> debian/patches/extra/{0017-Revert-qemu-char-do-not-operate-on-sources-from-fina.patch
>  => 0016-Revert-qemu-char-do-not-operate-on-sources-from-fina.patch} (100%)
>  create mode 100644 
> debian/patches/extra/0017-virtio-pci-Fix-the-use-of-an-uninitialized-irqfd.patch
> 
>

applied, thanks!





[pve-devel] applied: [PATCH zfsonlinux 0/2] update ZFS to 2.2.6

2024-09-05 Thread Thomas Lamprecht
On 05/09/2024 at 11:44, Stoiko Ivanov wrote:
> This patchset updates ZFS to 2.2.6, which was released last night.
> It supersedes the update to 2.2.5:
> https://lore.proxmox.com/pve-devel/20240820164512.1532793-1-s.iva...@proxmox.com/
> 
> The usrmerge part of the above series will be resent independently later,
> as we probably don't want to do that in between releases (as talked
> off-list with Thomas).
> 
> In addition to the changes from 2.2.5, 2.2.6 contains a fix for discard on
> zvols that might affect kernel 6.8:
> https://github.com/openzfs/zfs/pull/16462
> https://github.com/openzfs/zfs/pull/16454
> 
> additionally it contains a few relevant fixes for more recent kernels
> (6.10+)
> 
> It also pulls in a fix for bash-completion that was reported in our
> bugzilla: https://bugzilla.proxmox.com/show_bug.cgi?id=5565
> 
> Tested minimally on a pair of VMs (migrate a container, storage
> replication).
> 
> Relevant part of the cover-letter for the superseded series for ZFS-2.2.5:
> This patchset updates ZFS to 2.2.5 which was released two weeks ago [0]
> The changes don't look too scary, but could prevent a few issues in not so
> common use-cases for us e.g.:
> https://github.com/openzfs/zfs/pull/16359/commits/4d2f7f9839d12708417457cd57cf43d15cae5e92
> https://github.com/openzfs/zfs/pull/16359/commits/6f27c4cadd29eb9b850c1c66bf71ef9ba119b955
> https://github.com/openzfs/zfs/pull/16359/commits/ef08cb26dae6b3c2e930e66852f13329babb7c2e
> 
> [0] https://github.com/openzfs/zfs/pull/16359
> 
> Stoiko Ivanov (2):
>   update zfs submodule to 2.2.6
>   debian: remove libzfsbootenv1linux.install
> 
>  debian/libzfsbootenv1linux.install | 1 -
>  upstream   | 2 +-
>  2 files changed, 1 insertion(+), 2 deletions(-)
>  delete mode 100644 debian/libzfsbootenv1linux.install
> 


applied series, thanks!





[pve-devel] applied: [PATCH kernel] add fix for regression causing broken power indicators/LEDs

2024-09-05 Thread Thomas Lamprecht
On 26/07/2024 at 11:06, Fiona Ebner wrote:
> The issue was reported in the enterprise support. The customer
> contacted the ledmon maintainer, who found that it is not an issue
> with ledmon, bisected the kernel and came up with this fix.
> 
> Signed-off-by: Fiona Ebner 
> ---
>  ...n-Power-Indicator-bits-for-userspace.patch | 56 +++
>  1 file changed, 56 insertions(+)
>  create mode 100644 
> patches/kernel/0021-PCI-pciehp-Retain-Power-Indicator-bits-for-userspace.patch
> 
>

applied a cherry-pick of this, as the patch numbers recently changed.

But I took over your commit message and referenced you through an
Originally-by trailer, I hope that was alright, thanks!





[pve-devel] applied: [PATCH kernel 1/2] cherry-pick fix for NULL pointer dereference in apparmorfs

2024-08-05 Thread Thomas Lamprecht
On 05/08/2024 12:31, Fiona Ebner wrote:
> Reported in the community forum:
> https://forum.proxmox.com/threads/145760/post-690328
> 
> Signed-off-by: Fiona Ebner 
> ---
>  ...ix-possible-NULL-pointer-dereference.patch | 101 ++
>  1 file changed, 101 insertions(+)
>  create mode 100644 
> patches/kernel/0023-apparmor-fix-possible-NULL-pointer-dereference.patch
> 
>

applied, thanks!

> Saw it only after already sending the patches, but in bug #5103 a user
> reports that reverting, AFAICT commit 565736048bd5 ("ixgbe: Manual AN-37
> for troublesome link partners for X550 SFI"), fixes that issue. That is
> also part of the Ubuntu-6.8.0-43.43 tag as eb5551d4a4b7 ("Revert "ixgbe:
> Manual AN-37 for troublesome link partners for X550 SFI"") and would
> also be worth picking up.
> 

Ack, I will update to the latest ubuntu tag for the next build.





[ceph-users] Re: Debian package for 18.2.4 broken

2024-08-01 Thread Thomas Lamprecht
Hi,

On 01/08/2024 19:49, Burkhard Linke wrote:
> the Debian bookworm packages for 18.2.4 are completely broken and unusable:
> 
> 1. missing dependencies, e.g. python3-packaging:

This is https://tracker.ceph.com/issues/67290 and I submitted a PR:
https://github.com/ceph/ceph/pull/58956
 
> 2. After installing missing dependencies activating encrypted OSDs fails:
> 
> root@cc-r3-ceph-2:~# ceph-volume lvm activate --all
> --> Activating OSD ID 875 FSID 06296c01-4c59-4fde-afe4-0e1e4f078ba9
> -->  InvalidVersion: Invalid version: 'cryptsetup 2.6.1 flags: UDEV BLKID KEYRING KERNEL_CAPI '

this is a missing backport of:
https://github.com/ceph/ceph/commit/dc28b77a6ea50b3390663ac02eeb80367650b7ed
albeit the respective issue has had a backport request open for two months:
https://tracker.ceph.com/issues/66393
not sure why it was not completed..

Plus a follow-up of the above missing commit:
https://github.com/ceph/ceph/commit/607eb34b2c278566c386efcbf3018629cf08ccfd
https://tracker.ceph.com/issues/66393
That has also been pending a backport for two months.

FWIW, a version with those things fixed and actually tested is available on
our "test" ceph reef repo:
https://pve.proxmox.com/wiki/Package_Repositories#_ceph_reef_test_repository

See here for the release key used to sign those packages:
https://pve.proxmox.com/wiki/Package_Repositories#repos_secure_apt

regards
 Thomas


[pve-devel] applied: [PATCH manager v2] ui: dc summary: fix calculation of storage size

2024-07-31 Thread Thomas Lamprecht
On 31/07/2024 at 14:14, Igor Thaller wrote:
> The issue is related to the 'Summary' tab under 'Datacenter' inside a
> cluster. To get a steady reading of the storage size data, the frontend
> requests the '/api2/json/cluster/resources' every three seconds to
> retrieve the necessary data to calculate the used and total storage
> size.
> 
> The problem occurs when a shared storage is defined and a node goes
> offline. As the node is not online, it cannot report the shared storage
> size (both used and total) back to the other nodes. The order of the
> JSON response is not always the same, so it is possible that the offline
> node will appear first. Consequently, the frontend will display the
> wrong total and used storage. This is because the shared storage data
> has both the maximum disk size and the used disk set to zero when the
> node is offline. This causes the total and used space data to be
> calculated and displayed incorrectly, leading to fluctuations in the
> displayed percentage of used disk space.
> 
> To fix this, add a conditional check to skip the storage report if its
> status is 'unknown' (regardless of if the storage is local or shared).
> This prevents the unreliable data from being processed.
> 
> Reported-by: Friedrich Weber 
> Signed-off-by: Igor Thaller 
> ---
> 
> Notes:
> Changes from v1 -> v2
> * Ignore all storages of status unknown instead of ignoring just shared
>   storages with status unknown (thanks Fiona)
> * Move the testing comments to the notes (thanks Fiona)
> * Reword sentence describing the problem
> 
> To test these changes, adjust the 'max_requests' variable in the Perl
> script located at '/usr/share/perl5/PVE/Service/pveproxy.pm' to increase
> the likelihood of the error to occur. This makes the storage size
> fluctuations more frequent. Then compare the storage results (both used
> and total sizes) before and after implementing the fix.
> 
> Note: Be aware that it takes around one minute for the spike to happen.
> 
>  www/manager6/dc/Summary.js | 5 +
>  1 file changed, 5 insertions(+)
> 
>

applied, thanks!





Re: [pve-devel] [WIP qemu-server 1/3] fix #2255: add support for nvme emulation

2024-07-31 Thread Thomas Lamprecht
On 31/07/2024 at 15:40, Aaron Lauterer wrote:
> From: Oguz Bektas 
> 
> now we can add nvme drives;
> 
> nvme0: local-lvm:vm-103-disk-0,size=32G
> 
> or
> 
> qm set VMID --nvme0 local-lvm:32
> 
> max number is 8 for now, as most real hardware has 1-3 nvme slots and
> can have a few more with pcie. most cases won't go over 8, if there's an
> actual usecase at some point this can always be increased without
> breaking anything (although the same isn't valid for decreasing it).
> 
> Signed-off-by: Oguz Bektas 

An S-o-b from you here would be good to have; you can also describe the
changes done on top of the original patch like:

Signed-off-by: Oguz Bektas 
 [ AL: rebased, ... ]
Signed-off-by: Aaron Lauterer 


> ---
>  PVE/QemuServer.pm   | 23 ---
>  PVE/QemuServer/Drive.pm | 21 +
>  2 files changed, 41 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index b26da50..7d8d75b 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -437,7 +437,7 @@ EODESC
>   optional => 1,
>   type => 'string', format => 'pve-qm-bootdisk',
> 	description => "Enable booting from specified disk. Deprecated: Use 'boot: order=foo;bar' instead.",
> - pattern => '(ide|sata|scsi|virtio)\d+',
> + pattern => '(ide|sata|scsi|virtio|nvme)\d+',
>  },
>  smp => {
>   optional => 1,
> @@ -1433,7 +1433,10 @@ sub print_drivedevice_full {
>   $device .= ",product=$product";
>   }
>   }
> -
> +} elsif ($drive->{interface} eq 'nvme') {
> + my $path = $drive->{file};

^- this variable seems to be unused? 

> +	$drive->{serial} //= "$drive->{interface}$drive->{index}"; # serial is mandatory for nvme
> +	$device = "nvme,drive=drive-$drive->{interface}$drive->{index},id=$drive->{interface}$drive->{index}";
>  } elsif ($drive->{interface} eq 'ide' || $drive->{interface} eq 'sata') {
> 	my $maxdev = ($drive->{interface} eq 'sata') ? $PVE::QemuServer::Drive::MAX_SATA_DISKS : 2;
>   my $controller = int($drive->{index} / $maxdev);
> @@ -2396,7 +2399,7 @@ sub parse_vm_config {
>   } else {
>   $key = 'ide2' if $key eq 'cdrom';
>   my $fmt = $confdesc->{$key}->{format};
> - if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata)$/) {
> + if ($fmt && $fmt =~ /^pve-qm-(?:ide|scsi|virtio|sata|nvme)$/) {
>   my $v = parse_drive($key, $value);
> 	if (my $volid = filename_to_volume_id($vmid, $v->{file}, $v->{media})) {
>   $v->{file} = $volid;
> @@ -4289,6 +4292,17 @@ sub vm_deviceplug {
>   warn $@ if $@;
>   die $err;
>  }
> +} elsif ($deviceid =~ m/^(nvme)(\d+)$/) {
> +
> + qemu_driveadd($storecfg, $vmid, $device);
> +
> +	my $devicefull = print_drivedevice_full($storecfg, $conf, $vmid, $device, $arch, $machine_type);

Would prefer having a word separator: my $device_full = ...

> + eval { qemu_deviceadd($vmid, $devicefull); };
> + if (my $err = $@) {
> + eval { qemu_drivedel($vmid, $deviceid); };
> + warn $@ if $@;
> + die $err;
> +}
>  } elsif ($deviceid =~ m/^(net)(\d+)$/) {
>   return if !qemu_netdevadd($vmid, $conf, $arch, $device, $deviceid);
>  
> @@ -4361,6 +4375,9 @@ sub vm_deviceunplug {
>  
>   qemu_iothread_del($vmid, "virtioscsi$device->{index}", $device)
>   if $conf->{scsihw} && ($conf->{scsihw} eq 'virtio-scsi-single');
> +} elsif ($deviceid =~ m/^(nvme)(\d+)$/) {
> + qemu_devicedel($vmid, $deviceid);
> + qemu_drivedel($vmid, $deviceid);
>  } elsif ($deviceid =~ m/^(net)(\d+)$/) {
>   qemu_devicedel($vmid, $deviceid);
>   qemu_devicedelverify($vmid, $deviceid);
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 6e98c09..f05ad26 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -33,6 +33,7 @@ PVE::JSONSchema::register_standard_option('pve-qm-image-format', {
>  
>  my $MAX_IDE_DISKS = 4;
>  my $MAX_SCSI_DISKS = 31;
> +my $MAX_NVME_DISKS = 8;
>  my $MAX_VIRTIO_DISKS = 16;
>  our $MAX_SATA_DISKS = 6;
>  our $MAX_UNUSED_DISKS = 256;
> @@ -315,6 +316,20 @@ my $scsidesc = {
>  };
>  PVE::JSONSchema::register_standard_option("pve-qm-scsi", $scsidesc);
>  
> +my $nvme_fmt = {
> +%drivedesc_base,
> +%ssd_fmt,
> +%wwn_fmt,
> +};
> +
> +my $nvmedesc = {

The existing `foodesc` use is ugly, not sure how much I'd care about style
compat here though, so IMO it would be slightly nicer to use:
my $nvme_desc = ...

> +optional => 1,
> +type => 'string', format => $nvme_fmt,
> +description => "Use volume as NVME disk (n is 0 to " . ($MAX_NVME_DISKS -1) . ").",
> +};
> +
> +PVE::JSONSchema::register_standard_option("pve-qm-nvme", $nvmedesc);
> +
>  my $sata_fmt = {
>  %drivedesc_base,
>  %ssd_fmt,
> @@ -515,6 +530,11 @@ for (my $i = 0; $i < $MAX_SCSI_DISKS; $i++)  {

[pve-devel] applied-series: [PATCH v3 qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk

2024-07-30 Thread Thomas Lamprecht
On 04/07/2024 at 11:32, Fiona Ebner wrote:
> There is a possibility that the drive-mirror job is not yet done when
> the migration wants to inactivate the source's blockdrives:
> 
>> bdrv_co_write_req_prepare: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
> 
> This can be prevented by using the 'write-blocking' copy mode (also
> called active mode) for the mirror. However, with active mode, the
> guest write speed is limited by the synchronous writes to the mirror
> target. For this reason, a way to start out in the faster 'background'
> mode and later switch to active mode was introduced in QEMU 8.2.
> 
> The switch is done once the mirror job for all drives is ready to be
> completed to reduce the time spent where guest IO is limited.
> 
> The loop waiting for actively-synced to become true is not an endless
> loop: Once the remaining dirty parts have been mirrored by the
> background iteration, the actively-synced flag will be set. Because
> the 'block-job-change' QMP command already succeeded, new writes will
> be done synchronously to the target and thus not lead to new dirty
> parts. If the job fails or vanishes (shouldn't actually happen,
> because auto-dismiss is false), the loop will be exited and the error
> propagated.
> 
> Reported rarely, but steadily over the years:
> https://forum.proxmox.com/threads/78954/post-353651
> https://forum.proxmox.com/threads/78954/post-380015
> https://forum.proxmox.com/threads/100020/post-431660
> https://forum.proxmox.com/threads/111831/post-482425
> https://forum.proxmox.com/threads/111831/post-499807
> https://forum.proxmox.com/threads/137849/
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Changes in v3:
> * avoid endless loop when job fails while switching to active mode
> * mention rationale why the loop is not an endless loop in commit
>   message
> 
>  PVE/QemuMigrate.pm|  8 +
>  PVE/QemuServer.pm | 51 +++
>  test/MigrationTest/QemuMigrateMock.pm |  6 
>  3 files changed, 65 insertions(+)
> 
>

applied both patches, thanks!

Albeit I'm a bit wondering if we would be able to mock at a deeper level than,
e.g., qemu_drive_mirror_switch_to_active_mode, maybe by mocking mon_cmd and then
also checking if the monitor commands are triggered as expected and in the
correct order.
But that might affect a few more methods and is definitely orthogonal to this.
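
For illustration, a rough sketch of that idea, assuming Test::MockModule and
that all monitor access funnels through PVE::QemuServer::Monitor::mon_cmd (the
recorded list and the canned replies are made up for the example):

    my @qmp_calls;
    my $monitor_module = Test::MockModule->new('PVE::QemuServer::Monitor');
    $monitor_module->redefine(mon_cmd => sub {
        my ($vmid, $cmd, %params) = @_;
        push @qmp_calls, $cmd; # record the issued QMP commands in order
        return [] if $cmd eq 'query-block-jobs'; # canned reply, extend as needed
        return {};
    });
    # run the migration code under test, then assert on the recorded order, e.g.:
    # is_deeply(\@qmp_calls, ['block-job-change', 'query-block-jobs', 'block-job-complete']);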


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server] drive mirror: prevent wrongly logging success when completion fails differently

2024-07-30 Thread Thomas Lamprecht
Am 23/07/2024 um 14:07 schrieb Fiona Ebner:
> Currently, when completing a drive mirror job, only errors matching
> "cannot be completed" will be handled. Other errors are ignored and
> a wrong message that the job was completed successfully will be
> printed to the log. An instance of this popped up in the community
> forum [0].
> 
> The QMP command used for completing the job is either
> 'block-job-complete' or 'block-job-cancel'. The former causes the VM
> to switch to the target drive, the latter doesn't, e.g. migration uses
> the latter to not switch the source instance over to the target drive.
> The 'block-job-cancel' command doesn't even have the same "cannot be
> completed" message, but returns immediately.
> 
> The timeout for both 'block-job-cancel' and 'block-job-complete' is
> set to 10 minutes in the QMPClient module, which should be enough.
> 
> [0]: https://forum.proxmox.com/threads/151518/
> 
> Signed-off-by: Fiona Ebner 
> ---
>  PVE/QemuServer.pm | 7 +--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 

applied, thanks!
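
For future readers, the gist of the fix as a rough sketch (the mon_cmd call and
the variable names are only illustrative):

    # treat any error from the completion QMP command as fatal, instead of
    # only matching on a "cannot be completed" message
    my $op = $complete ? 'block-job-complete' : 'block-job-cancel';
    eval { mon_cmd($vmid, $op, device => $jobid) };
    die "completing block job '$jobid' via '$op' failed - $@" if $@;
    print "drive mirror job successfully completed\n"; # only reached on success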


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH v3 docs] cloudinit: add Windows cloudbase-init section

2024-07-30 Thread Thomas Lamprecht
Am 30/07/2024 um 17:15 schrieb Mira Limbeck:
> Signed-off-by: Mira Limbeck 
> ---
> v3:
>  - fixed list continuity/indentation
> v2:
>  - added metadata_services config option
>  - added Sysprep section
>  - fixed typos and clarified some parts
> 
>  qm-cloud-init.adoc | 154 +
>  1 file changed, 154 insertions(+)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH v3 qemu-server] fix 4493: cloud-init: fix generated Windows config

2024-07-30 Thread Thomas Lamprecht
Am 30/07/2024 um 17:15 schrieb Mira Limbeck:
> Cloudbase-Init, a cloud-init reimplementation for Windows, supports only
> a subset of the configuration options of cloud-init. Some features
> depend on support by the Metadata Service (ConfigDrive2 here) and have
> further limitations [0].
> 
> To support a basic setup the following changes were made:
>  - password is saved as plaintext for any Windows guests (ostype)
>  - DNS servers are added to each of the interfaces
>  - SSH public keys are passed via metadata
> 
> Network and metadata generation for Cloudbase-Init is separate from the
> default ConfigDrive2 one so as to not interfere with any other OSes that
> depend on the current ConfigDrive2 implementation.
> 
> DNS search domains were removed because Cloudbase-Init's ENI parser
> doesn't handle it at all.
> The password set via `cipassword` is used for the Admin user configured
> in the cloudbase-init.conf in the guest while the `ciuser` parameter is
> ignored. The Admin user has to be set in the cloudbase-init.conf file
> instead.
> Specifying a different user does not work.
> 
> For the password to work the `ostype` needs to be any Windows variant
> before `cipassword` is set. Otherwise the password will be encrypted and
> the encrypted password used as plaintext password in the guest.
> 
> The `citype` needs to be `configdrive2`, which is the default for
> Windows guests, for the generated configs to be compatible with
> Cloudbase-Init.
> 
> [0] https://cloudbase-init.readthedocs.io/en/latest/index.html
> 
> Signed-off-by: Mira Limbeck 
> ---
> v3:
>  - removed `use URI` since we already `use URI::Escape`
>  - sent a separate patch adding `liburi-perl` dependency in d/control
> v2:
>  - unchanged
> 
>  PVE/API2/Qemu.pm| 13 ++---
>  PVE/QemuServer/Cloudinit.pm | 99 +++--
>  2 files changed, 101 insertions(+), 11 deletions(-)
> 
>

applied series, thanks!

Some tests would be nice for this CI stuff in general though; e.g., taking
in CI properties and mocking the write/apply parts to test if the resulting
output matches our expectations could already make for a simple regression test,
providing some basic safety net.
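
A minimal sketch of what I mean, assuming the generated files all pass through
commit_cloudinit_disk (the config values and the exact amount of scaffolding
needed, e.g. for the storage config, are illustrative assumptions here):

    use Test::MockModule;

    my $captured_files;
    my $ci_module = Test::MockModule->new('PVE::QemuServer::Cloudinit');
    $ci_module->redefine(commit_cloudinit_disk => sub {
        my ($conf, $vmid, $drive, $volname, $storeid, $files, $label) = @_;
        $captured_files = $files; # capture instead of writing out the ISO
    });

    my $conf = { ostype => 'win11', ide2 => 'local:vm-100-cloudinit,media=cdrom' };
    PVE::QemuServer::Cloudinit::generate_cloudinitconfig($conf, 100);
    # now compare $captured_files->{'user-data'} etc. against expected fixtures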


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH pve-manager] sdn: vnets: Hide irrelevant fields depending on zone type

2024-07-30 Thread Thomas Lamprecht
Am 22/12/2023 um 11:43 schrieb Stefan Hanreich:
> Not all fields in the VnetEdit dialog are necessary for every zone
> type. This lead to confusion for some users. Hide fields in the
> VNetEdit dialog depending on which kind of zone is selected in order
> to prevent potential confusion.
> 
> Signed-off-by: Stefan Hanreich 
> ---
>  www/manager6/form/SDNZoneSelector.js |  2 +-
>  www/manager6/sdn/VnetEdit.js | 40 
>  2 files changed, 41 insertions(+), 1 deletion(-)
> 
>

applied, thanks!

albeit, it might be slightly nicer to also disable the field when it's not
relevant, as then any validation handling cannot make the form invalid without
the user seeing why or being able to fix it.

As you clear the value to a safe one for these specific fields, it should not
cause any issues here though.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH pve-manager] www: backup: clarify experimental change detection modes

2024-07-30 Thread Thomas Lamprecht
Am 26/06/2024 um 09:00 schrieb Christian Ebner:
> Currently, the whole mode selector is labeled as experimental. This
> does, however, give the impression that the default legacy mode is also
> an experimental mode.
> To clarify that only the `data` and `metadata` change detection modes
> are experimental, move the experimental label to the individual
> modes and explicitly mention the experimental modes in the message.
> 
> Also, make it more clear that the archive encoding format depends on
> the selected mode.
> 
> Signed-off-by: Christian Ebner 
> ---
>  www/manager6/panel/BackupAdvancedOptions.js | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
>

Just for the record: This was already applied by Fabian [0], thanks!

[0]: 
https://git.proxmox.com/?p=pve-manager.git;a=commit;h=8504a0e29132e01101a28f9dafcc930fd6cab95d


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server] d/control: add liburi-perl dependency

2024-07-30 Thread Thomas Lamprecht
Am 30/07/2024 um 16:27 schrieb Mira Limbeck:
> URI is used in multiple files:
> PVE/API2/Qemu.pm
> PVE/CLI/qm.pm
> PVE/QemuServer.pm
> PVE/QemuServer/Cloudinit.pm
> 
> Dependencies of qemu-server already have it as dependency, but there's
> no explicit dependency in qemu-server yet.
> 
> Signed-off-by: Mira Limbeck 
> ---
>  debian/control | 2 ++
>  1 file changed, 2 insertions(+)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH v2 qemu-server] fix 4493: cloud-init: fix generated Windows config

2024-07-29 Thread Thomas Lamprecht
Am 29/07/2024 um 17:19 schrieb Mira Limbeck:
> cloudbase-init, a cloud-init reimplementation for Windows, supports only
> a subset of the configuration options of cloud-init. Some features
> depend on support by the Metadata Service (ConfigDrive2 here) and have
> further limitations [0].
> 
> To support a basic setup the following changes were made:
>  - password is saved as plaintext for any Windows guests (ostype)
>  - DNS servers are added to each of the interfaces
>  - SSH public keys are passed via metadata
> 
> Network and metadata generation for cloudbase-init is separate from the
> default ConfigDrive2 one so as to not interfere with any other OSes that
> depend on the current ConfigDrive2 implementation.
> 
> [0] https://cloudbase-init.readthedocs.io/en/latest/index.html
> 
> Signed-off-by: Mira Limbeck 
> ---
> v2:
>  - unchanged
> 
> v1:
> 
> DNS search domains are not handled at all by the cloudbase-init ENI
> parser.
> The password is used for the Admin user specified in the
> cloudbase-init.conf inside the guest. Specifying a different user does
> not work. This would require rewriting the userdata handling as
> described in #5384 [1] which is a breaking change. Userdata generation
> is currently shared between all implementations, but the new one could
> be made cloudbase-init only for now.
> 
> To know if the password needs to be unencrypted, we have to check the
> `ostype`. For this we need access to the config. That's why I moved the
> `cipassword` handling inside $updatefn.
> 
> When no `citype` is specified, it will default to `configdrive2` on
> Windows. The check requires `ostype` to be set correctly.
> Any other `citype`s may not work correctly if used for cloudbase-init.
> 
> The docs patch adds a section on how to configure a Windows guest for
> cloudbase-init.

certainly not clear-cut, but IMO most of the above might even be fine for the
commit message as (advanced) background and hints for future dev/support
engineers debugging CI on Windows.

 
> diff --git a/PVE/QemuServer/Cloudinit.pm b/PVE/QemuServer/Cloudinit.pm
> index abc6b14..d4ecfac 100644
> --- a/PVE/QemuServer/Cloudinit.pm
> +++ b/PVE/QemuServer/Cloudinit.pm
> @@ -8,6 +8,8 @@ use Digest::SHA;
>  use URI::Escape;
>  use MIME::Base64 qw(encode_base64);
>  use Storable qw(dclone);
> +use JSON;
> +use URI;

Using that module would need adding `liburi-perl` as a dependency, and I did not check
closely, but quite probably also as a build-dependency.
closely but quite probably also as build-dependency.

I can fleece that in on applying though, but I'd like some T-b tag (and would 
naturally
appreciate any R-b one) before applying this one.



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH v2 qemu-server] resume: bump timeout for query-status

2024-07-29 Thread Thomas Lamprecht
Am 25/07/2024 um 14:32 schrieb Fiona Ebner:
> As reported in the community forum [0], after migration, the VM might
> not immediately be able to respond to QMP commands, which means the VM
> could fail to resume and stay in paused state on the target.
> 
> The reason is that activating the block drives in QEMU can take a bit
> of time. For example, it might be necessary to invalidate the caches
> (where for raw devices a flush might be needed) and the request
> alignment and size of the block device needs to be queried.
> 
> In [0], an external Ceph cluster with krbd is used, and the initial
> read to the block device after migration, for probing the request
> alignment, takes a bit over 10 seconds[1]. Use 60 seconds as the new
> timeout to be on the safe side for the future.
> 
> All callers are inside workers or via the 'qm' CLI command, so bumping
> beyond 30 seconds is fine.
> 
> [0]: https://forum.proxmox.com/threads/149610/
> 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Changes in v2:
> * improve commit message with new findings from the forum thread
> 
>  PVE/QemuServer.pm | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
>

applied, thanks!
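
For reference, the change boils down to the following pattern (a sketch;
assuming the QMP client extracts a timeout argument, as other call sites
already do):

    # the default timeout can be too short right after migration, because
    # activating the block drives may need to flush caches and probe devices
    my $res = mon_cmd($vmid, 'query-status', timeout => 60);
    my $status = $res->{status} // 'unknown';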


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu 1/2] update submodule and patches to QEMU 9.0.2

2024-07-29 Thread Thomas Lamprecht
Am 25/07/2024 um 11:45 schrieb Fiona Ebner:
> Most relevant are some fixes for VirtIO and for ARM and i386
> emulation. There also is a fix for VGA display to fix screen blanking,
> which fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=4786
> 
> Signed-off-by: Fiona Ebner 
> ---
>  ...d-support-for-sync-bitmap-mode-never.patch |  10 +-
>  ...race-with-clients-disconnecting-earl.patch |   4 +-
>  ...io-pci-fix-use-of-a-released-vector.patch} |   8 +-
>  .../0006-virtio-gpu-fix-v2-migration.patch|  98 ---
>  ...0007-hw-pflash-fix-block-write-start.patch |  59 -
>  ...operand-size-for-DATA16-REX.W-POPCNT.patch |  51 
>  ...ru-wrpkru-are-no-prefix-instructions.patch |  40 ---
>  ...6-fix-feature-dependency-for-WAITPKG.patch |  33 ---
>  ...move-compatibility-flags-for-VirtIO-.patch |  57 -
>  ...t-monitor-use-aio_co_reschedule_self.patch |  53 
>  ...ict-translation-disabled-alignment-c.patch |  51 
>  ...-IRQs-a-chance-when-resetting-HF_INH.patch |  80 --
>  ...r-v-Correct-kvm_hv_handle_exit-retur.patch |  60 -
>  ...86-disable-jmp_opt-if-EFLAGS.RF-is-1.patch |  31 ---
>  ...ingle-step-exception-after-MOV-or-PO.patch |  30 ---
>  ...n-t-open-data_file-with-BDRV_O_NO_IO.patch | 107 
>  ...names-only-when-explicitly-requested.patch | 241 --
>  ...le-posix-make-locking-optiono-on-cre.patch |   6 +-
>  ...ckup-Proxmox-backup-patches-for-QEMU.patch |   2 +-
>  ...k-driver-to-map-backup-archives-into.patch |   8 +-
>  ...igrate-dirty-bitmap-state-via-savevm.patch |   2 +-
>  ...-backup-add-discard-source-parameter.patch |   2 +-
>  ...e-allow-specifying-minimum-cluster-s.patch |   4 +-
>  ...um-cluster-size-to-performance-optio.patch |   2 +-
>  .../0050-PVE-backup-add-fleecing-option.patch |   2 +-
>  debian/patches/series |  16 +-
>  26 files changed, 26 insertions(+), 1031 deletions(-)
>  rename 
> debian/patches/extra/{0011-Revert-virtio-pci-fix-use-of-a-released-vector.patch
>  => 0006-Revert-virtio-pci-fix-use-of-a-released-vector.patch} (93%)
>  delete mode 100644 
> debian/patches/extra/0006-virtio-gpu-fix-v2-migration.patch
>  delete mode 100644 
> debian/patches/extra/0007-hw-pflash-fix-block-write-start.patch
>  delete mode 100644 
> debian/patches/extra/0008-target-i386-fix-operand-size-for-DATA16-REX.W-POPCNT.patch
>  delete mode 100644 
> debian/patches/extra/0009-target-i386-rdpkru-wrpkru-are-no-prefix-instructions.patch
>  delete mode 100644 
> debian/patches/extra/0010-target-i386-fix-feature-dependency-for-WAITPKG.patch
>  delete mode 100644 
> debian/patches/extra/0012-hw-core-machine-move-compatibility-flags-for-VirtIO-.patch
>  delete mode 100644 
> debian/patches/extra/0013-Revert-monitor-use-aio_co_reschedule_self.patch
>  delete mode 100644 
> debian/patches/extra/0014-target-arm-Restrict-translation-disabled-alignment-c.patch
>  delete mode 100644 
> debian/patches/extra/0015-target-i386-Give-IRQs-a-chance-when-resetting-HF_INH.patch
>  delete mode 100644 
> debian/patches/extra/0016-target-i386-hyper-v-Correct-kvm_hv_handle_exit-retur.patch
>  delete mode 100644 
> debian/patches/extra/0017-target-i386-disable-jmp_opt-if-EFLAGS.RF-is-1.patch
>  delete mode 100644 
> debian/patches/extra/0018-target-i386-no-single-step-exception-after-MOV-or-PO.patch
>  delete mode 100644 
> debian/patches/extra/0019-qcow2-Don-t-open-data_file-with-BDRV_O_NO_IO.patch
>  delete mode 100644 
> debian/patches/extra/0020-block-Parse-filenames-only-when-explicitly-requested.patch
> 
>

applied both patches, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH storage] api: upload: correctly test for result of unlink

2024-07-29 Thread Thomas Lamprecht
Am 29/07/2024 um 16:29 schrieb Fiona Ebner:
> It's not enough to check whether $! is set. From "perldoc perlvar":
> 
>> Many system or library calls set "errno" if they fail, to
>> indicate the cause of failure. They usually do not set "errno"
>> to zero if they succeed and may set "errno" to a non-zero value
>> on success. This means "errno", hence $!, is meaningful only
>> *immediately* after a failure:
> 
> To protect against potential issues, check the return value of unlink
> and only check $! if it failed.

fine by me, but out of interest: any reason why this is done now,
i.e., did this really cause trouble or is this done as a precaution?
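
The pattern itself is sound in any case; per the quoted perldoc, test the
return value first and consult $! only on failure, e.g. ($tmpfile just stands
in for the uploaded file's path here):

    if (!unlink($tmpfile)) {
        warn "removing '$tmpfile' failed - $!\n"; # $! is only meaningful now
    }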


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager v2] ui: pool: separate "Add Virtual Machine" menu into distinct options

2024-07-29 Thread Thomas Lamprecht
Am 29/07/2024 um 13:43 schrieb Theodor Fumics:
> Split the "Add Virtual Machine" menu into separate options
> for Virtual Machines and Containers to reduce confusion.
> This change follows feedback from a user in [1], who had difficulty
> finding the container option.
> 
> [1] 
> https://forum.proxmox.com/threads/how-to-add-containers-to-a-resource-pool.151946/
> 
> Signed-off-by: Theodor Fumics 
> ---
> 
> Notes:
> Changes from v1 -> v2
> * adjusted line to fit within 100 characters as per style guide

true, but that was a side effect of changing the logical expression, which
might be the more important one to highlight.

> 
>  www/manager6/grid/PoolMembers.js | 25 +++--
>  1 file changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/www/manager6/grid/PoolMembers.js 
> b/www/manager6/grid/PoolMembers.js
> index 75f20cab..78ccc2a4 100644
> --- a/www/manager6/grid/PoolMembers.js
> +++ b/www/manager6/grid/PoolMembers.js
> @@ -1,4 +1,4 @@
> -Ext.define('PVE.pool.AddVM', {
> +Ext.define('PVE.pool.AddGuest', {
>  extend: 'Proxmox.window.Edit',
> 
>  width: 640,
> @@ -37,7 +37,7 @@ Ext.define('PVE.pool.AddVM', {
>   ],
>   filters: [
>   function(item) {
> - return (item.data.type === 'lxc' || item.data.type === 
> 'qemu') &&item.data.pool !== me.pool;
> + return (me.type === item.data.type) && item.data.pool !== 
> me.pool;

nit: the parentheses are not needed and one might wonder why they are added
only to the left comparison but not the right one.

Either use parentheses for both expression parts for consistency or, and that
would IMO be better here as it's quite a simple expression, use them for neither
part.

>   },
>   ],
>   });
> @@ -84,15 +84,11 @@ Ext.define('PVE.pool.AddVM', {
>   dataIndex: 'name',
>   flex: 1,
>   },
> - {
> - header: gettext('Type'),
> - dataIndex: 'type',
> - },
>   ],
>   });
> 
>   Ext.apply(me, {
> - subject: gettext('Virtual Machine'),
> + subject: gettext(me.type === 'qemu' ? 'Virtual Machine' : 'LXC 
> Container'),
>   items: [
>   vmsField,
>   vmGrid,
> @@ -228,16 +224,25 @@ Ext.define('PVE.grid.PoolMembers', {
>   items: [
>   {
>   text: gettext('Virtual Machine'),

Alternatively we could just rename the label to something that includes both,
e.g. "Virtual Guest" or just "Guest", which is the wording we normally use
when talking about both VMs and CTs.

Did you think about that and explicitly choose against doing so?
If so, it would be great to mention that in the commit message; having some
argumentation there about such things can help to reduce back and forth,
and helps when looking at old history to find out why things are the way
they are.

There are some advantages to your approach, like slightly better discoverability,
but also some for keeping a combined add dialogue, like making it faster to add
a set of guests that includes both CTs and VMs to a pool, or not having to look
up whether a guest was set up as a CT or a VM, and last but not least, a bit
less code.

I did not check this too closely to call the shots now, but maybe take a step
back, look at the alternative(s), and argue for whatever decision you make (this
approach, the rename one, or something completely different).

> - iconCls: 'pve-itype-icon-qemu',
> + iconCls: 'fa fa-fw fa-desktop',
> + handler: function() {
> + var win = Ext.create('PVE.pool.AddGuest', { 
> pool: me.pool, type: 'qemu' });
> + win.on('destroy', reload);
> + win.show();

A nit and not directly related, but for modern windows we use the following
style:

Ext.create('PVE.pool.AddGuest', {
    autoShow: true,
    pool: me.pool,
    type: 'qemu',
    listeners: {
        destroy: reload, // or skip the trivial reload closure and use just `() => store.reload(),`
    },
});


if you touch this many lines this clean-up could be folded in, but one could also
do it with some other clean-ups upfront; both can be fine.
As you're not too experienced with ExtJS it's totally fine for me to leave it as
is, though, the clean-ups here won't gain us that much on code maintainability
anyway.

Oh, and you basically have the callback code twice, just with a different `type`
value, so you could factor this out into a higher-order closure, like:

let getAddGuestCallback = type => {
    return () => Ext.create('PVE.pool.AddGuest', {
        autoShow: true,
        pool: me.pool,
        type,
        listeners: {
            destroy: () => store.reload(),
        },
    });
};

Re: [pve-devel] [PATCH pve-xtermjs v3] termproxy: allow to use unix sockets for auth requests

2024-07-25 Thread Thomas Lamprecht
On 24/07/2024 13:33, Dietmar Maurer wrote:
> Remove ureq, because it does not support unix sockets.
> 
> Signed-off-by: Dietmar Maurer 
> ---
> 
> Changes since v2:
>   split out the command line help text change into patch:
>   [PATCH pve-xtermjs] termproxy: fix the command line help text
> 
> Changes since v1:
> 
>  - use extra --authsocket cli option

FYI: I renamed this and the pre-existing --authport option to --auth-socket and
--auth-port, respectively, with the latter getting a fallback to the old variant
for backward compat.

And yeah, I know that this is borderline useless as it's just an internal tool,
but I got a few spare minutes on the train where starting something more 
involved
was not justified, and making this consistent with the `--port-as-fd` option
and our modern CLI option style in general felt like an OK clean-up to do.

Anyhow, now bumped and uploaded as proxmox-termproxy in version 1.1.0.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] applied: [PATCH pve-xtermjs] termproxy: fix the command line help text

2024-07-25 Thread Thomas Lamprecht
On 25/07/2024 10:58, Wolfgang Bumiller wrote:
> applied, thanks
> 

this was a good stop-gap, but as it wasn't much work I followed it up with
fixing the implementation. While it's just an internal tool, it's still
nicer to have it behave like all CLI tools:

https://git.proxmox.com/?p=pve-xtermjs.git;a=commitdiff;h=4b8568cb4ff20a87bf2859bb0eb63c5afd3831d7;hp=69400e983e949a4f1a307a755b02af5fcd3efe65


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH storage] volume import: assume target API version is at least 9

2024-07-25 Thread Thomas Lamprecht
For the record, we talked about this in person for a bit with the following
outcome:

- there was a bit of a misunderstanding w.r.t. my heavy exaggeration for the
  point's sake, I really did not mean that as an accusation at all, but that's
  now talked out

- it's a good point that the package that was included in the first 7.0 ISO
  release was already new enough API version wise, and we also checked the
  package archives, and there we also got only new enough libpve-storage-perl
  package versions, so we can keep the change as is.
  We also agreed that it would be great to mention such analysis in the commit
  message the next time, and I know Fiona is very thorough with that stuff, and
  that this time it was just not mentioned due to the difference in how upgrade
  requirements and recommendations got interpreted by her and me, so I mention
  this mostly for other readers, as this applies to all of us.

- we might want to document this expectation w.r.t. API backward compat more
  definitively in a more approachable way, but should ensure that it's clarified
  that this is for developers, not users, to avoid users thinking it's always
  fine to upgrade from an outdated point release to a newer major release.

tl;dr: got sorted out and change can be kept as is


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server] fix #5528: override cgroup methods to call systemd via dbus

2024-07-24 Thread Thomas Lamprecht
Am 09/07/2024 um 11:10 schrieb Wolfgang Bumiller:
> Systemd reapplies its known values on reload, so we cannot simply call
> into PVE::CGroup. Call systemd's SetUnitProperties method via dbus
> instead.
> 
> The hotplug and startup code also calculated different values, as one
> operated within systemd's value framework (documented in
> systemd.resource-control(5)) and one worked with cgroup values
> (distinguishing between cgroup v1 and v2 manually).
> 
> This is now unified by overriding `change_cpu_quota()` and
> `change_cpu_shares()` via `PVE::QemuServer::CGroup` which now takes
> systemd-based values and sends those directly via dbus.
> 
> Signed-off-by: Wolfgang Bumiller 
> ---
>  PVE/QemuServer.pm|  4 ++--
>  PVE/QemuServer/CGroup.pm | 51 +++-
>  2 files changed, 52 insertions(+), 3 deletions(-)
> 
>

applied, thanks!
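
For context, the dbus call in question boils down to something like the
following sketch (the unit name format and the chosen property are assumptions
for illustration; the typing helper comes from Net::DBus' :typing exports):

    use Net::DBus qw(:typing);

    my $bus = Net::DBus->system();
    my $systemd = $bus->get_service('org.freedesktop.systemd1')
        ->get_object('/org/freedesktop/systemd1', 'org.freedesktop.systemd1.Manager');
    # runtime=1: only change the running scope, don't persist anything on disk
    $systemd->SetUnitProperties("$vmid.scope", 1, [
        ['CPUWeight', dbus_uint64(100)],
    ]);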


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server v2 1/1] fix #5619: honor link_down setting when hot-plugging nic

2024-07-24 Thread Thomas Lamprecht
Am 23/07/2024 um 16:24 schrieb Stefan Hanreich:
> When detaching and attaching the network device on update, the
> link_down setting is not considered and the network device always gets
> attached to the guest - even if link_down is set.
> 
> Fixes: 3f14f206 ("nic online bridge/vlan change: link disconnect/reconnect")
> Signed-off-by: Stefan Hanreich 
> Reviewed-by: Fiona Ebner 
> ---
> Changes from v1 -> v2:
> * Fixed trailers in commit message
> * Re-added comment describing the reason for setting link_down on
>   changed bridge/tag
> 
> Thanks @Fiona for your suggestions!
> 
>  PVE/QemuServer.pm | 14 ++
>  1 file changed, 6 insertions(+), 8 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH installer v2 16/17] fix #5536: post-hook: add utility for sending notifications after auto-install

2024-07-24 Thread Thomas Lamprecht
Am 23/07/2024 um 16:57 schrieb Aaron Lauterer:
> There are quite a few preparation changes in other sub-crates 
> (auto-installer, installer-common).
> I've only gotten through them for now and haven't looked at the actual 
> post-hook crate stuff.
> 
> Wouldn't it be nicer to split the preparation patches into their own 
> commits? It would make the patch smaller and the history would be a bit 
> clearer, as we wouldn't mix adding the post-hook crate with all the 
> preparation that is needed.


This isn't a binary thing. Sometimes it can be nicer to have some preparatory
stuff up-front, most often if it's not only affecting new code and changes
limited to the thing a patch wants to do. Otherwise it can also be nice to see
what is needed to make it work, especially as preparatory patches often cannot
be reviewed in isolation anyway; one needs to look into the patches that they're
preparing for to actually be able to determine if the thing as a whole makes
sense the way it's implemented, split and used.

As mentioned, this is often subjective; both approaches have some advantages and
disadvantages. IMO the best heuristic is the mentioned "how much is existing
code impacted and how much is this related to the semantic thing the patch
wants to do"; all else is a bit of a balance that different reviewers might
also disagree a bit on. So it's also fine to reject a split or merge if
you, as patch author, really think it would make things worse – ideally with
some argumentation and without hard lines, to avoid "kerfuffle" or getting
fewer reviews in the future ;-)

unrelated and definitely not only applying to you, just as a
gentle reminder: I would like replies on the mailing list to have
unrelated stuff trimmed out and to use inline/bottom posting; that makes
it quicker to read and reply, and also to digest (longer) threads.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH installer v2 11/17] auto-installer: tests: replace manual panic!() with assert_eq!()

2024-07-24 Thread Thomas Lamprecht
Am 23/07/2024 um 13:37 schrieb Christoph Heiss:
> On Tue, Jul 23, 2024 at 01:04:06PM GMT, Aaron Lauterer wrote:
>> Instead of hacking our own pretty print, we could maybe think about using
>> https://crates.io/crates/pretty_assertions
> 
> As discussed offline, I think that this is definitely the way to go.
> Especially when keeping in mind that these tests are going to grow with
> ~every auto-installer feature.

fine by me, it's already packaged, and as long as it's used as a dev-dependency
and inside `#[cfg(test)]` code it's low in impact anyway.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] applied: [PATCH manager] fix #5010: ceph: pool set only changed properties

2024-07-23 Thread Thomas Lamprecht
Am 23/07/2024 um 09:50 schrieb Aaron Lauterer:
> 
> 
> On 2024-07-22 19:02, Thomas Lamprecht wrote:
>>
>> applied, thanks, one question still inline though.
>>
>>
>>> +   if (defined($current_properties->{$setting}) && $value eq 
>>> $current_properties->{$setting}) {
>> hmm, might this cause trouble (or at least noisy warnings) with properties
>> that are defined as integers in the schema (if we even convert those, not
>> sure from top of my head) or is this always in string form here?
> 
> I might be missing some Perl intricacies here, but in all my tests it 
> worked fine.
> 
> The following test also works:
> 
> perl -e 'use strict; use warnings; my $a = "1"; my $b = 1; if ($a eq $b) 
> { print "YAY" };'
> 
> Even if I set `$b = 1.0`

ah yeah, sorry for the noise, just a problem if the values might be `undef`
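
i.e., something like:

    my ($cur, $new) = (undef, 1);
    # "$cur ne $new" alone would trigger "Use of uninitialized value" warnings,
    # so guard with defined() first:
    my $changed = !defined($cur) || $cur ne $new;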


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] partially-applied: [PATCH many v9 00/13] notifications: notification metadata matching improvements

2024-07-22 Thread Thomas Lamprecht
Am 08/07/2024 um 11:37 schrieb Lukas Wagner:
> This patch series attempts to improve the user experience when creating
> notification matchers.
> 
> Some of the noteworthy changes:
>   - Allow setting a custom backup job ID, similar how we handle it for
>   sync/prune jobs in PBS (to allow recognizable names used in matchers)
>   - New metadata fields:
> - job-id: Job ID for backup-jobs or replication-jobs
>   - Add an API that enumerates known notification metadata fields/values
>   - Suggest known fields/values in match rule window
>   - Some code clean up for match rule edit window
>   - Extended the 'exact' match-field mode - it now allows setting multiple
> allowed values, separated via ',':
>   e.g. `match-field exact:type=replication,fencing
> Originally, I created a separate 'list' match type for this, but
> since the semantics for a list with one value and 'exact' mode
> are identical, I decided to just extend 'exact'.
> This is a safe change since there are no values where a ','
> makes any sense (config IDs, hostnames)
> 
> NOTE: Might need a versioned break, since the widget-toolkit patches
> depend on new APIs provided by pve-manager. If the API is not present,
> creating matchers with 'match-field' does not work (cannot load lists
> of known values/fields)
> 
> Inter-Dependencies:
>   - the widget-toolkit dep in pve-manager needs to be bumped
> to at least 4.1.4
> (we need "utils: add mechanism to add and override translatable 
> notification event
> descriptions in the product specific UIs", otherwise the UI breaks
> with the pve-manager patches applied) --> already included a patch for
> this
>   - widget-toolkit relies on a new API endpoint provided by pve-manager:
> --> we require a versioned break in widget-toolkit on pve-manager

not sure if I really want to do that as it's not really useful for clusters
anyway, where the loaded wtk can be newer than the manager of a (proxied)
node. But in any case: many thanks for mentioning it.

>   - pve-manager needs bumped pve-guest-common (thx @Fabian)
> 
> Changelog:
>   - v9: fix typos in commit message, add @Max's R-b trailer - thx!
>   - v8: incorporate feedback from @Fabian, thx a lot!
> - Made 'job-id' API param usable by root@pam only - this should prevent
>   abuse by spoofing job-id, potentially bothering other users with bogus
>   notifications.
> - Don't set 'job-id' when starting a backup job via 'Run now' in the UI
> - Add a note to the docs explaining when job-id is set and when not.
> - Drop already applied patches
>   - v7: incorporated some more feedback from @Fiona, thx!
> - Fixed error when switching from 'exact' to 'regex' if the text field
>   was empty
> - rebased to latest master
> - 'backport' doc improvements from PBS
> - bumped widget-toolkit dep
>   - v6: incorporate feedback from @Fiona, thx!
> - rename 'id' -> 'job-id' in VZDump API handler
> - consolidate 'replication-job'/'backup-job' to 'job-id'
> - Move 'job-id' setting to advanced tab in backup job edit.
> - Don't use 'internal' flag to mark translatable fields, since
>   the only field where that's necessary is 'type' for now - so
>   just add a hardcoded check
>   - v5:
> - Rebased onto latest master, resolving some small conflict
>   - v4:
> - widget-toolkit: break out changes for the utils module so that they
>   can be applied ahead of time to ease dep bumping
> - don't show Job IDs in the backup/replication job columns
>   - v3:
> - Drop already applied patches for `proxmox`
> - Rebase onto latest master - minor conflict resolution was needed
>   - v2:
> - include 'type' metadata field for forwarded mails
>   --> otherwise it's not possible to match them
> - include Maximilliano's T-b trailer in UI patches
> 
> pve-guest-common:
> 
> Lukas Wagner (1):
>   vzdump: common: allow 'job-id' as a parameter without being in schema
> 
>  src/PVE/VZDump/Common.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> 
> pve-manager:
> 
> Lukas Wagner (5):
>   api: jobs: vzdump: pass job 'job-id' parameter
>   ui: dc: backup: allow to set custom job id in  advanced settings
>   api: notification: add API for getting known metadata fields/values
>   ui: utils: add overrides for translatable notification fields/values
>   d/control: bump proxmox-widget-toolkit dependency to 4.1.4
> 
>  PVE/API2/Backup.pm  |   2 +-
>  PVE/API2/Cluster/Notifications.pm   | 139 
>  PVE/API2/VZDump.pm  |  13 +-
>  PVE/Jobs/VZDump.pm  |   4 +-
>  PVE/VZDump.pm   |   6 +-
>  debian/control  |   2 +-
>  www/manager6/Utils.js   |  11 ++
>  www/manager6/dc/Backup.js   |   4 -
>  www/manager6/panel/BackupAdvancedOptions.js |  23 
>  9 f

[pve-devel] applied: [PATCH qemu-server] fix #5574: api: fix permission check for 'spice' usb port

2024-07-22 Thread Thomas Lamprecht
Am 08/07/2024 um 13:56 schrieb Dominik Csapak:
> With the last change in the permission check, I accidentally broke the
> check for 'spice' host value, since in the if/elsif/else this will fall
> through to the else case which was only intended for when neither 'host'
> nor 'mapping' was set.
> 
> This made 'spice' only settable by root@pam since there we return early.
> 
> To fix this, move the spice check into the 'host' branch, but only error
> out in case it's not spice.
> 
> Fixes: e3971865 (enable cluster mapped USB devices for guests)
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/API2/Qemu.pm | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
>

applied, thanks!
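
The shape of the fixed check, as a rough sketch (the actual ACL checks and
error messages are condensed into placeholders like $is_root here):

    if (my $host = $device->{host}) {
        # 'spice' is a pseudo value and not a real host device, so it must
        # not fall through to the generic else branch below
        if ($host !~ m/^spice$/i) {
            die "only root can set arbitrary host devices\n" if !$is_root;
        }
    } elsif ($device->{mapping}) {
        # check the per-mapping ACLs here, as before
    } else {
        die "either 'host' or 'mapping' must be set\n";
    }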


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH manager v9 02/13] api: jobs: vzdump: pass job 'job-id' parameter

2024-07-22 Thread Thomas Lamprecht
Am 08/07/2024 um 11:38 schrieb Lukas Wagner:
> This allows us to access the backup job id in the send_notification
> function, where we can set it as metadata for the notification.
> The 'job-id' parameter can only be used by 'root@pam' to prevent
> abuse. This has the side effect that manually triggered backup jobs
> cannot have the 'job-id' parameter at the moment. To mitigate that,
> manually triggered backup jobs could be changed so that they
> are not performed by a direct API call by the UI, but by requesting
> pvescheduler to execute the job in the near future (similar to how
> manually triggered replication jobs work).
> 
> Signed-off-by: Lukas Wagner 
> Reviewed-by: Max Carrara 
> ---
>  PVE/API2/Backup.pm |  2 +-
>  PVE/API2/VZDump.pm | 13 +++--
>  PVE/Jobs/VZDump.pm |  4 +++-
>  PVE/VZDump.pm  |  6 +++---
>  4 files changed, 18 insertions(+), 7 deletions(-)
> 
>

applied with the d/control bump for guest-common squashed in, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH pve-guest-common v9 01/13] vzdump: common: allow 'job-id' as a parameter without being in schema

2024-07-22 Thread Thomas Lamprecht
Am 08/07/2024 um 11:38 schrieb Lukas Wagner:
> 'job-id' is passed when a backup as started as a job and will be
> passed to the notification system as matchable metadata. It
> can be considered 'internal'.
> 
> Signed-off-by: Lukas Wagner 
> Reviewed-by: Max Carrara 
> ---
>  src/PVE/VZDump/Common.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH common] fix #5529: cgroup: correctly handle change_cpu_quota without a quota

2024-07-22 Thread Thomas Lamprecht
Am 09/07/2024 um 11:09 schrieb Wolfgang Bumiller:
> The function can be called with
> - neither quota nor period
> - only a period (quota will be 'max')
> - both
> 
> $quota was therefore defaulted to 'max' and the check for whether
> values were provided should use $period instead of $quota.
> Also move the defaulting-assignment into the condition for clarity.
> 
> Signed-off-by: Wolfgang Bumiller 
> ---
>  src/PVE/CGroup.pm | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied, thanks!
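
To spell out the accepted call patterns as a condensed sketch (the cgroup v1/v2
write logic is hidden behind a $write_cpu_max placeholder):

    sub change_cpu_quota {
        my ($self, $quota, $period) = @_;
        if (defined($period)) {
            $quota //= 'max'; # only a period given -> quota stays unlimited
            $write_cpu_max->($self, "$quota $period");
        } else {
            $write_cpu_max->($self, 'max'); # neither given -> reset to unlimited
        }
    }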


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH storage] btrfs: log executed command on failures

2024-07-22 Thread Thomas Lamprecht
Am 09/07/2024 um 13:49 schrieb Maximiliano Sandoval:
> Having the complete command printed out makes debugging easier.
> 
> Signed-off-by: Maximiliano Sandoval 
> ---
>  src/PVE/Storage/BTRFSPlugin.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH proxmox-offline-mirror v2 1/2] build: use cargo wrapper

2024-07-22 Thread Thomas Lamprecht
Am 10/07/2024 um 13:57 schrieb Fabian Grünbichler:
> for package builds to ensure all common flags are actually set.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> 
> Notes:
> v2: symlink wrapper config in place
> 
>  Makefile  |  9 +++--
>  debian/rules  | 11 ---
>  docs/Makefile |  4 ++--
>  3 files changed, 17 insertions(+), 7 deletions(-)
> 
>

applied both patches, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager] fix #5010: ceph: pool set only changed properties

2024-07-22 Thread Thomas Lamprecht
Am 09/07/2024 um 13:41 schrieb Aaron Lauterer:
> By only setting properties that have changed, we can avoid potential
> errors in the task.
> 
> For example, if one configures the "nosizechange" property on a pool, to
> prevent accidental size changes, the task will now only error if the
> user is actually trying to change the size.
> 
> Prior to this patch, we would always try to set all parameters, even if
> they were the same value. In the above example, this would result in the
> task ending in error state, as we are not allowed to change the size.
> 
> To disable size changing you can run the following command:
> ceph osd pool set {pool} nosizechange 1
> 
> Signed-off-by: Aaron Lauterer 
> ---
> 
> I am not sure if we want to log every property that is skipped. It makes
> visible what is done, but is also a bit noisy.
> 
>  PVE/Ceph/Tools.pm | 8 
>  1 file changed, 8 insertions(+)
> 


applied, thanks, one question still inline though.


> + if (defined($current_properties->{$setting}) && $value eq 
> $current_properties->{$setting}) {

hmm, might this cause trouble (or at least noisy warnings) with properties
that are defined as integers in the schema (if we even convert those, not
sure off the top of my head), or is this always in string form here?

> + print "skipping '${setting}', did not change\n";
> + delete $param->{$setting};
> + next;
> + }
> +
>   print "pool $pool: applying $setting = $value\n";
>   if (my $err = $set_pool_setting->($pool, $setting, $value, $rados)) {
>   print "$err";



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH manager 0/2] fix #5570 ceph mds: allow custom MDS IDs from the UI

2024-07-22 Thread Thomas Lamprecht
Am 10/07/2024 um 14:42 schrieb Aaron Lauterer:
> The ID for the MDS cannot start with a number [0]. The first patch adds
> a check for this.
> 
> The second patch is the actual fix, by reworking the edit window when
> adding new MDS'.
> 
> By allowing the users to set the name of the MDS directly, we can catch
> the situation where the hostname starts with a number and let the user
> decide how it should be named. Similar to what they can already do on
> the CLI.
> 
> 
> Another approach would be to leave the UI as is and catch the situation
> in the backend. If an ID that starts with a number is detected, we could
> prepend it with a default string.
> 
> [0] https://docs.ceph.com/en/latest/man/8/ceph-mds/
> 
> Aaron Lauterer (2):
>   api: ceph mds: avoid creating MDS when ID starts with number
>   fix#5570 ui: ceph: make MDS ID configurable
> 
>  PVE/API2/Ceph/MDS.pm |  2 ++
>  www/manager6/ceph/ServiceList.js | 21 +++--
>  2 files changed, 13 insertions(+), 10 deletions(-)
> 


applied with a small s/service_id/serviceID/ style fix squashed into the
UI patch plus added the missing space in the subject between "fix" and the
issue ID, thanks!

Oh, I also made a follow-up [0] that makes the field required now to avoid
an API error if the user clears the field before submitting.

[0]: 
https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=d69d4ed8cb276b492a5fe7883f94db777e06d2b2


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH installer v2 0/6] auto-installer: add option for providing hashed root password

2024-07-22 Thread Thomas Lamprecht
Am 15/07/2024 um 09:56 schrieb Christoph Heiss:
> This series adds a new answer option `global.root_password_hashed`
> for the auto-installer, enabling administrators to specify the root
> password of the new installation in a hashed format - as generated by
> e.g. mkpasswd(1) - instead of plain-text.
> 
> Administrators/users might want to avoid passing along a plain-text
> password with the different answer-fetching methods supported by the
> auto-installer, for obvious reasons.
> 
> While this of course does not provide full security, sending a hashed
> password might still be preferred by administrators over plain text.
> 
> Tested by installing using the GUI and TUI (to ensure no regressions
> can happen) and using the auto-installer, once with `root_password` set
> (again testing for potential regressions) and once with
> `global.root_password_hashed` set instead, testing the new
> functionality.
> 
> First two patches are small cleanups and may be applied independently.
> 
> v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063949.html
> 
> Notable changes v1 -> v2:
>   * rebased on latest master
>   * fixed rebase mistake
>   * merged previous patch #4/#5 for consistency across crates
>   * improved validation in auto-installer
> 
> Christoph Heiss (6):
>   common: move `PasswordOptions` type to tui crate
>   tui-installer: remove `Debug` implementation for password options
>   low-level: change root password option to contain either plaintext or
> hash
>   {auto,tui}-installer: adapt to new `root_password` plain/hashed setup
> option
>   auto-installer: add new `global.root_password_hashed` answer option
>   auto-installer: add test for hashed root password option
> 
>  Proxmox/Install.pm| 25 ---
>  Proxmox/Install/Config.pm | 20 ---
>  proxinstall   |  4 +--
>  proxmox-auto-installer/src/answer.rs  |  3 ++-
>  proxmox-auto-installer/src/utils.rs   | 21 ++--
>  .../resources/parse_answer/disk_match.json|  2 +-
>  .../parse_answer/disk_match_all.json  |  2 +-
>  .../parse_answer/disk_match_any.json  |  2 +-
>  .../parse_answer/hashed_root_password.json| 20 +++
>  .../parse_answer/hashed_root_password.toml| 14 +++
>  .../tests/resources/parse_answer/minimal.json |  2 +-
>  .../resources/parse_answer/nic_matching.json  |  2 +-
>  .../resources/parse_answer/specific_nic.json  |  2 +-
>  .../tests/resources/parse_answer/zfs.json |  2 +-
>  proxmox-installer-common/src/options.rs   | 15 ---
>  proxmox-installer-common/src/setup.rs | 12 +++--
>  proxmox-tui-installer/src/main.rs |  4 +--
>  proxmox-tui-installer/src/options.rs  | 20 ---
>  proxmox-tui-installer/src/setup.rs| 10 ++--
>  19 files changed, 140 insertions(+), 42 deletions(-)
>  create mode 100644 
> proxmox-auto-installer/tests/resources/parse_answer/hashed_root_password.json
>  create mode 100644 
> proxmox-auto-installer/tests/resources/parse_answer/hashed_root_password.toml
> 


applied series while taking the liberty to record Theodor's feedback
in the form of a T-b tag, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH pve-manager 2/2] sdn: ipam: fix editing custom mappings

2024-07-22 Thread Thomas Lamprecht
Am 17/07/2024 um 15:06 schrieb Stefan Hanreich:
> Currently custom mappings cannot be edited, due to them having no VMID
> value. The VMID parameter was always sent by the frontend to the
> update call - even if it was empty - leading to validation failure on
> the backend. Fix this by only sending the vmid parameter when it is
> actually set.
> 
> Signed-off-by: Stefan Hanreich 
> ---
>  www/manager6/tree/DhcpTree.js | 17 +++--
>  1 file changed, 11 insertions(+), 6 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH qemu-server 1/2] config: net: avoid duplicate ipam entries on nic update

2024-07-22 Thread Thomas Lamprecht
Am 17/07/2024 um 15:06 schrieb Stefan Hanreich:
> Updating the NIC of a VM when the following conditions were met:
> * VM is turned off
> * NIC is on a bridge that uses automatic dhcp
> * Leave bridge unchanged
> 
> led to duplicate IPAM entries for the same network device.
> 
> This is due to the fact that add_next_free_cidr always ran on
> applying pending network changes.
> 
> Now we only add a new ipam entry if either:
> * the value of the bridge or mac address changed
> * the network device has been newly added
> 
> This way no duplicate IPAM entries should get created.
> 
> Signed-off-by: Stefan Hanreich 
> ---
>  PVE/QemuServer.pm | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
>

applied, thanks!
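
The guard boils down to something like this sketch (variable names and the
add_next_free_cidr argument list are illustrative only):

    # only allocate a new IPAM entry if the NIC is new or bridge/MAC changed
    my $old_net = $old_value ? PVE::QemuServer::parse_net($old_value) : undef;
    my $new_net = PVE::QemuServer::parse_net($new_value);
    if (
        !$old_net
        || $old_net->{bridge} ne $new_net->{bridge}
        || $old_net->{macaddr} ne $new_net->{macaddr}
    ) {
        # arguments as in the existing call site
        PVE::Network::SDN::Vnets::add_next_free_cidr($new_net->{bridge}, $name, $new_net->{macaddr}, $vmid);
    }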


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH manager 1/2] www: utils: fix `maxcpu` validity check in render_hostcpu()

2024-07-22 Thread Thomas Lamprecht
Am 17/07/2024 um 14:49 schrieb Christoph Heiss:
> Comparing with Proxmox.Utils.render_cpu(), this seems like just a slight
> oversight in the condition. Fix it by aligning it with how it is done in
> Proxmox.Utils.render_cpu() for consistency.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  www/manager6/Utils.js | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied both patches, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH manager 1/5] fix typos in comments

2024-07-22 Thread Thomas Lamprecht
Am 17/07/2024 um 14:16 schrieb Maximiliano Sandoval:
> Signed-off-by: Maximiliano Sandoval 
> ---
>  PVE/API2/Nodes.pm   | 2 +-
>  PVE/APLInfo.pm  | 2 +-
>  PVE/CLI/pveceph.pm  | 2 +-
>  PVE/Service/pvestatd.pm | 2 +-
>  PVE/Status/Plugin.pm| 2 +-
>  www/manager6/Parser.js  | 2 +-
>  www/manager6/ceph/Services.js   | 2 +-
>  www/manager6/dc/ACMEPluginEdit.js   | 2 +-
>  www/manager6/form/BandwidthSelector.js  | 2 +-
>  www/manager6/lxc/ResourceEdit.js| 2 +-
>  www/manager6/qemu/OSDefaults.js | 2 +-
>  www/manager6/qemu/PCIEdit.js| 2 +-
>  www/manager6/qemu/ProcessorEdit.js  | 2 +-
>  www/manager6/window/Migrate.js  | 2 +-
>  www/manager6/window/Restore.js  | 2 +-
>  www/manager6/window/TreeSettingsEdit.js | 2 +-
>  www/u2f-api.js  | 2 +-
>  17 files changed, 17 insertions(+), 17 deletions(-)
> 
>

applied series, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH proxmox-firewall 1/3] cargo: bump dependencies of proxmox-ve-config

2024-07-22 Thread Thomas Lamprecht
Am 03/07/2024 um 11:17 schrieb Stefan Hanreich:
> Signed-off-by: Stefan Hanreich 
> ---
>  proxmox-ve-config/Cargo.toml | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied series, with a merge conflict in the context of the second patch
addressed, and updated the dependencies to the newer versions that got
released since you sent this patch; also bumped proxmox-sys for the
proxmox-firewall sub-crate.

thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH network/container/qemu-server v2 0/3] drop unused `firewall` argument to {add, del}_bridge_fdb()

2024-07-22 Thread Thomas Lamprecht
Am 05/07/2023 um 16:38 schrieb Christoph Heiss:
> While working on this code, I noticed that the `firewall` argument is
> never used (nor even declared) [0] in both
> PVE::Network::{add,del}_bridge_fdb().
> 
> Thus drop it everywhere and avoid needlessly passing around things which
> are never used anyway.
> 
> Did some quick smoke-testing and everything kept working fine, but as
> there are really no functional changes, this should not affect anything.
> 
> [ Basically just a re-spin of [1], just refreshed all patches such that
> they apply cleanly again. No actual changes between v1 and v2. ]
> 
> [0] 
> https://git.proxmox.com/?p=pve-common.git;a=blob;f=src/PVE/Network.pm;h=a4f5ba96;hb=HEAD#l299
> [1] https://lists.proxmox.com/pipermail/pve-devel/2023-March/055970.html
> 
> pve-network:
> 
> Christoph Heiss (1):
>   sdn: zones: drop unused `firewall` argument to {add, del}_bridge_fdb()
> 
>  src/PVE/Network/SDN/Zones.pm | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> pve-container:
> 
> Christoph Heiss (1):
>   net: drop unused `firewall` argument to add_bridge_fdb()
> 
>  src/PVE/LXC.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> qemu-server:
> 
> Christoph Heiss (1):
>   net: drop unused `firewall` argument to {add, del}_bridge_fdb()
> 
>  PVE/QemuServer.pm | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> --
> 2.39.2
> 
> 

for the record, this was replaced by a series from Alexandre
https://lore.proxmox.com/pve-devel/20230926073942.3212260-1-aderum...@odiso.com/


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH proxmox-firewall v3 1/1] service: flush firewall rules on force disable

2024-07-22 Thread Thomas Lamprecht
Am 17/07/2024 um 15:16 schrieb Stefan Hanreich:
> When disabling the nftables firewall again, there is a race condition
> where the nftables ruleset never gets flushed and persists after
> disabling.
> 
> The nftables firewall update loop does a noop when the force disable
> file exists. It only flushes the ruleset when nftables is disabled in
> the configuration file but the force disable file does not yet exist.
> 
> This can lead to the following situation:
> 
> * nftables is activated and created its ruleset
> * user switches from nftables firewall back to iptables firewall
> * pve-firewall runs and creates the force disable file
> * proxmox-firewall sees that the file exists and does nothing
> 
> Reported-by: Hannes Laimer 
> Signed-off-by: Stefan Hanreich 
> ---
> Changes from v2 to v3:
> * Use proper debug output formatter
> 
> Changes from v1 to v2:
> * Removed misleading/wrong section about the probability of this
>   happening
> * Added a detailed description of the scenario this commit prevents
> 
>  proxmox-firewall/src/bin/proxmox-firewall.rs | 4 
>  1 file changed, 4 insertions(+)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] partially-applied-series: [PATCH v2 pve-manager 00/10] Ceph Build Commit in UI

2024-07-22 Thread Thomas Lamprecht
Am 01/07/2024 um 16:10 schrieb Max Carrara:
> Ceph Build Commit in UI - Version 2
> ===
> 
> Notable Changes since v1
> 
> 
>   * Use camelCase instead of snake_case for new functions / variables
> as per our style guide [0] (thanks Lukas!)
>   * Refrain from using `const` for things that aren't actual constants
> as per our style guide [1] (thanks Lukas!)
>   * NEW: Patch 09: Increase the default width of the version field in
> the OSD tree so that longer strings are immediately readable without
> needing to adjust the column widths manually
> --> e.g. "18.2.2 (e9fe820e7 -> 69ce99eba)" takes up a lot of space
> in the column
>   * NEW: Patch 10: Include Ceph build commit in the version string
> which is part of the object of the `ceph/osd/{osdid}/metadata` call
> 
> For a detailed list of changes, please see the comments in the
> individual patches.
> 
> NOTE: I added Lukas's T-b and R-b tags to all patches except the new
> ones, as mentioned in a reply to v1 [2].
> 
> Older Versions
> --
> 
> v1: https://lists.proxmox.com/pipermail/pve-devel/2024-April/063772.html
> 
> References
> --
> 
> [0]: https://pve.proxmox.com/wiki/Javascript_Style_Guide#Casing
> [1]: https://pve.proxmox.com/wiki/Javascript_Style_Guide#Variables
> [2]: https://lists.proxmox.com/pipermail/pve-devel/2024-June/064084.html
> 
> Summary of Changes
> --
> 
> Max Carrara (10):
>   ceph: tools: refactor installation check as guard clause
>   ceph: tools: parse Ceph version in separate sub and update regex
>   ceph: services: remove old cluster broadcast
>   ceph: services: refactor version existence check as guard clause
>   utils: align regex of parse_ceph_version with Perl equivalent

applied above 5 clean-up patches already, thanks!

>   ui: ceph: services: parse and display build commit
>   api: ceph: add build commit of host to Ceph osd index endpoint data
>   ui: ceph: osd: rework rendering of version field & show build commit
>   ui: ceph: osd: increase width of version column
>   api: ceph: change version format in OSD metadata endpoint
> 
>  PVE/API2/Ceph/OSD.pm |  9 -
>  PVE/Ceph/Services.pm | 38 ++--
>  PVE/Ceph/Tools.pm| 59 ++--
>  www/manager6/Utils.js| 17 -
>  www/manager6/ceph/OSD.js | 57 +-
>  www/manager6/ceph/ServiceList.js | 32 +
>  www/manager6/ceph/Services.js| 14 +++-
>  7 files changed, 170 insertions(+), 56 deletions(-)
> 





Re: [pve-devel] [PATCH v2 pve-manager 06/10] ui: ceph: services: parse and display build commit

2024-07-22 Thread Thomas Lamprecht
Am 01/07/2024 um 16:10 schrieb Max Carrara:
> This commit adds `PVE.Utils.parseCephBuildCommit`, which can be used
> to get the full hash "eccf199d..." in parentheses from a string like
> the following:
> 
>   ceph version 17.2.7 (eccf199d63457659c09677399928203b7903c888) quincy 
> (stable)
> 
> This hash is displayed and taken into account when comparing monitor
> and manager versions in the client. Specifically, the shortened build
> commit is now displayed in parentheses next to the version for both
> monitors and managers like so:
> 
>   18.2.2 (abcd1234)
> 
> Should the build commit of the running service differ from the one
> that's installed on the host, the newer build commit will also be
> shown in parentheses:
> 
>   18.2.2 (abcd1234 -> 5678fedc)
> 
> The icon displayed for running a service with an outdated build is the
> same as for running an outdated version. The conditional display of
> cluster health-related icons remains the same otherwise.
> 
> The Ceph summary view also displays the hash and will show a warning
> if a service is running with a build commit that doesn't match the one
> that's advertised by the host.
> 
> Signed-off-by: Max Carrara 
> Tested-by: Lukas Wagner 
> Reviewed-by: Lukas Wagner 
> ---
> Changes v1 --> v2:
>   * use camelCase instead of snake_case (thanks Lukas!)
>   * use more descriptive variable names (thanks Lukas!)
>   * use `let` instead of `const` for variables where applicable (thanks 
> Lukas!)
> 
>  www/manager6/Utils.js| 14 ++
>  www/manager6/ceph/ServiceList.js | 32 ++--
>  www/manager6/ceph/Services.js| 14 +-
>  3 files changed, 53 insertions(+), 7 deletions(-)
> 
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 74e46694..f2fd0f7e 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -128,6 +128,20 @@ Ext.define('PVE.Utils', {
>   return undefined;
>  },
>  
> +parseCephBuildCommit: function(service) {
> + if (service.ceph_version) {
> + // See PVE/Ceph/Tools.pm - get_local_version
> + const match = service.ceph_version.match(
> + /^ceph.*\sv?(?:\d+(?:\.\d+)+)\s+(?:\(([a-zA-Z0-9]+)\))/,
> + );
> + if (match) {
> + return match[1];
> + }
> + }
> +
> + return undefined;
> +},
> +
>  compare_ceph_versions: function(a, b) {
>   let avers = [];
>   let bvers = [];
> diff --git a/www/manager6/ceph/ServiceList.js 
> b/www/manager6/ceph/ServiceList.js
> index 76710146..d994aa4e 100644
> --- a/www/manager6/ceph/ServiceList.js
> +++ b/www/manager6/ceph/ServiceList.js
> @@ -102,21 +102,41 @@ Ext.define('PVE.node.CephServiceController', {
>   if (value === undefined) {
>   return '';
>   }
> +
> + let buildCommit = PVE.Utils.parseCephBuildCommit(rec.data) ?? '';
> +
>   let view = this.getView();
> - let host = rec.data.host, nodev = [0];
> + let host = rec.data.host;
> +
> + let versionNode = [0];
> + let buildCommitNode = '';
>   if (view.nodeversions[host] !== undefined) {
> - nodev = view.nodeversions[host].version.parts;
> + versionNode = view.nodeversions[host].version.parts;
> + buildCommitNode = view.nodeversions[host].buildcommit;
>   }
>  
> + let bcChanged =

I'd prefer the more telling `buildCommitChanged` variable name here.

> + buildCommit !== '' &&
> + buildCommitNode !== '' &&
> + buildCommit !== buildCommitNode;

above hunk and... 

> +
>   let icon = '';
> - if (PVE.Utils.compare_ceph_versions(view.maxversion, nodev) > 0) {
> + if (PVE.Utils.compare_ceph_versions(view.maxversion, versionNode) > 0) {
>   icon = PVE.Utils.get_ceph_icon_html('HEALTH_UPGRADE');
> - } else if (PVE.Utils.compare_ceph_versions(nodev, value) > 0) {
> + } else if (PVE.Utils.compare_ceph_versions(versionNode, value) > 0) {
>   icon = PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
> - } else if (view.mixedversions) {
> + } else if (view.mixedversions && !bcChanged) {
>   icon = PVE.Utils.get_ceph_icon_html('HEALTH_OK');
>   }
> - return icon + value;
> +
> + let buildCommitPart = buildCommit.substring(0, 9);
> + if (bcChanged) {
> + const arrow = '';
> + icon ||= PVE.Utils.get_ceph_icon_html('HEALTH_OLD');
> + buildCommitPart = `${buildCommit.substring(0, 
> 9)}${arrow}${buildCommitNode.substring(0, 9)}`;
> + }

... most of the above hunk might be better factored out into a helper
function, as this is basically 1:1 duplication here and in patch 08/10.


The function could, e.g., take both the current and the new commit as
parameters and return the rendered build commit (buildCommitPart) plus a
boolean indicating whether it should be interpreted as changed (updated?).
That could be in the form of an array or object and then be destructured here.

also, maybe render this as `build ${buildCommit.substring(0

Re: [pve-devel] [pbs-devel] [PATCH manager v2 07/12] api: add routes for webhook notification endpoints

2024-07-22 Thread Thomas Lamprecht
just chiming in on the perl dereference style

Am 22/07/2024 um 11:50 schrieb Max Carrara:
 +  %$webhook_properties,
>>> Would prefer `$webhook_properties->%*` here (postfix dereferencing) -
>>> even though not explicitly stated in our style guide, we use that kind
>>> of syntax for calling subroutines behind a reference, e.g.
>>> `$foo->($arg)` instead of `&$foo($arg)`.
>>>
>> I kinda prefer the brevity of the prefix variant in this case. Are there
>> any pitfalls/problems with the prefix that I'm not aware of? If not, I'd 
>> prefer
>> to keep this as is, I used the syntax in many other spots in this file 😉
> I personally have no hard feelings if you keep it tbh. Postfix
> dereference is mainly useful if you have e.g. a nested hash (or rather,
> makes more sense) because of how the code is usually read. For example,
> 
> %$foo->{bar}->{baz}

IIRC the above cannot work, and even if it did, it still might benefit from
being written as `%{$foo->{bar}->{baz}}`

> 
> vs
> 
> $foo->{bar}->{baz}->%*
> 
> I'd argue that the main benefit is that it's easier to read for people
> who aren't as familiar with Perl, but before this gets too bikesheddy,
> I'm personally fine if you keep it as-is for simple cases like the above
> :P

It can often be a bit easier to read, as you can go from left to right
without having to backtrack to check what the dereferencing actually
affects, though you can get used to both, so of course it can be a bit
subjective.

For variables that are dereferenced as a whole, i.e. not a hash or array
sub-element of them, it's definitely fine to use the `%$foo` style, as it
can't really be confusing, and we already use that all over the place.

For dereferencing a sub-element, I'd slightly prefer the newer variant,
i.e. `$foo->{bar}->%*` for a hash or `$foo->{bar}->@*` for a list.
You could also convert to this variant when touching lines anyway, but I
do not think this is such a big style issue to make that a must or even
do such transformations for their own sake.
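
To make the two styles concrete, here is a small self-contained sketch with
contrived data, purely for illustration:

    use strict;
    use warnings;

    my $cfg = {
        properties => { mdev => 1, path => '0000:01:00.0' },
    };

    # whole-variable dereference: the prefix style is unambiguous and fine
    my %copy = %$cfg;

    # sub-element, prefix style: needs explicit braces and reads inside-out,
    # one has to backtrack to see what actually gets dereferenced
    for my $prop (sort keys %{ $cfg->{properties} }) {
        print "prefix: $prop\n";
    }

    # sub-element, postfix style: reads strictly from left to right
    for my $prop (sort keys $cfg->{properties}->%*) {
        print "postfix: $prop\n";
    }

Note that postfix dereferencing is only stable since Perl 5.24, which all
currently supported Debian releases ship anyway.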




[PATCH] guest-agent: document allow-rpcs in config file section

2024-07-18 Thread Thomas Lamprecht
While the `allow-rpcs` option is documented in the CLI options
section, it was missing in the section about the configuration file
syntax.

And while it's mentioned that "the list of keys follows the command line
options", having `block-rpcs` there but not `allow-rpcs` seems like
being a potential source of confusion; and as it's cheap to add let's
just do so.

Signed-off-by: Thomas Lamprecht 
---
 docs/interop/qemu-ga.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/interop/qemu-ga.rst b/docs/interop/qemu-ga.rst
index 72fb75a6f5..dd4245ece8 100644
--- a/docs/interop/qemu-ga.rst
+++ b/docs/interop/qemu-ga.rst
@@ -131,6 +131,7 @@ fsfreeze-hook  string
 statedir   string
 verboseboolean
 block-rpcs string list
+allow-rpcs string list
 =  ===
 
 See also
-- 
2.39.2
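
For reference, a minimal sketch of what the corresponding entry in the
configuration file could look like; the path is the usual default and the
listed RPC names are only illustrative:

    # /etc/qemu/qemu-ga.conf
    [general]
    verbose = 0
    # enable only the explicitly listed RPCs, all others stay disabled
    allow-rpcs = guest-ping,guest-info,guest-get-time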





[pve-devel] applied: [PATCH proxmox-i18n] es: update translations

2024-07-18 Thread Thomas Lamprecht
Am 18/07/2024 um 11:32 schrieb Maximiliano Sandoval:
> Signed-off-by: Maximiliano Sandoval 
> ---
>  es.po | 98 +++
>  1 file changed, 38 insertions(+), 60 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] [RFC pve-storage 01/36] plugin: base: remove old fixme comments

2024-07-17 Thread Thomas Lamprecht
Am 17/07/2024 um 11:39 schrieb Max Carrara:
> These have been around since 2012 - suffice to say they're not needed
> anymore.

That's really not a good argument though? Just because nobody checked
those closely for a long time it does not mean that they became
magically irrelevant.

Look, it can be (and probably _is_) fine to remove them, but stating
that this is fine just because they have not been touched in a while is a
rather dangerous tactic. Someone had some thoughts when adding this; they
might still be relevant or not, but it's definitively *not*
"suffice to say" that they aren't just because they have been around
since 2012. (iSCSI) portals and local storage still exist and do not work
really differently compared to 12 years ago.

The node restriction FIXME comment can e.g. be removed as we delete any
such restriction in "parse_config", mentioning that as a reason would
make doing so fine, saying "it's old and unchanged" doesn't.

The storage portal one could be argued with not being defined elsewhere
and all use cases being covered by pve-storage-portal-dns, so removing
it won't hurt, especially as we can always recover it from history.

I think your intention quite surely matched those and meant well, but
removing something just because it's old is on its own IMO a bit of a
red flag, so one should not get too used to that argumentation style even
if it's only for removing comments or other stuff that won't change semantics.

Anyhow, do not let this demotivate you from your clean-up efforts, they
are still quite appreciated.
While removing dead code is good, the argumentation behind it should be
sound, and I only write this long tirade (sorry) as we got bitten by
some innocent-looking changes stemming from similar argumentation in
the past.





[pve-devel] applied: [PATCH access-control] api: ACL update: fix handling of Permissions.Modify

2024-07-16 Thread Thomas Lamprecht
Am 11/07/2024 um 13:44 schrieb Fabian Grünbichler:
> with 8.x, the scope of non-"Permissions.Modify"-based ACL update privileges
> were reduced (so that users with for example, VM.Allocate on a VM could only
> delegate their own privileges, but not arbitrary other ones). that additional
> logic had a wrong guard and was accidentally triggered for calls where the 
> user
> had the "Permissions.Modify" privilege on the modified ACL path, but without
> propagation set.
> 
> a user with "Permissions.Modify" on a path should be able to set arbitrary
> ACLs for that path, even without propagation.
> 
> reported on the forum:
> 
> https://forum.proxmox.com/threads/privilege-permissions-modify-on-pool-will-not-propagade-to-contained-vms-anymore.151032/

Could be:

Reported on the forum: https://forum.proxmox.com/threads/151032/

> 
> Fixes: 46bfd59dfca655b263d1f905be37d985416717ac ("acls: restrict 
> less-privileged ACL modifications")
> 

please no extra newlines between trailers like Fixes or your S-o-b.

> Signed-off-by: Fabian Grünbichler 
> ---
>  src/PVE/API2/ACL.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
>

applied, with above commit message nits addressed and reflowed to <= 70 cc,
thanks!




Re: [pve-devel] [PATCH installer 00/14] fix #5536: implement post-(auto-)installation notification mechanism

2024-07-16 Thread Thomas Lamprecht
Am 15/07/2024 um 16:31 schrieb Christoph Heiss:
> With that in mind it definitely could come in handy. Or maybe a separate
> object "disks"/"other-disks"/etc. entirely? So as not have to filter out
> the (non-)bootdisks again on the receiving end.

Could be fine too, albeit I'd slightly prefer a single disk property, as it
feels more natural to me to filter on properties compared to checking, or
having to combine, different array sets. But no hard feelings here;
maybe someone else has some (stronger) opinion.

> While at it, the same would IMO make sense for NICs too, since one might
> want to set up additional network devices too.

good point.





Re: [pve-devel] [PATCH manager] www: utils: fix inconsistency in host cpu usage display in search view

2024-07-16 Thread Thomas Lamprecht
Am 16/07/2024 um 11:31 schrieb Christoph Heiss:
> Between the number of CPUs and the actual label, a space was missing -
> resulting in an inconsistency vs. the "CPU usage" column.
> 
> Also, fix a rather nonsensical check for `maxcpu` above - noticed that
> while comparing the implementation to that of Proxmox.Utils.render_cpu().

can we split this into a different patch? It's rather unrelated.

Also I think the error here was the lacking parenthesis, i.e., the
following minimal change would make the check also correct

if (!(Ext.isNumeric(maxcpu) && maxcpu >= 1)) {

But I still like yours more; I just wanted to point out that this was
probably a simple typo or an incomplete move from one variant to the
other, not outright bogus in intent.

> 
> Signed-off-by: Christoph Heiss 
> ---
>  www/manager6/Utils.js | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index f5608944d..6a0ecc98f 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1073,13 +1073,14 @@ Ext.define('PVE.Utils', {
>   }
>   var maxcpu = node.data.maxcpu || 1;
>  
> - if (!Ext.isNumeric(maxcpu) && (maxcpu >= 1)) {
> + if (!Ext.isNumeric(maxcpu) || maxcpu < 1) {
>   return '';
>   }
>  
>   var per = (record.data.cpu/maxcpu) * record.data.maxcpu * 100;
> + const cpu_label = maxcpu > 1 ? 'CPUs' : 'CPU';
>  
> - return per.toFixed(1) + '% of ' + maxcpu.toString() + (maxcpu > 1 ? 
> 'CPUs' : 'CPU');
> + return `${per.toFixed(1)}% of ${maxcpu} ${cpu_label}`;
>  },
>  
>  render_bandwidth: function(value) {






Re: [pve-devel] [PATCH proxmox-ve-rs 01/21] debian: add files for packaging

2024-07-16 Thread Thomas Lamprecht
Am 27/06/2024 um 12:41 schrieb Gabriel Goller:
> On 26.06.2024 14:15, Stefan Hanreich wrote:
>> Since we now have a standalone repository for Proxmox VE related
>> crates, add the required files for packaging the crates contained in
>> this repository.
> 
> I know we don't really do this, but could we add a README.rst file here?
> Maybe with a small outline on what this repo contains, who uses it etc.
> 
> 

that'd be fine by me, but please use a README.md, i.e. markdown.





[pve-devel] applied-series: [PATCH installer v4 0/4] add check/rename for already-existing ZFS rpool

2024-07-16 Thread Thomas Lamprecht
Am 16/07/2024 um 10:18 schrieb Christoph Heiss:
> Pretty straightforward overall, implements a check for an existing
> `rpool` on the system and asks the user whether they would like to rename
> it, much in the same way as it works for VGs already.
> 
> Without this, the installer would silently create a second (and thus
> conflicting) `rpool` and cause a boot failure after the installation,
> since it does not know which pool to import exactly.
> 
> v1: https://lists.proxmox.com/pipermail/pve-devel/2024-May/063874.html
> v2: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064619.html
> v3: https://lists.proxmox.com/pipermail/pve-devel/2024-July/064635.html
> 
> Notable changes v3 -> v4:
>   * rebased on latest master
>   * rename $response_ok -> $do_rename for clarity, as suggested by Aaron
> 
> Notable changes v2 -> v3:
>   * make low-level option lvm_auto_rename more generic
>   * skip rename prompt in auto-install mode
> 
> Notable changes v1 -> v2:
>   * incorporated Aarons suggestions from v1
>   * rewrote tests to use a pre-defined input instead
>   * moved pool renaming to own subroutine
>   * documented all new subroutines
>   * split out tests into own patch
> 
> Christoph Heiss (4):
>   proxmox: add zfs module for retrieving importable zpool info
>   test: add test cases for new zfs module
>   install: config: rename option lvm_auto_rename ->
> existing_storage_auto_rename
>   low-level: install: check for already-existing `rpool` on install
> 
>  Proxmox/Install.pm|  41 ++-
>  Proxmox/Install/Config.pm |   6 +-
>  Proxmox/Makefile  |   1 +
>  Proxmox/Sys/ZFS.pm| 109 ++
>  proxmox-auto-installer/src/utils.rs   |   2 +-
>  .../resources/parse_answer/disk_match.json|   2 +-
>  .../parse_answer/disk_match_all.json  |   2 +-
>  .../parse_answer/disk_match_any.json  |   2 +-
>  .../tests/resources/parse_answer/minimal.json |   2 +-
>  .../resources/parse_answer/nic_matching.json  |   2 +-
>  .../resources/parse_answer/specific_nic.json  |   2 +-
>  .../tests/resources/parse_answer/zfs.json |   2 +-
>  proxmox-installer-common/src/setup.rs |   2 +-
>  proxmox-tui-installer/src/setup.rs|   2 +-
>  test/Makefile |   7 +-
>  test/zfs-get-pool-list.pl |  57 +
>  16 files changed, 223 insertions(+), 18 deletions(-)
>  create mode 100644 Proxmox/Sys/ZFS.pm
>  create mode 100755 test/zfs-get-pool-list.pl
> 


applied series, thanks!

But I took the liberty of squashing the second patch, which adds the tests,
into the first one; while splitting out preparatory work can sometimes be OK,
I'd not split it up too much. I also mentioned what this preparatory work is
planned to be used for, as such context can be quite helpful when checking
out single commits that don't do anything on their own.





Re: [pve-devel] [PATCH installer 00/14] fix #5536: implement post-(auto-)installation notification mechanism

2024-07-15 Thread Thomas Lamprecht
Am 10/07/2024 um 15:27 schrieb Christoph Heiss:
> This implements a mechanism for post-installation "notifications" via a
> POST request [0] when using the auto-installer.
> 
> It's implemented as a separate, small utility to facilitate separation
> of concerns and make the information gathering easier by having it
> isolated in one place.
> 
> Patches #1 through #10 are simply clean-ups, refactors, etc. that were
> done along the way, which do not impact functionality in any way.
> 
> Most interesting here will be patch #12, which adds the actual
> implementation of the post-hook. (Bind-)mounting the installed host
> system is done using the existing `proxmox-chroot` utility, and the HTTP
> POST functionality can fortunately be re-used 1:1 from
> `proxmox-fetch-answer`.
> 
> I've also included an example of how the JSON body (pretty-printed for
> readability) of such a post-installation request would look like below,
> for reference.
> 
> Tested this with both PVE and PBS ISOs, PMG did not (yet) have a
> release with an auto-installation capable ISO. The only product-specific
> code is the version detection in `proxmox-post-hook`, which - since it
> works the same for PVE and PMG - should be no obstacle.
> 
> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=5536
> 
> {
>   "debian-version": "12.5",
>   "product-version": "pve-manager/8.2.2/9355359cd7afbae4",
>   "kernel-version": "proxmox-kernel-6.8.4-2-pve-signed",

no hard feelings, but from a gut feeling I'd probably move this to a
"version" object and potentially use a more reduced, easier-to-parse value.
Maybe also as an object, so that both a simple X.Y(.Z) release and a full
string are given, as we changed (kernel) package names in the past, and I
could imagine that there will be an LTS and a non-LTS variant if we ever
rework where we base our kernel on, so this might change in the future too.
The simple X.Y.Z version, on the other hand, will more likely stay as is.

And I do not want to move the goal post here too far, but isn't some of this
information potentially interesting to have sent to a metric server?
At least with a low frequency (or just once on every boot) so that one has a
somewhat recent externally saved set of information that can be used to
identify machines more easily and be aware of some changes to correlate
regressions or strange (load) metrics with.

No need to do that now, but maybe something to keep in mind to allow easier
reuse of this.

IMO it's a big plus if we manage to keep information laid out the same way,
or at least in a similar one, wherever it's included. And that doesn't have
to mean that the whole struct has to be shared; maybe it could just be
a collection of types that stem from common crates outside the installer
(at least in the long run, as said, no need to completely block this on
scope extension now).

>   "boot-type": "bios",

We call this "mode" in the product APIs [0], might potentially make sense
to use the same schema here? Else I'd at least name this boot-mode and use
the same keys.

[0]: https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/status

>   "filesystem": "zfs (RAID1)",
>   "fqdn": "host.domain",
>   "machine-id": "f4bf9711783248b7aaffe3ccbca3e3dc",
>   "bootdisk": [

It could also be interesting to get all disks and flag the ones used for
booting with a boolean "bootdisk" flag, e.g. to make additional storage
provisioning later on slightly more convenient.

> {
>   "size": 8589934592,
>   "udev-properties": {
> "DEVNAME": "/dev/vda", [..]
>   }
> },
> {
>   "size": 8589934592,
>   "udev-properties": {
> "DEVNAME": "/dev/vdb", [..]
>   }
> }
>   ],
>   "management-nic": {
> "mac": "de:ad:f0:0d:12:34",
> "address": "10.0.0.10/24",
> "udev-properties": {
>   "INTERFACE": "enp6s18", [..]
> }
>   },
>   "ssh-public-host-keys": {
> "ecdsa": "ecdsa-sha2-nistp256 [..] root@host",
> "ed25519": "ssh-ed25519 [..] root@host",
> "rsa": "ssh-rsa [..] root@host",
>   }
> }
> 
> Christoph Heiss (14):
>   chroot: print full anyhow message
>   tree-wide: fix some typos
>   tree-wide: collect hardcoded installer runtime directory strings into
> constant
>   common: simplify filesystem type serializing & Display trait impl
>   common: setup: serialize `target_hd` as string explicitly
>   common: split out installer setup files loading functionality
>   debian: strip unused library dependencies
>   fetch-answer: move http-related code to gated module in
> installer-common
>   tree-wide: convert some more crates to use workspace dependencies
>   auto-installer: tests: replace left/right with got/expected in output
>   auto-installer: answer: add `posthook` section
>   fix #5536: add post-hook utility for sending notifications after
> auto-install
>   fix #5536: post-hook: add some unit tests
>   unconfigured.sh: run proxmox-post-hook after successful auto-install
> 
>  Cargo.toml|  1

[pve-devel] applied-series: [PATCH kernel 1/2] add fix for CIFS client memory leak

2024-07-12 Thread Thomas Lamprecht
On 10/07/2024 13:37, Fiona Ebner wrote:
> As reported in the community forum [0], there currently is a memory
> leak in the CIFS client code. Reproduced by running a backup with CIFS
> target storage:
> 
>> while true; do vzdump 101 --storage cifs --prune-backups keep-last=1; echo 3 
>> > /proc/sys/vm/drop_caches; done
> 
> A fix was found on the kernel mailing list tagged for stable v6.6+
> and it does solve the issue, but is not yet included in any (stable)
> kernels.
> 
> [0]: https://forum.proxmox.com/threads/147603/post-682388
> 
> Signed-off-by: Fiona Ebner 
> ---
>  ...ix-pagecache-leak-when-do-writepages.patch | 108 ++
>  1 file changed, 108 insertions(+)
>  create mode 100644 
> patches/kernel/0018-cifs-fix-pagecache-leak-when-do-writepages.patch
> 
>

applied both patches, thanks!





Re: [pve-devel] [PATCH guest-common/qemu-server/manager/docs v4] implement experimental vgpu live migration

2024-07-05 Thread Thomas Lamprecht
Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> guest-common 1-4; qemu-server 1-6; pve-manager 1,2
> are preparations/cleanups mostly and could be applied independently

Well, yes and no, they have some interdependencies between themselves, so
they are not fully independent.

It would be great if you would note such inter-patch dependencies also
in each patch's dev/review note section (where the patch revision changelog
lives too).

E.g. I cannot apply manager's 1/5: "mapping: pci: include mdev in config
checks" without the guest-common's 4/6: "mapping: pci: check the mdev
configuration on the device too" but I probably can apply manager's 2/5:
"bulk migrate: improve precondition checks", but as I noticed the missing
dependency documentation for the first pair I was now slightly hesitant to
do so without a much bigger amount of work to check this very closely
myself; that is IMO extra reviewer cost that can be avoided with not
too much work from the patch series author.

Also, having that info makes it a bit easier to not miss any d/control
`Depends` or `Breaks` version bump/update.





[pve-devel] applied: [PATCH qemu-server v4 04/12] stop cleanup: remove unnecessary tpmstate cleanup

2024-07-05 Thread Thomas Lamprecht
Am 06/06/2024 um 11:22 schrieb Dominik Csapak:
> tpmstate0 is already included in `get_vm_volumes`, and our only storage
> plugin that has unmap_volume implemented is the RBDPlugin, where we call
> unmap in `deactivate_volume`. So it's already ummapped by the
> `deactivate_volumes` calls above.
> 
> For third-party storage plugins, it's natural to expect that
> deactivate_volume() would also remove a mapping for the volume just
> like RBDPlugin does.
> 
> While there is an explicit map_volume() call in start_swtpm(), a
> third-party plugin might expect an explicit unmap_volume() call too.
> However, the order of calls right now is
> 1. activate_volume()
> 2. map_volume()
> 3. deactivate_volume()
> 4. unmap_volume()
> 
> Which seems like it could cause problems already for third-party
> plugins relying on an explicit unmap call.
> 
> All that said, it is unlikely that a third-party plugin breaks. If it
> really happens, it can be discussed/adapted to the actual needs still.
> 
> Signed-off-by: Dominik Csapak 
> Acked-by: Fiona Ebner 
> ---
> changes from v3:
> * include rationale for 3rd party plugins (thanks @fiona)
> 
>  PVE/QemuServer.pm | 8 
>  1 file changed, 8 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH guest-common v4 4/6] mapping: pci: check the mdev configuration on the device too

2024-07-05 Thread Thomas Lamprecht
Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> but that lives in the 'global' part of the mapping config, not in a
> specific mapping. To check that, add it to the relevant hashes here.
> 
> Signed-off-by: Dominik Csapak 
> ---
> changes from v3:
> * leave $cfg optional
> 
>  src/PVE/Mapping/PCI.pm | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Mapping/PCI.pm b/src/PVE/Mapping/PCI.pm
> index aa56496..1b2ac52 100644
> --- a/src/PVE/Mapping/PCI.pm
> +++ b/src/PVE/Mapping/PCI.pm
> @@ -131,7 +131,7 @@ sub options {
>  
>  # checks if the given config is valid for the current node
>  sub assert_valid {
> -my ($name, $mapping) = @_;
> +my ($name, $mapping, $cfg) = @_;

Which config is this? As is, it's IMO a bit too opaque. At first I even
thought that this is the VM config, or rather the config of a specific VM PCI
passthrough property, which I would not have liked much as that would add
a coupling, or maybe better said, hidden cyclic dependency between
guest-common and qemu-server.

Naming this such that it's clearer which config we're talking about here
would help to avoid such thought digressions, at least for me.

Maybe `$global_mapping_cfg` or `$cluster_mapping_cfg` (as we do not really
use the term "global" much IIRC).

>  
>  my @paths = split(';', $mapping->{path} // '');
>  
> @@ -161,6 +161,12 @@ sub assert_valid {
>  
>   my $configured_props = { $mapping->%{qw(id iommugroup subsystem-id)} };
>  
> + # check mdev from globabl mapping config, if that is given
> + if (defined($cfg)) {
> + $expected_props->{mdev} = $info->{mdev} ? 1 : 0;
> + $configured_props->{mdev} = $cfg->{mdev} ? 1 : 0;
> + }
> +
>   for my $prop (sort keys $expected_props->%*) {
>   next if $prop eq 'iommugroup' && $idx > 0; # check iommu only on 
> the first device
>  






[pve-devel] applied: [PATCH guest-common v4 3/6] mapping: pci: make sure all desired properties are checked

2024-07-05 Thread Thomas Lamprecht
Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> by placing all expected properties from the hardware into an 'expected_props'
> and those from the config into 'configured_props'
> 
> the names make clearer what's what, and we can easily extend it, even
> if the data does not come from the mapping (like we'll do with 'mdev')
> 
> Signed-off-by: Dominik Csapak 
> ---
> changes from v3:
> * rebased on split out patches before
> * don't merge keys but add all to expected_props instead
>  src/PVE/Mapping/PCI.pm | 27 ---
>  1 file changed, 16 insertions(+), 11 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH guest-common v4 1/6] mapping: pci: assert_valid: rename cfg to mapping

2024-07-05 Thread Thomas Lamprecht
Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> to make it clearer what it actually is. Also we want to add the
> 'real' config as parameter too, and so it's less confusing.
> 
> Signed-off-by: Dominik Csapak 
> ---
> split out in v4
>  src/PVE/Mapping/PCI.pm | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH guest-common v4 2/6] mapping: pci: assert_valid: reword error messages

2024-07-05 Thread Thomas Lamprecht
Am 06/06/2024 um 11:21 schrieb Dominik Csapak:
> makes them a bit clearer
> 
> Signed-off-by: Dominik Csapak 
> ---
> split out in v4
>  src/PVE/Mapping/PCI.pm | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied, thanks!





[pve-devel] applied: [PATCH installer] auto-installer: fix answer-request charset

2024-07-05 Thread Thomas Lamprecht
Am 26/04/2024 um 10:12 schrieb Fabian Grünbichler:
> 'utf-' is a typo, and can trip up some servers that do strict
> checking/matching.
> 
> Signed-off-by: Fabian Grünbichler 
> ---
> 
> Notes:
> see 
> https://forum.proxmox.com/threads/invalid-charset-on-automated-install-answer-http-fetch.145856/
> 
>  proxmox-fetch-answer/src/fetch_plugins/http.rs | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
>

applied, with Christoph's R-b, thanks!




[pve-devel] applied: [PATCH ifupdown2] patch : addons: vxlan: fix VNI filter on single VXLAN device

2024-07-05 Thread Thomas Lamprecht
Am 19/09/2023 um 16:10 schrieb Alexandre Derumier:
> Requested by a customer using a setup with single VXLAN devices.
> ---
>  debian/patches/series |  3 ++-
>  .../upstream/0001-vxlan-fix-vni-filter.patch  | 27 +++
>  2 files changed, 29 insertions(+), 1 deletion(-)
>  create mode 100644 debian/patches/upstream/0001-vxlan-fix-vni-filter.patch
> 
>

applied, thanks!





Re: [pve-devel] [PATCH storage] volume import: assume target API version is at least 9

2024-07-05 Thread Thomas Lamprecht
Am 04/07/2024 um 14:11 schrieb Fiona Ebner:
> Yes, next time we introduce an apiinfo call, we can just have it fail
> hard upon errors.

Oh, and just to avoid potential future errors here: for a new topic-specific
API version call that might not work, as the fallback and (lacking) error
handling here was added explicitly for backward compatibility.



Re: [pve-devel] [PATCH storage] volume import: assume target API version is at least 9

2024-07-04 Thread Thomas Lamprecht
Am 04/07/2024 um 14:11 schrieb Fiona Ebner:
> There is no apiinfo call required anymore. No code is the cleanest kind

Yeah, by the assumption you yourself chose to use and that I question, so
not really a useful argument.

In practice, users can upgrade a from one major release to the next one,
nothing is really blocking them w.r.t apt, that cannot be said for jumping
releases, so one should definitively have different levels of standard for
"any X to any X + 1" compared to "X to X + >= 2".

What we recommend to users (run the latest 7 before upgrading) is not
necessarily the exact same boundary as what we should hedge against to
reduce negative feelings of users and the resulting work/soothing our
support has to do; I'd think of it more like the minimum vs. the
recommended system requirements.

> of code IMHO.

Less code can be nicer as long as all features and edge cases are still
handled properly and it remains easy to read; code-golf minimalism is not
the goal.

Or, to exaggerate for the point's sake, should I just delete all our
(user) error handling code and tell any complaining users that they just
need to do it right the next time? While that would result in quite
a few lines less, it definitively won't help our popularity.

>> Besides that: no, I definitively place UX over some abstract "code 
>> cleanliness"
>> criteria; if anything, it can be fair to set the barrier such that one should achieve
>> both, but putting UX below code tidiness is IMO just not acceptable.
> 
> I do put UX for users running ancient versions below code cleanliness,
> but not UX for current versions of course.

PVE 7 is still supported, so definitively not ancient! We normally put
in quite a bit of work to ensure that systems talking with other,
incompatible versions get a good error; why bother adding versioning
in the first place otherwise? And while there are definitively areas that
can still be improved, that's no argument for going backwards again.

I mean this specific case here might be relatively harmless in effect,
but if you really apply this philosophy to all development here, then I
wish you good luck explaining to angry users, and to the support agents
that had to answer them, why it was good to remove some simple check for
the sake of code cleanliness, because that should not be relevant if
the users did everything 100% correctly. We already need to justify
ourselves too much about not being hacky as a FLOSS solution; after having
to use the shell occasionally, any missing/incomplete error handling is
the next big reason for people to complain.
And sure, most alternative (proprietary) solutions are far from being
better, and while it's annoying that some people are hypocrites here,
I do not have anything against being held to a high(er) standard,
as one can really stick out with good UX, which always consists of tons
of little things.

Anyhow, as mentioned, this might not fall back on us here, but I hope
I could convince you not to use that philosophy unconditionally, and I
would have wished for some more background explanation on this in the
commit message.

I just cannot see how there could be any negative effect, be it in
using or maintaining that code, for either users or developers, if we
just fixed the actual (error-handling) behavior of querying the API
version instead, and possibly dropped the really old version checks, i.e.
those not just a single major version apart, in a separate patch.




Re: [pve-devel] [PATCH qemu-server v4 05/12] vm_stop_cleanup: add noerr parameter

2024-07-04 Thread Thomas Lamprecht
Am 06/06/2024 um 11:22 schrieb Dominik Csapak:
> and set it on all current users

Hmm, it would have helped me if you had stated that this patch keeps
the semantics but allows callers to request that the method dies instead
of just warning and silencing such an error.
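
For context, the `$noerr` idiom discussed here is a common PVE pattern;
roughly sketched (hypothetical signature and body, not the actual
qemu-server code):

    sub vm_stop_cleanup {
        my ($storecfg, $vmid, $conf, $noerr) = @_;

        eval {
            # ... deactivate volumes, tear down the systemd scope, ...
        };
        if (my $err = $@) {
            die $err if !$noerr; # new: callers can opt into a hard failure
            warn $err; # old behavior: just log the error and continue
        }
    }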

When just reading the commit message, I was at first a bit confused why
there was no explanation whatsoever even though the behavior changed
drastically. Sure, for review one can reasonably expect devs to also
read the code, but it's IMO still much nicer if one already gets the basic
gist from just reading the message, ideally just the subject; especially
when writing d/changelog entries.

The subject could maybe be changed to something like:

"vm stop-cleanup: allow callers to decide error behavior"

(in this specific case it might be even fine to use the literal method
name, but no hard feelings either way)

Looks OK to me otherwise.

> Signed-off-by: Dominik Csapak 

With commit meta stuff addressed consider this:

Reviewed-by: Thomas Lamprecht 






Re: [pve-devel] [PATCH storage] volume import: assume target API version is at least 9

2024-07-04 Thread Thomas Lamprecht
Am 04/07/2024 um 12:28 schrieb Fiona Ebner:
> Am 04.07.24 um 11:52 schrieb Thomas Lamprecht:
>> Am 10/06/2024 um 11:04 schrieb Fiona Ebner:
>>> The storage API version has been bumped to at least 9 since
>>> libpve-storage = 7.0-4. If the source node is on Proxmox VE 8, where
>>> this change will come in, then the target node can be assumed to be
>>> running either Proxmox VE 8 or, during upgrade, the latest version of
>>> Proxmox VE 7.4, so it's safe to assume a storage API version of at
>>> least 9 in all cases.
>>
>> it's fine by me that this was applied, but can we somehow assert this
>> assumption with an early `die if $apiver < 7` ? (maybe don't die if the
>> apiver could not be queried/parsed, i.e. is undef, if there are
>> legitimate situations where this can happen).
>>
> 
> Why version 7? We'd need to distinguish between there not being an

I meant 9, just a brain fart from PVE version 7 vs API version 9.

> apiinfo API call and other errors. The previous code did just continue
> if not being able to query (punting version to 1) and that lead to the
> very issue reported by Maximiliano.

Rethinking this, your fix is mostly side-stepping the actual problem,
and it can only afford to do so due to the assumption you took; but once
one needs to bump the API version the next time, they either just
reintroduce the problem or are forced to actually fix it, neither of which
is nice.

If I understand your commit message right, the actual problem was
transient errors when querying the API version; I assume that from
reading:

> [..] where an SSH connection could not always be established, because
> the target's API version would fall back to 1.

As that sounds like establishing the connection fails because the API
version falls back to 1, not that the API version falls back to 1 due to
the former.

If my interpretation is correct, then why not retry the query a few
times and hard-fail if it still cannot be queried; if that fails
often, there's probably a different underlying issue causing it.

I.e., I'd drop the fallback to 1, as it's safe to assume that all
supported versions have that version call now; it's the fallback that is
not required anymore, not the checks.
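
Roughly, with a hypothetical `$query_apiinfo` helper standing in for the
actual remote call and `$ssh_info` coming from the migration context, such
a retry could look like:

    my $remote_apiver;
    my $attempts = 3;
    for my $try (1 .. $attempts) {
        $remote_apiver = eval { $query_apiinfo->($ssh_info) };
        last if defined($remote_apiver);
        warn "querying remote storage API version failed (attempt $try/$attempts): $@";
        sleep(1) if $try < $attempts;
    }
    die "unable to query remote storage API version after $attempts attempts\n"
        if !defined($remote_apiver);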

>> While just one major release difference might be seen as enough, we still
>> have users that are doing some more funky stuff on upgrades of clusters,
>> so if it's somewhat easy to assert the assumption, doing so can
>> definitively only help to improve UX.
> 
> Well, we'd have to put back in the apiinfo call for that. If using
> version 9 for the check instead of 7, the only thing it would do is die
> early when replicating back from an upgraded node to a node with
> libpve-storage < 7.0-4
> 
> For offline guest disk migration, the base_snapshot check doesn't matter
> so the only thing the check would catch is migrating back to a node with
> api version < 5 which means libpve-storage-perl < 6.1-6.
> 
> I'd rather have the code cleaner than very slightly improve UX for users
> running ancient versions.

How is your code cleaner?
Besides that: no, I definitively place UX over some abstract "code cleanliness"
criteria; if anything, it can be fair to set the barrier such that one should achieve
both, but putting UX below code tidiness is IMO just not acceptable.




Re: [pve-devel] [PATCH qemu-server v4 01/12] usb: mapping: move implementation of find_on_current_node here

2024-07-04 Thread Thomas Lamprecht
Am 06/06/2024 um 11:22 schrieb Dominik Csapak:
> this was the only user, and it's easy enough
> 
> Signed-off-by: Dominik Csapak 
> ---
> no changes
>  PVE/QemuServer/USB.pm | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuServer/USB.pm b/PVE/QemuServer/USB.pm
> index 49957444..ecd0361d 100644
> --- a/PVE/QemuServer/USB.pm
> +++ b/PVE/QemuServer/USB.pm
> @@ -5,6 +5,7 @@ use warnings;
>  use PVE::QemuServer::PCI qw(print_pci_addr);
>  use PVE::QemuServer::Machine;
>  use PVE::QemuServer::Helpers qw(min_version windows_version);
> +use PVE::INotify;
>  use PVE::JSONSchema;
>  use PVE::Mapping::USB;
>  use base 'Exporter';
> @@ -91,7 +92,9 @@ sub parse_usb_device {
>   $res->{spice} = 1;
>   }
>  } elsif (defined($mapping)) {
> - my $devices = PVE::Mapping::USB::find_on_current_node($mapping);
> + my $config = PVE::Mapping::USB::config();
> + my $node = PVE::INotify::nodename();
> + my $devices = PVE::Mapping::USB::get_node_mapping($config, $mapping, 
> $node);
>   die "USB device mapping not found for '$mapping'\n" if !$devices || 
> !scalar($devices->@*);
>   die "More than one USB mapping per host not supported\n" if 
> scalar($devices->@*) > 1;

fine by me, but the above could still be in a separate (private) method here
in this module, reducing the noise a bit.

Same for the PCI one.

>   eval {
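
For illustration, such a private helper could look roughly like this; the
name and exact shape are just a sketch:

    # e.g. in PVE/QemuServer/USB.pm, kept private to the module
    sub get_local_mapped_devices {
        my ($mapping_id) = @_;

        my $config = PVE::Mapping::USB::config();
        my $node = PVE::INotify::nodename();
        return PVE::Mapping::USB::get_node_mapping($config, $mapping_id, $node);
    }

with the call site above then shrinking back to a single line:

    my $devices = get_local_mapped_devices($mapping);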






  1   2   3   4   5   6   7   8   9   10   >