On 3/18/20 2:10 PM, Fabian Grünbichler wrote:
On March 18, 2020 11:57 am, Aaron Lauterer wrote:


On 3/17/20 3:33 PM, Fabian Grünbichler wrote:
On March 16, 2020 4:44 pm, Aaron Lauterer wrote:
This extracts the logic that decides which guests are to be included in
a backup job into its own method, 'get_included_guests'. This makes it
possible to develop other features around backup jobs.

Logic which was spread out across the API2/VZDump.pm file and the
VZDump.pm file is refactored into the new method, most notably the
logic that dealt with excluded guests.

The new method will return the VMIDs for the whole cluster. Whether a
VMID is present on the local node, and thus part of the local backup
job, is still determined in the VZDump::exec_backup method.

this is not true. previously, that was handled partly by the API
already, so behaviour changed in some cases.


Permission checks are kept where they are, so missing permissions can
be handled according to the current context. The old behavior is kept:
a backup job dies when the user is missing the permission for a guest
and the job is not an 'all' or 'exclude' job.
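
For illustration, the shape of the check that stays in the API module
(this is not the actual code from PVE/API2/VZDump.pm, just the idea;
$param, $vmids and @skipped_vmids are placeholder names):

my $rpcenv = PVE::RPCEnvironment::get();
my $user = $rpcenv->get_user();

for my $vmid (@{$vmids}) {
    eval { $rpcenv->check($user, "/vms/$vmid", ['VM.Backup']) };
    if (my $err = $@) {
        # an explicit vmid list still dies on a missing permission,
        # 'all'/'exclude' style jobs just leave the guest out
        die $err if !$param->{all} && !$param->{exclude};
        push @skipped_vmids, $vmid;
    }
}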

The creation of the skiplist is moved to the VZDump::exec_backup method,
close to the place where it is used.

but it was used already in the API, so it made sense to have it there.
see comments in-line. I am not saying we can't change this behaviour,
but from this description it does not sound like it was (fully)
intentional.

Thanks for the feedback!

It seems like the early exit right after the skiplist creation got lost
while I was working on the whole thing :/
Also, thanks for pointing out the taskid thing for a single guest
backup further down.

The idea driving this patch is to have one place to decide which guests
will be included in a backup job so it can be used in more places than
just the actual backups. Right now that logic is very spread out and
sometimes even a bit redundant AFAICT.

For example, in PVE/API2/VZDump.pm @vmids will contain only local VMIDs
if the `all` parameter is not set, and then in PVE/VZDump->exec_backup
the VMIDs are checked again against $plugin->vmlist().

After going through all the changes and existing code once more I am not
sure if we should even try to keep the exact same behavior in place.

What if the new PVE/VZDump->get_included_guests sub returned the
following for the current node?
* all included VMIDs
* local VMIDs
* the local skiplist
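
Roughly along these lines (only a sketch; the names, the handling of
the include/exclude lists and the exact return structure are just
illustrative, and it assumes PVE::Cluster, PVE::INotify and PVE::Tools
are loaded, as they already are in PVE::VZDump):

sub get_included_guests {
    my ($job) = @_;

    my $local_node = PVE::INotify::nodename();
    my $vmlist = PVE::Cluster::get_vmlist();

    my $exclude = { map { $_ => 1 } PVE::Tools::split_list($job->{exclude} // '') };
    my $include = { map { $_ => 1 } PVE::Tools::split_list($job->{vmid} // '') };

    my (@all, @local, @skiplist);

    for my $vmid (sort { $a <=> $b } keys %{$vmlist->{ids}}) {
        if ($job->{all}) {
            next if $exclude->{$vmid};
        } else {
            next if !$include->{$vmid};
        }

        push @all, $vmid;

        # ownership info comes straight from the cached cluster-wide vmlist
        if ($vmlist->{ids}->{$vmid}->{node} eq $local_node) {
            push @local, $vmid;
        } else {
            push @skiplist, $vmid;
        }
    }

    return (\@all, \@local, \@skiplist);
}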

yes, or just included/local and skipped/external. a caller interested in
all of them can just merge those two mutually exclusive lists, a caller
interested only in the former (e.g., vzdump with --all) can just ignore
the latter, the regular vzdump call can just pass them both along so
that exec_backup can print a skiplist and act on the filtered one.

Sounds good. Then I will refactor it to that.
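
A sketch of how exec_backup could then consume the two mutually
exclusive lists you describe (names and messages are not final):

# in PVE::VZDump::exec_backup
my ($vmids, $skiplist) = get_included_guests($opts);

if (!scalar(@$vmids)) {
    # nothing to back up on this node -> exit quietly, also when
    # `all` is set but every local guest is excluded
    debugmsg('info', "no guests included on this node, skipping backup job");
    return;
}

# keep logging which included guests live on other nodes
debugmsg('info', "skipped guests on other nodes: " . join(', ', @$skiplist))
    if scalar(@$skiplist);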


This way the logic could be nicely contained in one sub instead of
being spread out over multiple modules and subs. We can still exit
quietly if there are no local VMIDs, and also in other situations, for
example when `all` is set but there are either no guests on the local
node or they are all in the `exclude` list.

On the other hand, there will be a bit more computational load when
generating these lists all at once instead of spreading the work out
with early exits like now.

I don't see how the load would be much higher? we are talking about
calls that either end up doing a backup, or end up parsing the configs
of all the returned VMIDs. in both cases, iterating once over the
(cached) vmlist provided by pmxcfs to find out which guest is owned by
which node is very low-effort. this is not in any performance-critical
hot-path ;)

I was thinking about clusters with several hundred up to maybe a low
four-digit number of guests. But yeah, even then it should be
negligible.


I hope I did not miss anything.

