On 5/5/20 12:02 PM, Thomas Lamprecht wrote:
On 5/5/20 10:27 AM, Fabian Ebner wrote:
by moving the write_config calls from vmconfig_*_pending to their
call sites. The single other call site for update_pct_config in
update_vm is also adapted.
The update_pct_config call led to a write_config
default behavior in the backend is to use the
original layout from the backup configuration file, which
makes sense to use as the default in the GUI as well.
Signed-off-by: Fabian Ebner
---
www/manager6/window/Restore.js | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/
On 5/5/20 1:40 PM, Thomas Lamprecht wrote:
On 5/5/20 1:20 PM, Fabian Ebner wrote:
Previously, the blank '' would be passed along and lead to a
parameter verification failure.
For LXC the default behavior in the backend is to use 'local' as
the storage, so disallow blank
behavior in the backend is to use the
original layout from the backup configuration file, which
makes sense to use as the default in the GUI as well.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* avoid unnecessary ?-operators
* better emptyText
www/manager6/window/Restore.js | 9 ++---
Signed-off-by: Fabian Ebner
---
www/manager6/Utils.js| 7 +++
www/manager6/qemu/Options.js | 6 +++---
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 0cce81d4..24e7f1e2 100644
--- a/www/manager6/Utils.js
+++ b/ww
Signed-off-by: Fabian Ebner
---
The real issue is that the shared volumes are scanned here and
that happens in the scan_volids call above. I'll try to address
that as part of the sync_disks cleanup I'm working on.
PVE/QemuMigrate.pm | 4 +++-
1 file changed, 3 insertions(+),
On 5/12/20 3:45 PM, Mira Limbeck wrote:
For better warnings regarding replicated disks and the ignored target
storage, add the 'is_replicated' field to the migration check result.
This contains the result of the replication checks. The first one checks if
the VM is replicated, and the second one
On 5/12/20 3:45 PM, Mira Limbeck wrote:
Replicated disks can only be live migrated to the same storage on the
target node. Add a warning that mentions that limitation. The warning is
only printed when the target node is a replication target. When the
target node is not a replication target, the o
Partially fixes #2728 (GUI part is still needed).
Signed-off-by: Fabian Ebner
---
PVE/API2/Qemu.pm | 6 ++
1 file changed, 6 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index fd51bf3..8e993a9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3486,6 +3486,12
Signed-off-by: Fabian Ebner
---
Changes from v1:
* die/warn depending on force (thanks to Thomas and Aaron for the
suggestion)
* don't die/warn if VM is not replicated at all
PVE/API2/Qemu.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/API2/Qemu.pm
The backend treats an undefined value and 0 differently. If the option
is undefined, it will still be set for Windows in config_to_command.
Replace the checkbox with a combobox covering all options.
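The difference in the backend boils down to something like this (the option name, the flag and the Windows check are placeholders, not taken from the patch):

    my $conf  = { ostype => 'win10' };    # example config, 'someoption' left unset
    my $cmd   = [];
    my $value = $conf->{someoption};      # 'someoption' is a placeholder name
    if (!defined($value)) {
        # absent key: fall back to a Windows-specific default at command build time
        $value = ($conf->{ostype} // '') =~ m/^w/ ? 1 : 0;    # rough ostype check
    }
    # an explicitly configured 0 stays 0, so no flag is added in that case
    push @$cmd, '-someflag' if $value;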
Signed-off-by: Fabian Ebner
---
Changes from v1:
* use a combobox with all options to allow
Pass the new size directly, so the function doesn't need to know
how some hash is organized. And return a message directly, instead
of both size strings. Also dropped the wantarray, because both
existing callers use the message anyway.
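A rough sketch of the leaner shape (names and the message format are illustrative, not the actual code):

    sub update_disksize {
        my ($drive, $new_size) = @_;
        return if !defined($new_size) || $new_size == ($drive->{size} // 0);
        my $msg = "size of disk '$drive->{file}' updated from "
            . ($drive->{size} // 0) . " to $new_size bytes";
        $drive->{size} = $new_size;
        return $msg;
    }

    my $drive = { file => 'local-lvm:vm-123-disk-0', size => 4 * 1024**3 };
    print update_disksize($drive, 8 * 1024**3), "\n";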
Signed-off-by: Fabian Ebner
---
PVE/QemuMigra
by using the information obtained in the first scan. This
also makes sure we only scan local storages.
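Conceptually something like (a simplified sketch, not the patch itself):

    use PVE::Storage;

    my $storecfg = PVE::Storage::config();
    my $ids = $storecfg->{ids};
    # shared storages are visible on the target anyway, so skip them here
    my @local_storeids = grep { !$ids->{$_}->{shared} } keys %$ids;
    foreach my $storeid (@local_storeids) {
        # only these storages need to be scanned for local volumes of the VM
    }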
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e65b28f..3b138c4
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 55 ++
1 file changed, 31 insertions(+), 24 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 3b138c4..152cb25 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
It is enough to call get_bandwidth_limit once for each source_storage.
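So the limit can be looked up once and cached per storage, roughly like this (variable names are made up; this assumes get_bandwidth_limit keeps its usual (operation, storage list, override) signature):

    use PVE::Storage;

    my %bwlimit_of;    # per-storage cache
    sub get_migration_bwlimit {
        my ($storeid, $requested_limit) = @_;
        # note: an unlimited (undef) result is simply looked up again
        $bwlimit_of{$storeid} //= PVE::Storage::get_bandwidth_limit(
            'migration', [$storeid], $requested_limit);
        return $bwlimit_of{$storeid};
    }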
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 22 +-
1 file changed, 9 insertions(+), 13 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 152cb25..777ba2e 100644
--- a/PVE
$self->{online_local_volumes}, and hence is the place
to look for which volumes we need to remove. Of course, replicated
volumes still need to be skipped.
Signed-off-by: Fabian Ebner
---
Who needs phase3 anyways ;)?
PVE/QemuMigrate.pm | 45 -
1 file c
by using the information from volume_map. Call cleanup_remotedisks in
phase1_cleanup as well, because that's where we end up if sync_disks
fails and some disks might already have been transferred successfully.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 19 ++-
1
This makes sure that they are present in volume_map as soon as
the remote node tells us that they have been allocated.
Signed-off-by: Fabian Ebner
---
Makes the cleanup_remotedisks simplification in the next patch possible.
Another idea would be to do it in its own loop, after obtaining the
Like this we don't need to worry about auto-vivification.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a6f42df..49a0e03 100644
---
by making local_volumes class-accessible. One function is for scanning
all local volumes and the other is for actually syncing offline volumes
via storage_migrate.
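Very roughly, the split could look like this (method names are illustrative, not necessarily the ones used in the patch):

    # first pass: find all local volumes and remember their storage and size
    sub scan_local_volumes {
        my ($self, $vmid) = @_;
        $self->{local_volumes} //= {};
        # fill $self->{local_volumes}->{$volid} = { storage => ..., size => ... }
    }

    # second pass: copy the offline ones to the target via storage_migrate
    sub sync_offline_local_volumes {
        my ($self) = @_;
        foreach my $volid (sort keys %{ $self->{local_volumes} }) {
            # offline copy with PVE::Storage::storage_migrate() for non-online volumes
        }
    }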
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 98 ++
1 file changed, 64 insertions
Signed-off-by: Fabian Ebner
---
This is a re-send of a previously stand-alone patch.
PVE/QemuMigrate.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index b729940..f6baeda 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
by extending filter_local_volumes.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 38 +-
1 file changed, 21 insertions(+), 17 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 49a0e03..e7d16c7 100644
--- a/PVE/QemuMigrate.pm
+++ b
merge them somehow.
But before thinking too much about those things I wanted
to get some feedback for this and ask if this is the
right direction to go in.
Fabian Ebner (11):
sync_disks: fix check
update_disksize: make interface leaner
Split sync_disks into two functions
Avoid re-s
This reverts commit 95015dbbf24b710011965805e689c03923fb830c.
parse_volname always gives 'images' and not 'rootdir'. In most
cases the volume name alone does not contain the needed information,
e.g. vm-123-disk-0 can be both a VM volume or a container volume.
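For example (illustrative, assuming a storage named 'local-lvm' exists):

    use PVE::Storage;

    my $cfg = PVE::Storage::config();
    my ($vtype, $name, $owner_vmid) =
        PVE::Storage::parse_volname($cfg, 'local-lvm:vm-123-disk-0');
    # $vtype is 'images' here, even if VMID 123 actually is a container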
Signed-
Signed-off-by: Fabian Ebner
---
PVE/Storage/CIFSPlugin.pm | 1 +
PVE/Storage/CephFSPlugin.pm| 1 +
PVE/Storage/DirPlugin.pm | 5 ++--
PVE/Storage/GlusterfsPlugin.pm | 5 ++--
PVE/Storage/NFSPlugin.pm | 5 ++--
PVE/Storage/PBSPlugin.pm | 1 +
PVE/Storage/Plugin.pm
Implement it for generic storages supporting backups (i.e.
directory-based storages) and add a wrapper for PBS.
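The core idea, greatly simplified to just 'keep-last' (the real logic covers all keep-* options and both backup naming schemes):

    # mark the N newest backups of a group as 'keep', the rest as 'remove'
    sub mark_keep_last {
        my ($backups, $keep_last) = @_;
        my @sorted = sort { $b->{ctime} <=> $a->{ctime} } @$backups;    # newest first
        for my $i (0 .. $#sorted) {
            $sorted[$i]->{mark} = $i < $keep_last ? 'keep' : 'remove';
        }
        return \@sorted;
    }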
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 27 -
PVE/Storage/PBSPlugin.pm | 50
PVE/Storage/Plugin.pm | 128
test
Signed-off-by: Fabian Ebner
---
Not sure if this is the best place for the new API endpoints.
I decided to opt for two distinct calls rather than just a --dry-run
option, and to use a worker for the actual pruning, because removing
many backups over the network might take a while.
PVE/API2
namely '{volume}', so it's not possible to create endpoints like
'{storage}/content/prunebackups'. Instead, I introduced
'{storage}/prunebackups'.
Fabian Ebner (6):
PBSPlugin: list_volumes: filter by vmid if specified
Expand archive_info to include ctime, v
where 'is_std_name' shows whether the backup name uses the standard naming
schema and most likely was created by our tools.
Also adds a '^' to the existing filename matching regex, which
should be fine since basename() is used beforehand.
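In other words, something along these lines (the pattern here only approximates the real one):

    use File::Basename;

    my $volname = '/mnt/backup/dump/vzdump-qemu-123-2020_06_10-12_00_00.vma.lzo';
    my $name = basename($volname);    # no '/' can remain after this
    # anchoring with '^' is therefore safe; it only rejects unexpected prefixes
    my $is_std_name = $name =~ m/^vzdump-(lxc|openvz|qemu)-\d+-/ ? 1 : 0;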
Signed-off-by: Fabian Ebner
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Status.pm | 65 +++---
1 file changed, 32 insertions(+), 33 deletions(-)
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..d9d9b36 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE
Signed-off-by: Fabian Ebner
---
PVE/Storage/PBSPlugin.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index 3c0879c..65696f4 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
@@ -291,6 +291,7 @@ sub list_volumes
Any feedback for these patches?
On 5/4/20 10:50 AM, Fabian Ebner wrote:
The size of VM state files and the size of unused disks not
referenced by any snapshot are not saved in the VM configuration,
so they're not available here either.
Signed-off-by: Fabian Ebner
---
Changes from v1:
On 6/4/20 11:08 AM, Fabian Ebner wrote:
Implement it for generic storages supporting backups (i.e.
directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 27 -
PVE/Storage/PBSPlugin.pm | 50
PVE/Storage/Plugin.pm
Signed-off-by: Fabian Ebner
---
PVE/Storage/CIFSPlugin.pm | 1 +
PVE/Storage/CephFSPlugin.pm| 1 +
PVE/Storage/DirPlugin.pm | 5 ++--
PVE/Storage/GlusterfsPlugin.pm | 5 ++--
PVE/Storage/NFSPlugin.pm | 5 ++--
PVE/Storage/PBSPlugin.pm | 1 +
PVE/Storage/Plugin.pm
dumpdir will be overwritten if a storage is specified.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index bdbf641e..bc4ac751 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -449,7
to avoid some code duplication.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 23 +--
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 6d68ac34..8ef9fbf0 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
No functional change is intended.
The preference order is: option, then storage config, then vzdump defaults.
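Which boils down to a simple defined-or chain per setting (the key here is just an example):

    my $opts     = { };                     # e.g. from the vzdump job/CLI
    my $scfg     = { maxfiles => 5 };       # storage configuration
    my $defaults = { maxfiles => 1 };       # vzdump defaults
    # explicit option wins, then the storage configuration, then the vzdump defaults
    my $maxfiles = $opts->{maxfiles} // $scfg->{maxfiles} // $defaults->{maxfiles};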
Signed-off-by: Fabian Ebner
---
New in v2
IMHO the old method was very confusing.
PVE/VZDump.pm | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/PVE
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index bc4ac751..12c02a2a 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -84,19 +84,18 @@ sub storage_info
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage/PBSPlugin.pm | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index fba4b2b..f029e55 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
to keep the removal of the archive and its log file together.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index ac0dccd..a459572 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Status.pm | 65 +++---
1 file changed, 32 insertions(+), 33 deletions(-)
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..d9d9b36 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE
Add a test case for this.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage.pm| 13 -
test/archive_info_test.pm | 22 ++
2 files changed, 30 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 07a4f53..ac0dccd 1
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Makefile| 2 +-
PVE/API2/Storage/PruneBackups.pm | 153 +++
PVE/API2/Storage/Status.pm | 7 ++
PVE/CLI/pvesm.pm | 27 ++
4 files changed, 188 insertions(+), 1 deletion(-)
create
there's a regex below '{storage}/content',
namely '{volume}', so it's not possible to create endpoints like
'{storage}/content/prunebackups'. Instead, I introduced
'{storage}/prunebackups'.
A dependency bump 'manager -> storage' is needed for
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
Changes in v2:
* Return actual volid in PBS using the new print_volid helper
* Split out prune_mark_backup_group and move it to Storage.pm
For the use case with '--dumpdir', it's not possible to call prune_backups
directly, so a little bit of special handling is required there.
Note that $opts->{'prune-backups'} is always defined after new()
Signed-off-by: Fabian Ebner
---
Ne
the next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/VZDump.pm | 83 +++
1 f
On 6/15/20 2:01 PM, Thomas Lamprecht wrote:
Am 6/10/20 um 1:23 PM schrieb Fabian Ebner:
to keep the removal of the archive and its log file together.
Signed-off-by: Fabian Ebner
---
New in v2
PVE/Storage.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/Storage.pm b
On 6/15/20 2:21 PM, Thomas Lamprecht wrote:
Am 6/10/20 um 1:23 PM schrieb Fabian Ebner:
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
Changes in v2:
* Return actual volid in PBS using the
dumpdir will be overwritten if a storage is specified
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index dceeb9ca..ce8796d9 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -449,7 +449,9
Signed-off-by: Fabian Ebner
---
PVE/Storage/PBSPlugin.pm | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/PVE/Storage/PBSPlugin.pm b/PVE/Storage/PBSPlugin.pm
index fba4b2b..f029e55 100644
--- a/PVE/Storage/PBSPlugin.pm
+++ b/PVE/Storage/PBSPlugin.pm
@@ -88,6
Signed-off-by: Fabian Ebner
---
PVE/Storage/CIFSPlugin.pm | 1 +
PVE/Storage/CephFSPlugin.pm| 1 +
PVE/Storage/DirPlugin.pm | 5 ++--
PVE/Storage/GlusterfsPlugin.pm | 5 ++--
PVE/Storage/NFSPlugin.pm | 5 ++--
PVE/Storage/PBSPlugin.pm | 1 +
PVE/Storage/Plugin.pm
7;). Add a test case for this.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm| 13 -
test/archive_info_test.pm | 22 ++
2 files changed, 30 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 07a4f53..ac0dccd 100755
--- a
'{storage}/content/prunebackups'. Instead, I introduced
'{storage}/prunebackups'.
A dependency bump 'manager -> storage' is needed for patches #11-#13.
storage:
Fabian Ebner (7):
Introduce prune-backups property for directory-based storages
Extend archive_info to include fil
Signed-off-by: Fabian Ebner
---
Changes in v3:
* die if unlink of archive fails
* check whether log file exists before trying to unlink it
* warn if unlink of log file fails
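Roughly, the behaviour described above could look like this (the helper name and the log-name derivation are assumptions, not the actual code):

    sub archive_remove {
        my ($archive_path) = @_;
        unlink $archive_path
            or die "removing archive '$archive_path' failed: $!\n";
        (my $log_path = $archive_path) =~ s/\.[^.\/]+(\.(gz|lzo))?$/.log/;    # rough guess at the log name
        if (-e $log_path) {
            unlink $log_path
                or warn "removing log file '$log_path' failed: $!\n";
        }
    }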
PVE/Storage.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/Storage.pm b/PVE
the next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 83 +++
1 file changed, 58 in
No functional change is intended.
The preference order is: option, then storage config, then vzdump defaults.
Signed-off-by: Fabian Ebner
---
IMHO the old method was very confusing.
PVE/VZDump.pm | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/PVE/VZDump.pm b
Implement it for generic storages supporting backups
(i.e. directory-based storages) and add a wrapper for PBS.
Signed-off-by: Fabian Ebner
---
Changes in v3:
* When checking if all keep-options are 0, improve readability
by using hash values directly
* For creation times in
to avoid some code duplication.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 23 +--
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index b1107eac..e8669665 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -631,10 +631,15
For the use case with '--dumpdir', it's not possible to call prune_backups
directly, so a little bit of special handling is required there.
Signed-off-by: Fabian Ebner
---
Note that $opts->{'prune-backups'} is always defined after
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Makefile| 2 +-
PVE/API2/Storage/PruneBackups.pm | 153 +++
PVE/API2/Storage/Status.pm | 7 ++
PVE/CLI/pvesm.pm | 27 ++
4 files changed, 188 insertions(+), 1 deletion(-)
create
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 19 +--
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index ce8796d9..9bdb5ab0 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -84,19 +84,18 @@ sub storage_info {
PVE::Storage
Signed-off-by: Fabian Ebner
---
PVE/API2/Storage/Status.pm | 65 +++---
1 file changed, 32 insertions(+), 33 deletions(-)
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..d9d9b36 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 96de0db..cd4a005 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1179,16 +1179,9 @@ sub phase3_cleanup
Allows mocking the move of the configuration for testing
and reduces duplication between the migration modules by a tiny amount.
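A minimal sketch of such a helper (the method name is an assumption; config_file() with a node argument resolves the path on the target node):

    sub move_config_to_node {
        my ($class, $vmid, $target_node) = @_;
        my $source = $class->config_file($vmid);                 # path on the local node
        my $target = $class->config_file($vmid, $target_node);
        rename($source, $target)
            or die "failed to move config file to node '$target_node': $!\n";
    }

Since both paths live below /etc/pve on the clustered filesystem, a plain rename is enough to hand the guest over.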
Signed-off-by: Fabian Ebner
---
Dependency bumps
container,qemu-server -> guest-common
are needed
PVE/AbstractConfig.pm | 11 +++
1 file changed, 11 inserti
Signed-off-by: Fabian Ebner
---
I felt like this makes sense as a single block now (without each
line being separated by a blank), but I can send a v2 without that style
change if you want. Same for the next patch.
src/PVE/LXC/Migrate.pm | 12 ++--
1 file changed, 2 insertions(+), 10
'suspended' lock, but apparently [0] we cannot rely on the lock to be
set if and only if there is a vmstate.
[0]: https://forum.proxmox.com/threads/task-error-start-failed.72450
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a
Otherwise two blank lines between sections cause the loop to end prematurely.
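I.e. the loop should simply skip blank lines between sections instead of bailing out, along the lines of (simplified):

    my $raw_config = "dir: local\n\n\ndir: other\n";    # two blank lines in between
    my @lines = split /\n/, $raw_config;
    while (defined(my $line = shift @lines)) {
        next if $line =~ m/^\s*$/;    # any number of blank lines between sections is fine
        # parse the "type: id" header and the section body that follows
    }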
Signed-off-by: Fabian Ebner
---
src/PVE/SectionConfig.pm | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/PVE/SectionConfig.pm b/src/PVE/SectionConfig.pm
index dcecce6..1bb285f 100644
--- a