It seems to work on Chromium, strange (both are version 51).
----- Original Message -----
From: "aderumier"
To: "pve-devel"
Sent: Friday, June 17, 2016 16:38:10
Subject: [pve-devel] pve-manager: task log detail not auto-scrolling on
google-chrome
Hi,
Instead of counting up an integer on each failed start trial, record
the nodes that were already tried. We can then use the size of the
tried-nodes array as the try count, achieving the same behaviour as
with the earlier 'relocate_trial' hash.
Log the tried nodes after the service started, or if it could not be
started at all.
Previously, if the failure policy triggered more than twice, we used an
already tried node again even if there were other untried nodes; we then
cycled between those two nodes as long as the active service count did
not change. That makes little sense: when a service failed to start on a
node a short time ago, it will most likely fail there again.
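To make the idea concrete, a minimal standalone sketch of the tried-nodes record; the names and the selection logic are illustrative, not the actual pve-ha-manager code:

use strict;
use warnings;

my $relocate_tried = {};    # $sid => [ already tried nodes, in order ]

sub select_service_node {
    my ($sid, $online_nodes) = @_;

    my $tried = $relocate_tried->{$sid} //= [];
    my $try_count = scalar(@$tried);    # replaces the old integer counter

    my %already_tried = map { $_ => 1 } @$tried;

    # prefer nodes we have not tried yet ...
    my @candidates = grep { !$already_tried{$_} } @$online_nodes;
    # ... and only fall back to a tried node when everything was tried
    @candidates = @$online_nodes if !@candidates;

    my $node = $candidates[0];
    push @$tried, $node;
    return ($node, $try_count);
}

my ($node, $tries) = select_service_node('vm:100', ['node1', 'node2', 'node3']);
print "try $tries: relocating to $node\n";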
Clean up the manager state when we get promoted to manager, to avoid
carrying around deprecated hash entries.
Just save:
* service status: as it may contain unprocessed results
* relocate tried nodes: so we do not accidentally restart the start
failure policy on a manager restart (e.g. update)
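A minimal sketch of such a cleanup on promotion, assuming the manager status is a plain hash; the key names are guesses taken from the commit message, not the real field names:

use strict;
use warnings;

sub cleanup_manager_status {
    my ($ms) = @_;    # manager status hash, as read on promotion

    # whitelist of entries that must survive a manager takeover
    my $keep = {
        service_status => 1,    # may contain unprocessed results
        relocate_trial => 1,    # tried nodes, keeps the start failure
                                # policy intact across a manager restart
    };

    foreach my $key (keys %$ms) {
        delete $ms->{$key} if !$keep->{$key};
    }
    return $ms;
}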
v3 of the start failure/relocate policy patches.
Addressed the issues mentioned by Dietmar.
Added a new patch which cleans up the manager status when a CRM becomes
manager; more info in the commit message.
Thomas Lamprecht (3):
cleanup manager status on start
Manager: record tried node on
Hi,
I just noticed today, when doing a storage migration, that the task log
detail doesn't autoscroll on google-chrome.
Works fine with Firefox.
Alexandre
___
applied - thanks!
___
foreach_volid recurses over snapshots as well, resulting in
lots of repeated checks (especially for VMs with lots of
snapshots and disks).
A potential vmstate volume must be checked explicitly,
because foreach_drive does not care about those.
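A standalone sketch of the dedup this describes, with simplified names (the real code lives in sync_disks in PVE/QemuMigrate.pm, shown in the diffs below):

use strict;
use warnings;

# every volid a VM references, whether from the current config, a snapshot
# or the vmstate, is funnelled through one closure that checks it only once
my $volhash = {};

my $test_volid = sub {
    my ($volid) = @_;
    return if !defined($volid) || $volhash->{$volid};    # seen before, skip
    $volhash->{$volid} = 1;
    print "checking $volid exactly once\n";    # storage checks go here
};

$test_volid->('local:100/vm-100-disk-1.raw');
$test_volid->('local:100/vm-100-disk-1.raw');    # snapshot reference, deduped

# vmstate volumes are not drives, so foreach_drive never sees them:
my $conf = { vmstate => 'local:100/vm-100-state-snap1.raw' };    # hypothetical
$test_volid->($conf->{vmstate}) if $conf->{vmstate};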
---
PVE/QemuMigrate.pm | 12 ++--
1 file
This patch series does not change the semantics, but cleans up the code
to avoid repeated checks.
Tested using Ceph, LVM-thin and ZFS in various combinations.
Fabian Grünbichler (5):
add @param to foreach_drive
don't repeat storage check for each volid
fix whitespace/indent
use
---
PVE/QemuMigrate.pm | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 3e90a46..e42e5b1 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -237,14 +237,17 @@ sub sync_disks {
# get list from
---
PVE/QemuServer.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4226f50..9f3cc0c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2645,7 +2645,7 @@ sub vmstatus {
}
sub foreach_drive {
-my ($conf, $func) =
---
Note: kept as a separate patch for readability.
PVE/QemuMigrate.pm | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e42e5b1..5bfdc05 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -230,26 +230,26 @@
---
PVE/QemuMigrate.pm | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e7a1d8a..1305b5c 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -225,7 +225,6 @@ sub sync_disks {
eval {
my $volhash = {};
-
This is necessary to ensure the process finishes properly.
---
PVE/Storage.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bb35b32..011c4f3 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -569,6 +569,7 @@ sub storage_migrate {
if (my
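The hunk above is truncated, so as a general illustration only: in Perl, close() on a pipe filehandle waits for the child and sets $?, which is the usual way to make sure such a helper process has properly finished:

use strict;
use warnings;

# reading from a command pipe: close() waits for the child and sets $?,
# so skipping the close/status check can hide a failed transfer
open(my $fh, '-|', 'dd', 'if=/dev/zero', 'bs=1M', 'count=1')
    or die "failed to run dd: $!\n";

my $bytes = 0;
while (read($fh, my $buf, 65536)) {
    $bytes += length($buf);
}

close($fh) or die "command failed: exit status $?\n";    # reaps the child
print "transferred $bytes bytes, child finished cleanly\n";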
>> I'll do a test from a local raw file.
OK, it's working fine with a local raw file.
So it's an NFS limitation.
I wonder if we could add a QGA "guest-fstrim" call at the end of
drive-mirror, if the QEMU disk has the discard option enabled?
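A rough sketch of what such a call could look like, talking directly to the guest agent socket; the socket path is an assumption based on PVE's usual /var/run/qemu-server layout, while "guest-fstrim" itself is a standard QGA command:

use strict;
use warnings;
use IO::Socket::UNIX;
use JSON;

my $vmid = 100;    # hypothetical VM id
my $sock_path = "/var/run/qemu-server/$vmid.qga";    # assumed PVE layout

my $sock = IO::Socket::UNIX->new(Peer => $sock_path)
    or die "cannot connect to guest agent socket: $!\n";

# guest-fstrim trims all mounted filesystems inside the guest, which
# re-sparsifies the freshly mirrored image if the disk has discard enabled
print $sock encode_json({ execute => 'guest-fstrim' }), "\n";

my $resp = <$sock>;    # QGA answers with one JSON line
print "guest agent replied: $resp";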
----- Original Message -----
From: "aderumier"
To:
>> the underlying file system supports SEEK_DATA/SEEK_HOLE then it should work.
Mmm, this was from an NFS server.
I'll do a test from a local raw file.
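A quick standalone probe for SEEK_DATA support; the whence values 3/4 are Linux-specific and defined by hand, since older Fcntl versions do not export them:

use strict;
use warnings;

# Linux-specific whence values, defined by hand because older Fcntl
# versions do not export SEEK_DATA/SEEK_HOLE
use constant { SEEK_DATA => 3, SEEK_HOLE => 4 };

my $path = shift @ARGV or die "usage: $0 <file>\n";
open(my $fh, '<', $path) or die "open $path: $!\n";

# a filesystem without support fails with EINVAL; ENXIO just means the
# file has no data at all past the given offset
my $pos = sysseek($fh, 0, SEEK_DATA);
if (defined $pos) {
    print "$path: SEEK_DATA supported, first data at offset $pos\n";
} else {
    print "$path: SEEK_DATA not usable here ($!)\n";
}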
----- Original Message -----
From: "Wolfgang Bumiller"
To: "aderumier", "pve-devel"
> On June 17, 2016 at 10:01 AM Alexandre DERUMIER wrote:
>
>
> Hi,
>
> I have tested drive-mirror of a .raw file to ceph/rbd, with the move disk
> feature,
>
> and it seems that the target rbd volume is fully allocated.
>
> I'm not sure how the zeroinit patch works?
Hi,
I have tested drive-mirror of a .raw file to ceph/rbd, with the move disk feature,
and it seems that the target rbd volume is fully allocated.
I'm not sure how the zeroinit patch works?
I see that it's enabled in the rbd plugin.
Alexandre
___
I also tried that, but depending on the order of mounting and of starting
some PVE daemons, the files get recreated and then the ZFS dataset could
not be mounted again.
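One possible way to pin that ordering down would be a systemd drop-in that forces the daemon to start only after the ZFS mounts are up; the drop-in below is hypothetical and the unit names should be verified locally:

# /etc/systemd/system/pvedaemon.service.d/zfs-order.conf (hypothetical)
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service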
___