Re: [pve-devel] pve-manager : task log detail not auto scrolling on google-chrome

2016-06-17 Thread Alexandre DERUMIER
Seems to work on Chromium, strange (both version 51). - Original Message - From: "aderumier" To: "pve-devel" Sent: Friday 17 June 2016 16:38:10 Subject: [pve-devel] pve-manager : task log detail not auto scrolling on google-chrome Hi,

[pve-devel] [PATCH ha-manager v3 2/3] Manager: record tried node on relocation policy

2016-06-17 Thread Thomas Lamprecht
Instead of counting up an integer on each failed start attempt, record the already tried nodes. We can then use the size of the tried-nodes array as the 'try count' and so achieve the same behaviour as with the earlier 'relocate_trial' hash. Log the tried nodes after the service started or if it could
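
The mechanism in Perl terms, as a minimal standalone sketch (the function and field names such as failed_nodes are illustrative, not the actual Manager.pm code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch: record each node a start failed on and use the record
    # size as the try count (names are illustrative, not the real Manager.pm code).
    sub record_failed_start {
        my ($service_state, $node) = @_;
        push @{$service_state->{failed_nodes}}, $node;
        return scalar @{$service_state->{failed_nodes}};   # replaces the old integer counter
    }

    my $state = { failed_nodes => [] };
    record_failed_start($state, 'node1');
    my $tries = record_failed_start($state, 'node2');
    print "tried nodes: @{$state->{failed_nodes}} ($tries tries)\n";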

[pve-devel] [PATCH ha-manager v3 3/3] relocate policy: try to avoid already failed nodes

2016-06-17 Thread Thomas Lamprecht
If the failure policy triggered more than 2 times, we used an already tried node again even if there were other untried nodes; we then cycled between those two nodes if the active service count did not change. This makes little sense, as when it failed to start on a node a short time
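
A minimal sketch of the node selection this implies, assuming a plain list of online nodes and the tried-nodes record from the previous patch (names are illustrative):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use List::Util qw(first);

    # Hypothetical sketch: prefer a node that is not in the tried-nodes record and
    # only fall back to a previously failed node if every online node was tried.
    sub select_relocation_node {
        my ($online_nodes, $tried_nodes) = @_;
        my %tried = map { $_ => 1 } @$tried_nodes;
        my $fresh = first { !$tried{$_} } @$online_nodes;
        return $fresh // $online_nodes->[0];
    }

    print select_relocation_node([qw(node1 node2 node3)], ['node1']), "\n";   # node2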

[pve-devel] [PATCH ha-manager v3 1/3] cleanup manager status on start

2016-06-17 Thread Thomas Lamprecht
Clean up the manager state if we get promoted to manager to avoid deletions of deprecated hash entries. Just save: * service status: as it may contain unprocessed results * relocate tried nodes: so we do not accidentally restart the start failure policy on a manager restart (e.g. update) *
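
A minimal sketch of the cleanup pattern described here, assuming illustrative key names rather than the real manager status layout:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch: when promoted to manager, keep only the state entries
    # that are still meaningful and drop everything else.
    sub cleanup_manager_status {
        my ($status) = @_;
        my %keep = map { $_ => 1 } qw(service_status relocate_tried_nodes node_status);
        delete $status->{$_} for grep { !$keep{$_} } keys %$status;
        return $status;
    }

    my $st = cleanup_manager_status({ service_status => {}, stale_entry => 1 });
    print join(',', sort keys %$st), "\n";   # service_status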

[pve-devel] [PATCH ha-manager v3 0/3] start failure policy patches

2016-06-17 Thread Thomas Lamprecht
v3 of the start failure/relocate policy patches. Addressed issues mentioned by Dietmar. Added a new patch which cleans up the manager status when a CRM becomes manager; more info in the commit message. Thomas Lamprecht (3): cleanup manager status on start Manager: record tried node on

[pve-devel] pve-manager : task log detail not auto scrolling on google-chrome

2016-06-17 Thread Alexandre DERUMIER
Hi, I just noticed today, when doing a storage migration, that the task log detail doesn't autoscroll on google-chrome. Works fine with firefox. Alexandre

Re: [pve-devel] [PATCH qemu-server 0/5] QemuMigrate cleanup

2016-06-17 Thread Dietmar Maurer
applied - thanks!

[pve-devel] [PATCH qemu-server 4/5] use foreach_drive instead of foreach_volid

2016-06-17 Thread Fabian Grünbichler
foreach_volid recurses over snapshots as well, resulting in lots of repeated checks (especially for VMs with lots of snapshots and disks). A potential vmstate volume must be checked explicitly, because foreach_drive does not care about those. --- PVE/QemuMigrate.pm | 12 ++-- 1 file
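
A minimal sketch of the difference, assuming a simplified config layout: iterate only the current config's drive keys and add the vmstate volume explicitly, since a drive iterator does not see it.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch of the pattern: walk only the drives of the current
    # config (no snapshot recursion) and handle a vmstate volume explicitly.
    sub collect_volids {
        my ($conf) = @_;
        my @volids;
        for my $key (sort keys %$conf) {
            next if $key !~ /^(?:ide|sata|scsi|virtio)\d+$/;   # drive keys only
            my ($volid) = split /,/, $conf->{$key};            # 'volid,opt=...,opt=...'
            push @volids, $volid;
        }
        push @volids, $conf->{vmstate} if $conf->{vmstate};    # must be added explicitly
        return \@volids;
    }

    my $conf = {
        scsi0   => 'local:100/vm-100-disk-1.raw,size=32G',
        vmstate => 'local:100/vm-100-state.raw',
    };
    print "$_\n" for @{collect_volids($conf)};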

[pve-devel] [PATCH qemu-server 0/5] QemuMigrate cleanup

2016-06-17 Thread Fabian Grünbichler
This patch series does not change the semantics, but cleans up the code to avoid repeated checks. Tested using ceph, lvm-thin, zfs in various combinations. Fabian Grünbichler (5): add @param to foreach_drive don't repeat storage check for each volid fix whitespace/indent use

[pve-devel] [PATCH qemu-server 2/5] don't repeat storage check for each volid

2016-06-17 Thread Fabian Grünbichler
--- PVE/QemuMigrate.pm | 11 +++ 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 3e90a46..e42e5b1 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -237,14 +237,17 @@ sub sync_disks { # get list from
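
The subject describes caching the per-storage check; a minimal sketch of that pattern, with a hypothetical check callback standing in for the real storage check:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch: check each storage once and cache the result instead
    # of re-running the check for every volid on that storage.
    sub check_storages_once {
        my ($volids, $check_storage) = @_;
        my %checked;
        for my $volid (@$volids) {
            my ($storeid) = split /:/, $volid;        # volid format: 'storeid:volname'
            $checked{$storeid} //= $check_storage->($storeid);
        }
        return \%checked;
    }

    my $calls = 0;
    check_storages_once([qw(local:disk-a local:disk-b ceph:disk-c)], sub { $calls++; return 1 });
    print "storage checks run: $calls\n";   # 2, not 3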

[pve-devel] [PATCH qemu-server 1/5] add @param to foreach_drive

2016-06-17 Thread Fabian Grünbichler
--- PVE/QemuServer.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 4226f50..9f3cc0c 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -2645,7 +2645,7 @@ sub vmstatus { } sub foreach_drive { -my ($conf, $func) =
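
The truncated diff shows foreach_drive gaining extra arguments; a minimal sketch of that calling convention, as an illustrative re-implementation rather than the real PVE/QemuServer.pm code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative sketch of the signature change: extra arguments are passed
    # through to the per-drive callback.
    sub foreach_drive_sketch {
        my ($conf, $func, @param) = @_;
        for my $ds (sort keys %$conf) {
            next if $ds !~ /^(?:ide|sata|scsi|virtio)\d+$/;
            $func->($ds, $conf->{$ds}, @param);
        }
    }

    foreach_drive_sketch(
        { scsi0 => 'local:100/vm-100-disk-1.raw' },
        sub { my ($ds, $drive, $tag) = @_; print "$tag: $ds -> $drive\n"; },
        'example',
    );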

[pve-devel] [PATCH qemu-server 3/5] fix whitespace/indent

2016-06-17 Thread Fabian Grünbichler
--- Note: as a separate patch for readability PVE/QemuMigrate.pm | 22 +++--- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index e42e5b1..5bfdc05 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -230,26 +230,26

[pve-devel] [PATCH qemu-server 5/5] drop unnecessary cdromhash

2016-06-17 Thread Fabian Grünbichler
--- PVE/QemuMigrate.pm | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index e7a1d8a..1305b5c 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -225,7 +225,6 @@ sub sync_disks { eval { my $volhash = {}; -

[pve-devel] [PATCH] fix #1033 storage mig on LVMThin add die.

2016-06-17 Thread Wolfgang Link
This is necessary to ensure the process finishes properly. --- PVE/Storage.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index bb35b32..011c4f3 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -569,6 +569,7 @@ sub storage_migrate { if (my
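
The hunk is truncated above, so the following is only a generic sketch of the pattern the subject describes (die when the transfer command fails so the migration aborts visibly), not the actual patch:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Generic illustration, not the real Storage.pm hunk: let a failing transfer
    # command die so the migration aborts visibly instead of finishing half-done.
    sub run_migrate_cmd {
        my (@cmd) = @_;
        system(@cmd) == 0
            or die "storage migration failed: '@cmd' exited with status " . ($? >> 8) . "\n";
    }

    run_migrate_cmd('true');                    # succeeds silently
    eval { run_migrate_cmd('false') };
    print "caught: $@" if $@;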

Re: [pve-devel] qemu drive-mirror with zeroinit and ceph/rbd, not working

2016-06-17 Thread Alexandre DERUMIER
>>I'll do a test from a local raw Ok, it's working fine with a local raw, so it's an NFS limitation. I wonder if we could add a qga "guest-fstrim" call at the end of drive-mirror, if the qemu disk has the discard option enabled? - Original Message - From: "aderumier" To:
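
A minimal sketch of the proposed behaviour, with a placeholder callback because the exact guest-agent call path is not shown in this thread:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch of the proposal: after the mirror completes, issue a
    # guest-fstrim through the guest agent if the disk has discard enabled.
    # The $qga_command callback is a placeholder, not an existing qemu-server API.
    sub maybe_fstrim_after_mirror {
        my ($vmid, $drive, $qga_command) = @_;
        return if !$drive->{discard} || $drive->{discard} ne 'on';
        $qga_command->($vmid, 'guest-fstrim');
    }

    maybe_fstrim_after_mirror(100, { discard => 'on' },
        sub { my ($vmid, $cmd) = @_; print "vm $vmid: would issue $cmd\n"; });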

Re: [pve-devel] qemu drive-mirror with zeroinit and ceph/rbd, not working

2016-06-17 Thread Alexandre DERUMIER
>>the underlying file system supports SEEK_DATA/SEEK_HOLE then it should work. mmm, this was from an NFS server. I'll do a test from a local raw. - Original Message - From: "Wolfgang Bumiller" To: "aderumier" , "pve-devel"
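
A minimal standalone probe for the SEEK_DATA/SEEK_HOLE question raised here, assuming the Linux constant values (3 and 4); it only reports whether the filesystem under a raw image exposes holes:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Quick probe (sketch): does the filesystem under a raw image report holes via
    # SEEK_DATA/SEEK_HOLE? On filesystems or NFS versions without support, the
    # whole file looks like data, so the mirror target ends up fully allocated.
    use constant { SEEK_DATA => 3, SEEK_HOLE => 4 };

    my $file = shift // die "usage: $0 <raw image>\n";
    open(my $fh, '<', $file) or die "open $file: $!\n";
    my $size = -s $fh;
    my $first_hole = sysseek($fh, 0, SEEK_HOLE);
    if (!defined $first_hole) {
        print "SEEK_HOLE not supported here ($!)\n";
    } else {
        $first_hole += 0;   # normalize sysseek's '0 but true'
        if ($first_hole >= $size) {
            print "no hole reported - the image would be copied as fully allocated\n";
        } else {
            print "first hole at offset $first_hole of $size bytes\n";
        }
    }
    close($fh);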

Re: [pve-devel] qemu drive-mirror with zeroinit and ceph/rbd, not working

2016-06-17 Thread Wolfgang Bumiller
> On June 17, 2016 at 10:01 AM Alexandre DERUMIER wrote: > > > Hi, > > I have tested drive-mirroring a .raw file to ceph/rbd, with the move disk > feature, > > and it seems that the target rbd volume is fully allocated. > > I'm not sure how the zeroinit patch is working?

[pve-devel] qemu drive-mirror with zeroinit and ceph/rbd, not working

2016-06-17 Thread Alexandre DERUMIER
Hi, I have tested drive-mirroring a .raw file to ceph/rbd with the move disk feature, and it seems that the target rbd volume is fully allocated. I'm not sure how the zeroinit patch is working? I see that it's enabled in rbdplugin. Alexandre

Re: [pve-devel] /var/lib/vz as a dataset

2016-06-17 Thread Andreas Steinel
I also tried that, yet depending on the order of mounting and the starting of some PVE daemons, the files get recreated and then the ZFS dataset cannot be mounted again.