Re: [pve-devel] [PATCH] disable kvm_steal_time

2015-09-28 Thread Dietmar Maurer
applied. On 09/28/2015 09:56 AM, Alexandre Derumier wrote: It's currently buggy with live migration: https://bugs.launchpad.net/qemu/+bug/1494350 Signed-off-by: Alexandre Derumier --- PVE/QemuServer.pm | 1 + 1 file changed, 1 insertion(+) diff --git

Re: [pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Cesar Peschiera
> This is not a critical bug; it merely affects network performance. Ok, but I guess that if we have several VMs on a server, the problem will be multiplied.

Re: [pve-devel] Qemu-img thin provision

2015-09-28 Thread Dietmar Maurer
> The way to solve it once and for all in a way that works with all MUAs > is this: > > Old-Reply-To: original sender > Reply-To: pve-devel@pve.proxmox.com > Precedence: list > List-Post: Sigh - seems 'Reply

Re: [pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Michael Rasmussen
On Tue, 29 Sep 2015 01:24:36 -0400 "Cesar Peschiera" wrote: > > My Question: > What will be the policy of PVE about this? > This is not a critical bug; it merely affects network performance. -- Hilsen/Regards Michael Rasmussen

[pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Cesar Peschiera
The error affects almost all Linux distros. See the notice at this link: http://bitsup.blogspot.com/2015/09/thanks-google-tcp-team-for-open-source.html See the Google patch here: https://github.com/torvalds/linux/commit/30927520dbae297182990bb21d08762bcc35ce1d My question: What will be the

Re: [pve-devel] [PATCH pve-ha-manager] delete node from HA stack when deleted from cluster

2015-09-28 Thread Dietmar Maurer
applied, thanks! On 09/28/2015 11:34 AM, Thomas Lamprecht wrote: When a node gets deleted from the cluster with pvecm delnode, we set its node state in the manager status to 'gone'. When set to 'gone', the manager waits an hour after the node was last seen online and only then deletes it from the

[pve-devel] vma storage format

2015-09-28 Thread Andreas Steinel
Hi all, I had a look at the vma format description file in the qemu git and would like to know whether the block data stored in the file is 4K-aligned or not. It only states that 'The extend header if followed by the actual cluster data, where we only store non-zero 4K blocks.' I assume it is not
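
The alignment question aside, the rule the quoted sentence describes is easy to illustrate: a writer walks each cluster in 4K steps and simply skips blocks that are entirely zero. A minimal Perl sketch of that idea (function name invented, not the actual qemu vma code):

    use strict;
    use warnings;

    use constant BLOCK_SIZE => 4096;

    my $zero = "\0" x BLOCK_SIZE;

    # Walk a cluster's payload in 4K steps and keep only non-zero blocks.
    sub nonzero_blocks {
        my ($cluster) = @_;
        my @blocks;
        for (my $off = 0; $off < length($cluster); $off += BLOCK_SIZE) {
            my $block = substr($cluster, $off, BLOCK_SIZE);
            push @blocks, [$off, $block] if $block ne $zero;
        }
        return \@blocks;   # all-zero blocks are simply never written out
    }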

[pve-devel] [PATCH] disable kvm_steal_time

2015-09-28 Thread Alexandre Derumier
It's currently buggy with live migration: https://bugs.launchpad.net/qemu/+bug/1494350 Signed-off-by: Alexandre Derumier --- PVE/QemuServer.pm | 1 + 1 file changed, 1 insertion(+) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 4906f2c..5ae5919 100644 ---
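
The diff is cut off above; as a hedged sketch only (the exact flag name and its placement in PVE/QemuServer.pm are assumptions, not the verbatim patch), disabling a KVM paravirt feature on the QEMU command line looks like prefixing the CPU flag with '-':

    use strict;
    use warnings;

    my $cmd = ['kvm'];                     # assembled command line (assumed)
    my $cpuFlags = [];
    push @$cpuFlags, '-kvm_steal_time';    # leading '-' disables the feature
    my $cpu = 'kvm64';
    $cpu .= ',' . join(',', @$cpuFlags) if scalar(@$cpuFlags);
    push @$cmd, '-cpu', $cpu;              # -> kvm -cpu kvm64,-kvm_steal_time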

[pve-devel] [PATCH] pvesm list fix

2015-09-28 Thread Alen Grizonic
--- PVE/API2/Storage/Content.pm | 13 ++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/PVE/API2/Storage/Content.pm b/PVE/API2/Storage/Content.pm index a7e9fe3..63dc0a4 100644 --- a/PVE/API2/Storage/Content.pm +++ b/PVE/API2/Storage/Content.pm @@ -67,6 +67,8 @@

[pve-devel] [PATCH pve-manager] check for ext5 dir to avoid missing directory errors

2015-09-28 Thread Thomas Lamprecht
As we, for now, default to excluding ext5 from our build, it's better to check whether its directory exists and only then allow loading from it. Otherwise we can get errors on proxy startup and when someone passes the ext5 parameter. Also make an indent/whitespace cleanup. Signed-off-by: Thomas
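
The patch body is not quoted here; a minimal Perl sketch of the described guard (path and branch contents are assumptions, not the actual pve-manager code):

    use strict;
    use warnings;

    # Assumed path; the real location in pve-manager may differ.
    my $ext5_dir = '/usr/share/pve-manager/ext5';

    if (-d $ext5_dir) {
        # directory shipped with this build: allow loading from it
    } else {
        # skip it, so neither proxy startup nor an explicit ext5
        # parameter runs into 'missing directory' errors
    }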

[pve-devel] [PATCH pve-ha-manager] handle node deletion in the HA stack

2015-09-28 Thread Thomas Lamprecht
When deleting a node from the cluster through pvecm delnode, the dead node wasn't removed from the HA manager status. Even if it has no real effect on the function of the HA stack, especially if no services ran there before the deletion - which should be the case. But for the user it is naturally

[pve-devel] [PATCH pve-ha-manager] delete node from HA stack when deleted from cluster

2015-09-28 Thread Thomas Lamprecht
When a node gets deleted from the cluster with pvecm delnode, we set its node state in the manager status to 'gone'. When set to 'gone', the manager waits an hour after the node was last seen online and only then deletes it from the manager status. When some HA services were forgotten on the node
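
A hedged sketch of the described 'gone' handling; the hash layout and helper name are assumptions, only the one-hour expiry comes from the text above:

    use strict;
    use warnings;

    my $EXPIRE_DELAY = 60 * 60;   # one hour, per the description above

    # $ss: manager status; both hash layouts here are assumptions.
    sub expire_gone_nodes {
        my ($ss) = @_;
        foreach my $node (keys %{$ss->{node_status}}) {
            next if $ss->{node_status}->{$node} ne 'gone';
            my $last_seen = $ss->{last_online}->{$node} // 0;
            if ((time() - $last_seen) > $EXPIRE_DELAY) {
                delete $ss->{node_status}->{$node};   # drop from manager status
                delete $ss->{last_online}->{$node};
            }
        }
    }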

[pve-devel] [PATCH pve-zsync 08/11] parse_disks: the pool comes first in the path

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pve-zsync b/pve-zsync index 492a245..c1af115 100644 --- a/pve-zsync +++ b/pve-zsync @@ -711,7 +711,7 @@ sub parse_disks { if ($path =~ m/^\/dev\/zvol\/(\w+.*)(\/$disk)$/) { my @array =
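
To make the one-line change concrete: a zvol path has the form /dev/zvol/<pool>/.../<disk>, so the pool (and any intermediate datasets) must be matched before the disk name. A small test using the regex from the patch, with invented sample values:

    use strict;
    use warnings;

    my $disk = 'vm-100-disk-1';               # invented example
    my $path = "/dev/zvol/rpool/data/$disk";  # invented example

    if ($path =~ m/^\/dev\/zvol\/(\w+.*)(\/$disk)$/) {
        print "pool-first prefix: $1\n";      # prints 'rpool/data'
    }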

[pve-devel] [PATCH pve-zsync 05/11] regex deduplication

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 15 +++ 1 file changed, 3 insertions(+), 12 deletions(-) diff --git a/pve-zsync b/pve-zsync index dfb383b..dfb3050 100644 --- a/pve-zsync +++ b/pve-zsync @@ -696,18 +696,9 @@ sub parse_disks { my $disk = undef; my $stor = undef; - if($line =~

[pve-devel] [PATCH pve-zsync 04/11] replace $is_disk with an early check

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pve-zsync b/pve-zsync index 33962a6..dfb383b 100644 --- a/pve-zsync +++ b/pve-zsync @@ -692,10 +692,10 @@ sub parse_disks { my $line = $1; next if $line =~ /cdrom|none/; + next if $line

[pve-devel] [PATCH pve-zsync 01/11] typo fix: exsits -> exists

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/pve-zsync b/pve-zsync index c23bc4b..216057b 100644 --- a/pve-zsync +++ b/pve-zsync @@ -77,7 +77,7 @@ sub get_status { return undef; } -sub check_pool_exsits { +sub check_pool_exists { my

[pve-devel] [PATCH pve-zsync 03/11] check for 'cdrom/none' storage early

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 7 +-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/pve-zsync b/pve-zsync index 9de01d2..33962a6 100644 --- a/pve-zsync +++ b/pve-zsync @@ -690,6 +690,9 @@ sub parse_disks { my $num = 0; while ($text && $text =~ s/^(.*?)(\n|$)//) { my $line =

[pve-devel] [PATCH pve-zsync 06/11] remove now unnecessary if($disk)

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 42 -- 1 file changed, 20 insertions(+), 22 deletions(-) diff --git a/pve-zsync b/pve-zsync index dfb3050..d784d19 100644 --- a/pve-zsync +++ b/pve-zsync @@ -703,29 +703,27 @@ sub parse_disks { die "disk is not on ZFS

[pve-devel] [PATCH pve-zsync 07/11] parse_disks: don't drop the path inside the pool

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pve-zsync b/pve-zsync index d784d19..492a245 100644 --- a/pve-zsync +++ b/pve-zsync @@ -708,7 +708,7 @@ sub parse_disks { $cmd .= "pvesm path $stor$disk"; my $path = run_cmd($cmd); - if

[pve-devel] [PATCH pve-zsync 02/11] Avoid 'no such file' error when no state exists.

2015-09-28 Thread Wolfgang Bumiller
--- pve-zsync | 2 ++ 1 file changed, 2 insertions(+) diff --git a/pve-zsync b/pve-zsync index 216057b..9de01d2 100644 --- a/pve-zsync +++ b/pve-zsync @@ -6,6 +6,7 @@ use Data::Dumper qw(Dumper); use Fcntl qw(:flock SEEK_END); use Getopt::Long qw(GetOptionsFromArray); use File::Copy qw(move);
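
The visible hunk shows only the import block; as a hedged guess at the described fix (function and argument names are invented), reading the state would be guarded by an existence check so a first run without a state file is not treated as an error:

    use strict;
    use warnings;

    sub read_state {
        my ($state_file) = @_;
        return undef if !-e $state_file;   # no sync ran yet: not an error
        open(my $fh, '<', $state_file)
            or die "could not open $state_file: $!\n";
        local $/;                          # slurp mode
        my $raw = <$fh>;
        close($fh);
        return $raw;
    }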

[pve-devel] [PATCH pve-zsync 00/11] fixes, hostnames/ipv6, subvolumes

2015-09-28 Thread Wolfgang Bumiller
This series fixes a few issues in pve-zsync and adds hostname and IPv6 support. It now also correctly parses disks on PVE ZFS subvolume storage. Note that intermediate paths on target subvolumes are not created. Should we do this or leave it up to the user? Wolfgang Bumiller (11): typo fix:

[pve-devel] [PATCH pve-zsync 11/11] use arrays for run_cmd and argument separators

2015-09-28 Thread Wolfgang Bumiller
Using the array version of run_cmd to avoid quoting issues. Added '--' argument separators where applicable for correctness. --- pve-zsync | 99 +++ 1 file changed, 49 insertions(+), 50 deletions(-) diff --git a/pve-zsync b/pve-zsync
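
The point of the change, sketched below; run_cmd here is a stand-in for the pve-zsync helper, not its actual implementation. Passing a single string to system() goes through the shell, so metacharacters in arguments (spaces, quotes, ';') can break or alter the command; the list form bypasses the shell entirely:

    use strict;
    use warnings;

    sub run_cmd {
        my ($cmd) = @_;
        if (ref($cmd) eq 'ARRAY') {
            # list form: no shell, arguments are passed through verbatim
            system(@$cmd) == 0 or die "command failed: $?\n";
        } else {
            # string form: interpreted by the shell, quoting issues possible
            system($cmd) == 0 or die "command failed: $?\n";
        }
    }

    # '--' (where the called program supports it) stops option parsing,
    # so a user-supplied name can never be mistaken for a flag, e.g.:
    #   run_cmd(['zfs', 'list', '-H', '--', $dataset]);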