Re: [pve-devel] [PATCH] disable kvm_steal_time

2015-09-28 Thread Dietmar Maurer

applied.

On 09/28/2015 09:56 AM, Alexandre Derumier wrote:

It's currently buggy with live migration

https://bugs.launchpad.net/qemu/+bug/1494350
Signed-off-by: Alexandre Derumier 
---
  PVE/QemuServer.pm | 1 +
  1 file changed, 1 insertion(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4906f2c..5ae5919 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2890,6 +2890,7 @@ sub config_to_command {
  
 	push @$cpuFlags , '+kvm_pv_unhalt' if !$nokvm;
 	push @$cpuFlags , '+kvm_pv_eoi' if !$nokvm;
+	push @$cpuFlags , '-kvm_steal_time' if !$nokvm;
  }
  
  push @$cpuFlags, 'enforce' if $cpu ne 'host' && !$nokvm;





Re: [pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Cesar Peschiera

> This is not a critical bug; it merely affects network performance.

OK, but I guess that if we have several VMs on a server, the problem will
be multiplied.





Re: [pve-devel] Qemu-img thin provision

2015-09-28 Thread Dietmar Maurer
> The way to solve it once and for all in a way that works with all MUA's
> is this:
> 
> Old-Reply-To: original sender 
> Reply-To: pve-devel@pve.proxmox.com 
> Precedence: list
> List-Post: 


Sigh - it seems 'Reply all' no longer works with 'Thunderbird' that way.



Re: [pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Michael Rasmussen
On Tue, 29 Sep 2015 01:24:36 -0400
"Cesar Peschiera"  wrote:

> 
> My Question:
> What will be PVE's policy on this?
> 
This is not a critical bug; it merely affects network performance.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Football builds self-discipline.  What else would induce a spectator to
sit out in the open in subfreezing weather?




[pve-devel] Google patch - fix a critical fail in Linux kernel for TCP protocol

2015-09-28 Thread Cesar Peschiera

The error affects almost all Linux distros.

See the notice in this link:
http://bitsup.blogspot.com/2015/09/thanks-google-tcp-team-for-open-source.html

See the Google patch here:
https://github.com/torvalds/linux/commit/30927520dbae297182990bb21d08762bcc35ce1d

My Question:
What will be PVE's policy on this?

Best regards
Cesar



Re: [pve-devel] [PATCH pve-ha-manager] delete node from HA stack when deleted from cluster

2015-09-28 Thread Dietmar Maurer

applied, thanks!

On 09/28/2015 11:34 AM, Thomas Lamprecht wrote:

When a node gets deleted from the cluster with pvecm delnode
we set its node state in the manager status to 'gone'.
When set to 'gone', the manager waits an hour after the node was last
seen online and only then deletes it from the manager status.

When some HA services were forgotten on the node (which shouldn't happen
at all!!) the node will be fenced, the services migrated and its
state then reset to 'gone'. After an hour the node will be deleted,
unless it has joined the cluster again in the meantime.

Deleting a node from the HA manager status is by no means a final
act; the ha-manager could live without deleting it, but for the user
it is confusing to see dead nodes in the interface.






[pve-devel] vma storage format

2015-09-28 Thread Andreas Steinel
Hi all,

I had a look at the vma format description file in the qemu git and
would like to know whether the block data stored in the file is 4K
aligned or not. It only states that 'The extent header is followed by
the actual cluster data, where we only store non-zero 4K blocks.'

I assume it is not aligned, yet it would be great if it were.
Deduplication of uncompressed backup files would be greatly improved
by this.

I tried to find the "real" vma code to get a glimpse of how the data
is actually stored in the vma file, yet I didn't find it in the git
repository. I only found an embedded src.tar.gz, and none of the
patches in debian/patches contained the complete vma implementation,
only patches to vma.

I cannot say for sure, but would it be possible to pad the data in
some way to get 4K-block-aligned backup data without breaking
backward compatibility? I'd really like to add a requirement of
4K-block-aligned storage to backup.txt, which would be another big
advantage of the vma backup mechanism.
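
For what it's worth, a minimal sketch (my own illustration in Perl, not
something from backup.txt) of the padding computation such a change would
need:

    # Bytes of padding needed so the next write starts on a 4K boundary.
    sub pad_to_4k {
        my ($offset) = @_;
        my $align = 4096;
        return ($align - $offset % $align) % $align;   # 0 when already aligned
    }
    # e.g. pad_to_4k(12) == 4084, pad_to_4k(8192) == 0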

Best regards,
Andreas Steinel


[pve-devel] [PATCH] disable kvm_steal_time

2015-09-28 Thread Alexandre Derumier
It's currently buggy with live migration

https://bugs.launchpad.net/qemu/+bug/1494350
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4906f2c..5ae5919 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2890,6 +2890,7 @@ sub config_to_command {
 
push @$cpuFlags , '+kvm_pv_unhalt' if !$nokvm;
push @$cpuFlags , '+kvm_pv_eoi' if !$nokvm;
+push @$cpuFlags , '-kvm_steal_time' if !$nokvm;
 }
 
 push @$cpuFlags, 'enforce' if $cpu ne 'host' && !$nokvm;
-- 
2.1.4
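
For context, with the default kvm64 CPU type and KVM enabled, the resulting
-cpu argument should look along these lines (illustrative, not copied from
an actual run):

    -cpu kvm64,+kvm_pv_unhalt,+kvm_pv_eoi,-kvm_steal_time,enforce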



[pve-devel] [PATCH] pvesm list fix

2015-09-28 Thread Alen Grizonic
---
 PVE/API2/Storage/Content.pm | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Storage/Content.pm b/PVE/API2/Storage/Content.pm
index a7e9fe3..63dc0a4 100644
--- a/PVE/API2/Storage/Content.pm
+++ b/PVE/API2/Storage/Content.pm
@@ -67,6 +67,8 @@ __PACKAGE__->register_method ({
 
my $storeid = $param->{storage};
 
+   my $vmid = $param->{vmid};
+
my $cfg = cfs_read_file("storage.cfg");
 
my $scfg = PVE::Storage::storage_config($cfg, $storeid);
@@ -74,14 +76,19 @@ __PACKAGE__->register_method ({
my $res = [];
foreach my $ct (@$cts) {
my $data;
-	if ($ct eq 'images' || defined($param->{vmid})) {
+	if ($ct eq 'images') {
 	    $data = PVE::Storage::vdisk_list ($cfg, $storeid, $param->{vmid});
-	} elsif ($ct eq 'iso') {
+	} elsif ($ct eq 'iso' && !defined($param->{vmid})) {
 	    $data = PVE::Storage::template_list ($cfg, $storeid, 'iso');
-	} elsif ($ct eq 'vztmpl') {
+	} elsif ($ct eq 'vztmpl' && !defined($param->{vmid})) {
 	    $data = PVE::Storage::template_list ($cfg, $storeid, 'vztmpl');
 	} elsif ($ct eq 'backup') {
 	    $data = PVE::Storage::template_list ($cfg, $storeid, 'backup');
+	    foreach my $item (@{$data->{$storeid}}) {
+		if (defined($vmid)) {
+		    @{$data->{$storeid}} = grep { $_->{volid} =~ m/\S+-$vmid-\S+/ } @{$data->{$storeid}};
+		}
+	    }
 	}
 
next if !$data || !$data->{$storeid};
-- 
2.1.4
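
As an aside, the foreach around the grep re-filters the whole volume list
once per element; filtering once is equivalent. A minimal sketch of that
variant (an editorial suggestion, not the applied patch):

    } elsif ($ct eq 'backup') {
        $data = PVE::Storage::template_list ($cfg, $storeid, 'backup');
        # same regex as in the patch, applied a single time
        if (defined($vmid)) {
            @{$data->{$storeid}} =
                grep { $_->{volid} =~ m/\S+-$vmid-\S+/ } @{$data->{$storeid}};
        }
    }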




[pve-devel] [PATCH pve-manager] check for ext5 dir to avoid missing directory errors

2015-09-28 Thread Thomas Lamprecht
As we, for now, exclude ext5 from our build by default, it's better
to check whether its directory exists and only then allow loading
from it. Otherwise we can get errors on proxy startup and when
someone passes the ext5 parameter.

Also do an indent/whitespace cleanup.

Signed-off-by: Thomas Lamprecht 
---
 PVE/Service/pveproxy.pm | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/PVE/Service/pveproxy.pm b/PVE/Service/pveproxy.pm
index d9ae9b8..16edad1 100755
--- a/PVE/Service/pveproxy.pm
+++ b/PVE/Service/pveproxy.pm
@@ -40,6 +40,8 @@ my %daemon_options = (
 
 my $daemon = __PACKAGE__->new('pveproxy', $cmdline, %daemon_options);
 
+my $ext5_dir_exists;
+
 sub add_dirs {
 my ($result_hash, $alias, $subdir) = @_;
 
@@ -70,13 +72,19 @@ sub init {
 my $family = PVE::Tools::get_host_address_family($self->{nodename});
 my $socket = $self->create_reusable_socket(8006, undef, $family);
 
+$ext5_dir_exists = (-d '/usr/share/pve-manager/ext5');
+
 my $dirs = {};
 
 add_dirs($dirs, '/pve2/locale/', '/usr/share/pve-manager/locale/');
 add_dirs($dirs, '/pve2/touch/', '/usr/share/pve-manager/touch/');
 add_dirs($dirs, '/pve2/ext4/', '/usr/share/pve-manager/ext4/');
-add_dirs($dirs, '/pve2/ext5/', '/usr/share/pve-manager/ext5/');
-add_dirs($dirs, '/pve2/manager5/', '/usr/share/pve-manager/manager5/');
+
+if ($ext5_dir_exists) { # only add ext5 dirs if it was built
+   add_dirs($dirs, '/pve2/ext5/', '/usr/share/pve-manager/ext5/');
+   add_dirs($dirs, '/pve2/manager5/', '/usr/share/pve-manager/manager5/');
+}
+
 add_dirs($dirs, '/pve2/images/' => '/usr/share/pve-manager/images/');
 add_dirs($dirs, '/pve2/css/' => '/usr/share/pve-manager/css/');
 add_dirs($dirs, '/pve2/js/' => '/usr/share/pve-manager/js/');
@@ -183,22 +191,22 @@ sub get_index {
$mobile = $args->{mobile} ? 1 : 0;
 }
 
-   my $ext5;
-   if (defined($args->{ext5})) {
+my $ext5;
+if (defined($args->{ext5})) {
$ext5 = $args->{ext5} ? 1 : 0;
 }
 
 my $page;
 
     if (defined($args->{console}) && $args->{novnc}) {
-	$page = PVE::NoVncIndex::get_index($lang, $username, $token, $args->{console});
+	$page = PVE::NoVncIndex::get_index($lang, $username, $token, $args->{console});
     } elsif ($mobile) {
-	$page = PVE::TouchIndex::get_index($lang, $username, $token, $args->{console});
-    } elsif ($ext5) {
-	$page = PVE::ExtJSIndex5::get_index($lang, $username, $token, $args->{console});
-    } else {
-	$page = PVE::ExtJSIndex::get_index($lang, $username, $token, $args->{console});
-    }
+	$page = PVE::TouchIndex::get_index($lang, $username, $token, $args->{console});
+    } elsif ($ext5 && $ext5_dir_exists) {
+	$page = PVE::ExtJSIndex5::get_index($lang, $username, $token, $args->{console});
+    } else {
+	$page = PVE::ExtJSIndex::get_index($lang, $username, $token, $args->{console});
+    }
     my $headers = HTTP::Headers->new(Content_Type => "text/html; charset=utf-8");
 my $resp = HTTP::Response->new(200, "OK", $headers, $page);
 
-- 
2.1.4




[pve-devel] [PATCH pve-ha-manager] handle node deletion in the HA stack

2015-09-28 Thread Thomas Lamprecht
When deleting a node from the cluster through pvecm delnode, the dead node
wasn't removed from the HA manager's status.
This has no real effect on the function of the HA stack, especially if no
services ran there before the deletion - which should be the case.
But for the user it is naturally confusing to see dead nodes in the interface.

This patch proposes an automated removal process: an hour after the node
has vanished from the cluster member list, it gets deleted.

An alternative approach would be a manual command through the ha-manager
binary.

This patch doesn't cover some side effects, like the deletion of the node
from defined groups.

The commit message has some more details.

Thomas Lamprecht (1):
  delete node from HA stack when deleted from cluster

 src/PVE/HA/NodeStatus.pm | 26 --
 1 file changed, 24 insertions(+), 2 deletions(-)

-- 
2.1.4




[pve-devel] [PATCH pve-ha-manager] delete node from HA stack when deleted from cluster

2015-09-28 Thread Thomas Lamprecht
When a node gets deleted from the cluster with pvecm delnode
we set its node state in the manager status to 'gone'.
When set to 'gone', the manager waits an hour after the node was last
seen online and only then deletes it from the manager status.

When some HA services were forgotten on the node (which shouldn't happen
at all!!) the node will be fenced, the services migrated and its
state then reset to 'gone'. After an hour the node will be deleted,
unless it has joined the cluster again in the meantime.

Deleting a node from the HA manager status is by no means a final
act; the ha-manager could live without deleting it, but for the user
it is confusing to see dead nodes in the interface.

Signed-off-by: Thomas Lamprecht 
---
 src/PVE/HA/NodeStatus.pm | 26 --
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/src/PVE/HA/NodeStatus.pm b/src/PVE/HA/NodeStatus.pm
index fe8c0ef..eb174cb 100644
--- a/src/PVE/HA/NodeStatus.pm
+++ b/src/PVE/HA/NodeStatus.pm
@@ -24,6 +24,7 @@ my $valid_node_states = {
 online => "node online and member of quorate partition",
 unknown => "not member of quorate partition, but possibly still running",
 fence => "node needs to be fenced",
+gone => "node vanished from cluster members list, possibly deleted"
 };
 
 sub get_node_state {
@@ -79,6 +80,20 @@ sub list_online_nodes {
 return $res;
 }
 
+my $delete_node = sub {
+my ($self, $node) = @_;
+
+return undef if $self->get_node_state($node) ne 'gone';
+
+my $haenv = $self->{haenv};
+
+delete $self->{last_online}->{$node};
+delete $self->{status}->{$node};
+
+    $haenv->log('notice', "deleting gone node '$node', not a cluster member anymore.");
+};
+
 my $set_node_state = sub {
 my ($self, $node, $state) = @_;
 
@@ -113,7 +128,7 @@ sub update {
 
if ($state eq 'online') {
# &$set_node_state($self, $node, 'online');
-   } elsif ($state eq 'unknown') {
+   } elsif ($state eq 'unknown' || $state eq 'gone') {
&$set_node_state($self, $node, 'online');
} elsif ($state eq 'fence') {
# do nothing, wait until fenced
@@ -133,9 +148,16 @@ sub update {
if ($state eq 'online') {
&$set_node_state($self, $node, 'unknown');
} elsif ($state eq 'unknown') {
-   # &$set_node_state($self, $node, 'unknown');
+
+   # node isn't in the member list anymore, deleted from the cluster?
+   &$set_node_state($self, $node, 'gone') if(!defined($d));
+
} elsif ($state eq 'fence') {
# do nothing, wait until fenced
+   } elsif($state eq 'gone') {
+   if($self->node_is_offline_delayed($node, 3600)) {
+   &$delete_node($self, $node);
+   }
} else {
die "detected unknown node state '$state";
}
-- 
2.1.4
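
For reference, node_is_offline_delayed($node, 3600) is only visible here
through its call site; a plausible sketch of the semantics it has to provide
(an assumption, not the actual implementation):

    # Returns true if $node has not been seen online for at least $delay
    # seconds, judged by the last_online timestamp tracked per node.
    sub node_is_offline_delayed {
        my ($self, $node, $delay) = @_;
        my $last = $self->{last_online}->{$node};
        return 1 if !defined($last);   # never seen online
        return ($self->{haenv}->get_time() - $last) >= $delay;
    }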




[pve-devel] [PATCH pve-zsync 08/11] parse_disks: the pool comes first in the path

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pve-zsync b/pve-zsync
index 492a245..c1af115 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -711,7 +711,7 @@ sub parse_disks {
if ($path =~ m/^\/dev\/zvol\/(\w+.*)(\/$disk)$/) {
 
my @array = split('/', $1);
-   $disks->{$num}->{pool} = pop(@array);
+   $disks->{$num}->{pool} = shift(@array);
$disks->{$num}->{all} = $disks->{$num}->{pool};
if (0 < @array) {
$disks->{$num}->{path} = join('/', @array);
-- 
2.1.4
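
A worked example with a hypothetical zvol path: for
/dev/zvol/rpool/data/vm-100-disk-1, the regex from patch 07/11 captures
'rpool/data' in $1, so:

    my @array = split('/', 'rpool/data');   # ('rpool', 'data')
    my $pool  = shift(@array);     # 'rpool' - the pool is the first component
    my $path  = join('/', @array); # 'data'  - the dataset path inside the pool
    # pop(@array) would have returned 'data' and mislabeled it as the pool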




[pve-devel] [PATCH pve-zsync 05/11] regex deduplication

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 15 +++
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/pve-zsync b/pve-zsync
index dfb383b..dfb3050 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -696,18 +696,9 @@ sub parse_disks {
 
my $disk = undef;
my $stor = undef;
-   if($line =~ m/^(virtio\d+: )(.+:)([A-Za-z0-9\-]+),(.*)$/) {
-   $disk = $3;
-   $stor = $2;
-   } elsif($line =~ m/^(ide\d+: )(.+:)([A-Za-z0-9\-]+),(.*)$/) {
-   $disk = $3;
-   $stor = $2;
-   } elsif($line =~ m/^(scsi\d+: )(.+:)([A-Za-z0-9\-]+),(.*)$/) {
-   $disk = $3;
-   $stor = $2;
-   } elsif($line =~ m/^(sata\d+: )(.+:)([A-Za-z0-9\-]+),(.*)$/) {
-   $disk = $3;
-   $stor = $2;
+	if($line =~ m/^(?:virtio|ide|scsi|sata)\d+: (.+:)([A-Za-z0-9\-]+),(.*)$/) {
+	    $disk = $2;
+	    $stor = $1;
} else {
die "disk is not on ZFS Storage\n";
}
-- 
2.1.4
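
For illustration, a hypothetical config line and what the merged pattern
captures - since the bus prefix is now non-capturing, the groups shift from
$3/$2 to $2/$1:

    my $line = 'virtio0: local-zfs:vm-100-disk-1,size=32G';   # hypothetical
    if ($line =~ m/^(?:virtio|ide|scsi|sata)\d+: (.+:)([A-Za-z0-9\-]+),(.*)$/) {
        my $stor = $1;   # 'local-zfs:'
        my $disk = $2;   # 'vm-100-disk-1'
    }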




[pve-devel] [PATCH pve-zsync 04/11] replace $is_disk with an early check

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pve-zsync b/pve-zsync
index 33962a6..dfb383b 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -692,10 +692,10 @@ sub parse_disks {
my $line = $1;
 
next if $line =~ /cdrom|none/;
+   next if $line !~ m/^(?:virtio|ide|scsi|sata)\d+: /;
 
my $disk = undef;
my $stor = undef;
-   my $is_disk = $line =~ m/^(virtio|ide|scsi|sata){1}\d+: /;
if($line =~ m/^(virtio\d+: )(.+:)([A-Za-z0-9\-]+),(.*)$/) {
$disk = $3;
$stor = $2;
@@ -708,10 +708,10 @@ sub parse_disks {
} elsif($line =~ m/^(sata\d+: )(.+:)([A-Za-z0-9\-]+),(.*)$/) {
$disk = $3;
$stor = $2;
+   } else {
+   die "disk is not on ZFS Storage\n";
}
 
-   die "disk is not on ZFS Storage\n" if $is_disk && !$disk;
-
if($disk) {
my $cmd = "";
$cmd .= "ssh root\@$ip " if $ip;
-- 
2.1.4




[pve-devel] [PATCH pve-zsync 01/11] typo fix: exsits -> exists

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pve-zsync b/pve-zsync
index c23bc4b..216057b 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -77,7 +77,7 @@ sub get_status {
 return undef;
 }
 
-sub check_pool_exsits {
+sub check_pool_exists {
 my ($target) = @_;
 
 my $cmd = '';
@@ -436,9 +436,9 @@ sub init {
run_cmd("ssh-copy-id -i /root/.ssh/id_rsa.pub root\@$ip");
 }
 
-die "Pool $dest->{all} does not exists\n" if check_pool_exsits($dest);
+die "Pool $dest->{all} does not exists\n" if check_pool_exists($dest);
 
-my $check = check_pool_exsits($source->{path}, $source->{ip}) if !$source->{vmid} && $source->{path};
+my $check = check_pool_exists($source->{path}, $source->{ip}) if !$source->{vmid} && $source->{path};
 
 die "Pool $source->{path} does not exists\n" if undef($check);
 
-- 
2.1.4




[pve-devel] [PATCH pve-zsync 03/11] check for 'cdrom/none' storage early

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/pve-zsync b/pve-zsync
index 9de01d2..33962a6 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -690,6 +690,9 @@ sub parse_disks {
 my $num = 0;
 while ($text && $text =~ s/^(.*?)(\n|$)//) {
my $line = $1;
+
+   next if $line =~ /cdrom|none/;
+
my $disk = undef;
my $stor = undef;
my $is_disk = $line =~ m/^(virtio|ide|scsi|sata){1}\d+: /;
@@ -707,9 +710,9 @@ sub parse_disks {
$stor = $2;
}
 
-   die "disk is not on ZFS Storage\n" if $is_disk && !$disk && $line !~ 
m/cdrom/;
+   die "disk is not on ZFS Storage\n" if $is_disk && !$disk;
 
-   if($disk && $line !~ m/none/ && $line !~ m/cdrom/ ) {
+   if($disk) {
my $cmd = "";
$cmd .= "ssh root\@$ip " if $ip;
$cmd .= "pvesm path $stor$disk";
-- 
2.1.4




[pve-devel] [PATCH pve-zsync 06/11] remove now unnecessary if($disk)

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 42 --
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/pve-zsync b/pve-zsync
index dfb3050..d784d19 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -703,29 +703,27 @@ sub parse_disks {
die "disk is not on ZFS Storage\n";
}
 
-   if($disk) {
-   my $cmd = "";
-   $cmd .= "ssh root\@$ip " if $ip;
-   $cmd .= "pvesm path $stor$disk";
-   my $path = run_cmd($cmd);
-
-   if ($path =~ m/^\/dev\/zvol\/(\w+).*(\/$disk)$/) {
-
-   my @array = split('/', $1);
-   $disks->{$num}->{pool} = pop(@array);
-   $disks->{$num}->{all} = $disks->{$num}->{pool};
-   if (0 < @array) {
-   $disks->{$num}->{path} = join('/', @array);
-   $disks->{$num}->{all} .= "\/$disks->{$num}->{path}";
-   }
-   $disks->{$num}->{last_part} = $disk;
-   $disks->{$num}->{all} .= "\/$disk";
-
-   $num++;
-
-   } else {
-   die "ERROR: in path\n";
+   my $cmd = "";
+   $cmd .= "ssh root\@$ip " if $ip;
+   $cmd .= "pvesm path $stor$disk";
+   my $path = run_cmd($cmd);
+
+   if ($path =~ m/^\/dev\/zvol\/(\w+).*(\/$disk)$/) {
+
+   my @array = split('/', $1);
+   $disks->{$num}->{pool} = pop(@array);
+   $disks->{$num}->{all} = $disks->{$num}->{pool};
+   if (0 < @array) {
+   $disks->{$num}->{path} = join('/', @array);
+   $disks->{$num}->{all} .= "\/$disks->{$num}->{path}";
}
+   $disks->{$num}->{last_part} = $disk;
+   $disks->{$num}->{all} .= "\/$disk";
+
+   $num++;
+
+   } else {
+   die "ERROR: in path\n";
}
 }
 
-- 
2.1.4




[pve-devel] [PATCH pve-zsync 07/11] parse_disks: don't drop the path inside the pool

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pve-zsync b/pve-zsync
index d784d19..492a245 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -708,7 +708,7 @@ sub parse_disks {
$cmd .= "pvesm path $stor$disk";
my $path = run_cmd($cmd);
 
-   if ($path =~ m/^\/dev\/zvol\/(\w+).*(\/$disk)$/) {
+   if ($path =~ m/^\/dev\/zvol\/(\w+.*)(\/$disk)$/) {
 
my @array = split('/', $1);
$disks->{$num}->{pool} = pop(@array);
-- 
2.1.4
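
To illustrate with a hypothetical subvolume path, for
$path = '/dev/zvol/rpool/data/vm-100-disk-1' and $disk = 'vm-100-disk-1':

    # old: (\w+).*  -> $1 = 'rpool'      ('data' is swallowed by the ungrouped .*)
    # new: (\w+.*)  -> $1 = 'rpool/data' (the full dataset path under /dev/zvol)
    if ($path =~ m/^\/dev\/zvol\/(\w+.*)(\/$disk)$/) {
        print "$1\n";   # rpool/data
    }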




[pve-devel] [PATCH pve-zsync 02/11] Avoid 'no such file' error when no state exists.

2015-09-28 Thread Wolfgang Bumiller
---
 pve-zsync | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/pve-zsync b/pve-zsync
index 216057b..9de01d2 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -6,6 +6,7 @@ use Data::Dumper qw(Dumper);
 use Fcntl qw(:flock SEEK_END);
 use Getopt::Long qw(GetOptionsFromArray);
 use File::Copy qw(move);
+use File::Path qw(make_path);
 use Switch;
 use JSON;
 use IO::File;
@@ -244,6 +245,7 @@ sub param_to_job {
 sub read_state {
 
 if (!-e $STATE) {
+   make_path $CONFIG_PATH;
my $new_fh = IO::File->new("> $STATE");
die "Could not create $STATE: $!\n" if !$new_fh;
print $new_fh "{}";
-- 
2.1.4
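
To make the failure mode concrete: without make_path, opening the state file
fails with 'No such file or directory' when the parent directory is missing.
A minimal sketch (both paths are assumptions about the script's defaults):

    use File::Path qw(make_path);
    use IO::File;

    my $CONFIG_PATH = '/var/lib/pve-zsync';        # assumed default
    my $STATE = "$CONFIG_PATH/sync_state";         # assumed default

    make_path($CONFIG_PATH);   # creates missing parents; no-op if they exist
    my $new_fh = IO::File->new("> $STATE")
        or die "Could not create $STATE: $!\n";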




[pve-devel] [PATCH pve-zsync 00/11] fixes, hostnames/ipv6, subvolumes

2015-09-28 Thread Wolfgang Bumiller
This series fixes a few issues in pve-zsync and adds hostname and
IPv6 support.
It now also correctly parses disks on PVE ZFS subvolume storage.
Note that intermediate paths on target subvolumes are not created.
Should we do this, or leave it up to the user?

Wolfgang Bumiller (11):
  typo fix: exsits -> exists
  Avoid 'no such file' error when no state exists.
  check for 'cdrom/none' storage early
  replace $is_disk with an early check
  regex deduplication
  remove now unnecessary if($disk)
  parse_disks: don't drop the path inside the pool
  parse_disks: the pool comes first in the path
  run_cmd: array support
  parse_target/check_target: support ipv6 and hostnames
  use arrays for run_cmd and argument separators

 pve-zsync | 218 +-
 1 file changed, 115 insertions(+), 103 deletions(-)

-- 
2.1.4




[pve-devel] [PATCH pve-zsync 11/11] use arrays for run_cmd and argument separators

2015-09-28 Thread Wolfgang Bumiller
Using the array version of run_cmd to avoid quoting issues.
Added '--' argument separators where applicable for
correctness.
---
 pve-zsync | 99 +++
 1 file changed, 49 insertions(+), 50 deletions(-)
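
To make the quoting issue concrete, a small sketch with a hypothetical
dataset name (run_cmd accepts both forms once patch 09/11 is applied):

    my $dataset = 'tank/my pool';                 # hypothetical, contains a space

    # String form: a shell word-splits it, so zfs sees two arguments.
    run_cmd("zfs list $dataset");                 # 'tank/my' and 'pool'

    # Array form: each element is exactly one argv entry, no shell parsing.
    run_cmd(['zfs', 'list', '--', $dataset]);     # 'tank/my pool'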

diff --git a/pve-zsync b/pve-zsync
index 12c073d..4941456 100644
--- a/pve-zsync
+++ b/pve-zsync
@@ -104,9 +104,9 @@ sub get_status {
 sub check_pool_exists {
 my ($target) = @_;
 
-my $cmd = '';
-$cmd = "ssh root\@$target->{ip} " if $target->{ip};
-$cmd .= "zfs list $target->{all} -H";
+my $cmd = [];
+push @$cmd, 'ssh', "root\@$target->{ip}", '--', if $target->{ip};
+push @$cmd, 'zfs', 'list', '-H', '--', $target->{all};
 eval {
run_cmd($cmd);
 };
@@ -430,9 +430,9 @@ sub list {
 sub vm_exists {
 my ($target) = @_;
 
-my $cmd = "";
-$cmd = "ssh root\@$target->{ip} " if ($target->{ip});
-$cmd .= "qm status $target->{vmid}";
+my $cmd = [];
+push @$cmd, 'ssh', "root\@$target->{ip}", '--', if $target->{ip};
+push @$cmd, 'qm', 'status', $target->{vmid};
 
 my $res = run_cmd($cmd);
 
@@ -454,11 +454,11 @@ sub init {
 my $dest = parse_target($param->{dest});
 
 if (my $ip =  $dest->{ip}) {
-   run_cmd("ssh-copy-id -i /root/.ssh/id_rsa.pub root\@$ip");
+   run_cmd(['ssh-copy-id', '-i', '/root/.ssh/id_rsa.pub', "root\@$ip"]);
 }
 
 if (my $ip =  $source->{ip}) {
-   run_cmd("ssh-copy-id -i /root/.ssh/id_rsa.pub root\@$ip");
+   run_cmd(['ssh-copy-id', '-i', '/root/.ssh/id_rsa.pub', "root\@$ip"]);
 }
 
 die "Pool $dest->{all} does not exists\n" if check_pool_exists($dest);
@@ -590,10 +590,10 @@ sub sync {
 sub snapshot_get{
 my ($source, $dest, $max_snap, $name) = @_;
 
-my $cmd = "zfs list -r -t snapshot -Ho name, -S creation ";
-
-$cmd .= $source->{all};
-$cmd = "ssh root\@$source->{ip} ".$cmd if $source->{ip};
+my $cmd = [];
+push @$cmd, 'ssh', "root\@$source->{ip}", '--', if $source->{ip};
+    push @$cmd, 'zfs', 'list', '-r', '-t', 'snapshot', '-Ho', 'name', '-S', 'creation';
+push @$cmd, $source->{all};
 
 my $raw = run_cmd($cmd);
 my $index = 0;
@@ -628,9 +628,9 @@ sub snapshot_add {
 
 my $path = "$source->{all}\@$snap_name";
 
-my $cmd = "zfs snapshot $path";
-$cmd = "ssh root\@$source->{ip}  $cmd" if $source->{ip};
-
+my $cmd = [];
+push @$cmd, 'ssh', "root\@$source->{ip}", '--', if $source->{ip};
+push @$cmd, 'zfs', 'snapshot', $path;
 eval{
run_cmd($cmd);
 };
@@ -680,9 +680,9 @@ sub write_cron {
 sub get_disks {
 my ($target) = @_;
 
-my $cmd = "";
-$cmd = "ssh root\@$target->{ip} " if $target->{ip};
-$cmd .= "qm config $target->{vmid}";
+my $cmd = [];
+push @$cmd, 'ssh', "root\@$target->{ip}", '--', if $target->{ip};
+push @$cmd, 'qm', 'config', $target->{vmid};
 
 my $res = run_cmd($cmd);
 
@@ -729,9 +729,9 @@ sub parse_disks {
die "disk is not on ZFS Storage\n";
}
 
-   my $cmd = "";
-   $cmd .= "ssh root\@$ip " if $ip;
-   $cmd .= "pvesm path $stor$disk";
+   my $cmd = [];
+   push @$cmd, 'ssh', "root\@$ip", '--' if $ip;
+   push @$cmd, 'pvesm', 'path', "$stor$disk";
my $path = run_cmd($cmd);
 
if ($path =~ m/^\/dev\/zvol\/(\w+.*)(\/$disk)$/) {
@@ -759,26 +759,26 @@ sub parse_disks {
 sub snapshot_destroy {
 my ($source, $dest, $method, $snap) = @_;
 
-my $zfscmd = "zfs destroy ";
+my @zfscmd = ('zfs', 'destroy');
 my $snapshot = "$source->{all}\@$snap";
 
 eval {
if($source->{ip} && $method eq 'ssh'){
-   run_cmd("ssh root\@$source->{ip} $zfscmd $snapshot");
+   run_cmd(['ssh', "root\@$source->{ip}", '--', @zfscmd, $snapshot]);
} else {
-   run_cmd("$zfscmd $snapshot");
+   run_cmd([@zfscmd, $snapshot]);
}
 };
 if (my $erro = $@) {
warn "WARN: $erro";
 }
 if ($dest) {
-   my $ssh =  $dest->{ip} ? "ssh root\@$dest->{ip}" : "";
+   my @ssh = $dest->{ip} ? ('ssh', "root\@$dest->{ip}", '--') : ();
 
my $path = "$dest->{all}\/$source->{last_part}";
 
eval {
-   run_cmd("$ssh $zfscmd $path\@$snap ");
+   run_cmd([@ssh, @zfscmd, "$path\@$snap"]);
};
if (my $erro = $@) {
warn "WARN: $erro";
@@ -789,10 +789,10 @@ sub snapshot_destroy {
 sub snapshot_exist {
 my ($source ,$dest, $method) = @_;
 
-my $cmd = "";
-$cmd = "ssh root\@$dest->{ip} " if $dest->{ip};
-$cmd .= "zfs list -rt snapshot -Ho name $dest->{all}";
-$cmd .= "\/$source->{last_part}\@$source->{old_snap}";
+my $cmd = [];
+push @$cmd, 'ssh', "root\@$dest->{ip}", '--' if $dest->{ip};
+push @$cmd, 'zfs', 'list', '-rt', 'snapshot', '-Ho', 'name';
+push @$cmd, "$dest->{all}/$source->{last_part}\@$source->{old_snap}";
 
 my $text = "";
 eval {$text =run_cmd($cmd);};
@@ -810,26 +810,25 @@ sub snapshot_exist {