Re: [pve-devel] [PATCH container] correctly set unlimited cpulimit at runtime

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH manager] lxc: correctly display cpulimit

2016-09-15 Thread Dietmar Maurer
Applied

> --- a/www/manager6/lxc/ResourceEdit.js
> +++ b/www/manager6/lxc/ResourceEdit.js
> @@ -131,6 +131,7 @@ Ext.define('PVE.lxc.CPUInputPanel', {
>   me.column1 = items;
>   } else {
>   me.items = items;
> + me.items[0].value = 0;

this is not really nice - I do the following instead:


-   value: '1',
+   value: me.insideWizard ? 1 : 0,



[pve-devel] [PATCH manager] lxc: correctly display cpulimit

2016-09-15 Thread Fabian Grünbichler
the backend defaults to 0 (unlimited), so this should also be the default
in the GUI if nothing is set.
---
 www/manager6/lxc/ResourceEdit.js | 1 +
 www/manager6/lxc/Resources.js    | 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/www/manager6/lxc/ResourceEdit.js b/www/manager6/lxc/ResourceEdit.js
index 498e789..ff49e2f 100644
--- a/www/manager6/lxc/ResourceEdit.js
+++ b/www/manager6/lxc/ResourceEdit.js
@@ -131,6 +131,7 @@ Ext.define('PVE.lxc.CPUInputPanel', {
me.column1 = items;
} else {
me.items = items;
+   me.items[0].value = 0;
}

me.callParent();
diff --git a/www/manager6/lxc/Resources.js b/www/manager6/lxc/Resources.js
index 355b903..70dfba5 100644
--- a/www/manager6/lxc/Resources.js
+++ b/www/manager6/lxc/Resources.js
@@ -62,10 +62,10 @@ Ext.define('PVE.lxc.RessourceView', {
header: gettext('CPU limit'),
never_delete: true,
	editor: caps.vms['VM.Config.CPU'] ? 'PVE.lxc.CPUEdit' : undefined,
-   defaultValue: 1,
+   defaultValue: 0,
tdCls: 'pve-itype-icon-processor',
renderer: function(value) {
-   if (value) { return value; }
+   if (value > 0) { return value; }
return gettext('unlimited');
}
},
-- 
2.1.4




[pve-devel] [PATCH container] correctly set unlimited cpulimit at runtime

2016-09-15 Thread Fabian Grünbichler
'-1' means no limit for this cgroup value, so use this like we already do when
deleting the cpulimit.
---
 src/PVE/LXC/Config.pm | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 7480fff..2ec643e 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -883,7 +883,11 @@ sub update_pct_config {
my $list = PVE::LXC::verify_searchdomain_list($value);
$conf->{$opt} = $list;
} elsif ($opt eq 'cpulimit') {
-	PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.cfs_quota_us", int(100000*$value));
+	if ($value == 0) {
+	    PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.cfs_quota_us", -1);
+	} else {
+	    PVE::LXC::write_cgroup_value("cpu", $vmid, "cpu.cfs_quota_us", int(100000*$value));
+	}
$conf->{$opt} = $value;
} elsif ($opt eq 'cpuunits') {
$conf->{$opt} = $value;
-- 
2.1.4
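
For context, the quota arithmetic both cpulimit patches rely on, as a
standalone sketch (not PVE code; assumes the kernel default
cpu.cfs_period_us of 100000, i.e. 100 ms):

    use strict;
    use warnings;

    my $CFS_PERIOD_US = 100_000;    # assumed kernel default period

    sub cpulimit_to_quota_us {
        my ($cpulimit) = @_;          # number of cores; 0 means unlimited
        return -1 if $cpulimit == 0;  # -1 disables the CFS quota entirely
        return int($CFS_PERIOD_US * $cpulimit);
    }

    printf "cpulimit %-4s => cfs_quota_us %d\n", $_, cpulimit_to_quota_us($_)
        for (0, 0.5, 2);
    # cpulimit 0    => cfs_quota_us -1
    # cpulimit 0.5  => cfs_quota_us 50000
    # cpulimit 2    => cfs_quota_us 200000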




Re: [pve-devel] [PATCH] HA: add the managers' FSM diagram

2016-09-15 Thread Dietmar Maurer
the image is too big...



[pve-devel] [PATCH] HA: add the managers' FSM diagram

2016-09-15 Thread Thomas Lamprecht
Signed-off-by: Thomas Lamprecht 
---

for now as SVG as the graphviz filter from asciidoc isn't that
obedient when asked to render the FSM as vector graphic.. :(

 ha-manager.adoc   |   6 ++
 images/ha-manager-fsm.svg | 166 ++
 2 files changed, 172 insertions(+)
 create mode 100644 images/ha-manager-fsm.svg

diff --git a/ha-manager.adoc b/ha-manager.adoc
index 5db5b05..9e8dc06 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -161,6 +161,12 @@ The cluster resource manager (CRM), it controls the cluster wide
 actions of the services, processes the LRM results and includes the state
 machine which controls the state of each service.
 
+ifndef::manvolnum[]
+
+image::images/ha-manager-fsm.svg[title="Managers' Finite State Machine: blue are manual and black are automatic triggered transitions",align="center"]
+
+endif::manvolnum[]
+
 .Locks in the LRM & CRM
 [NOTE]
 Locks are provided by our distributed configuration file system (pmxcfs).
diff --git a/images/ha-manager-fsm.svg b/images/ha-manager-fsm.svg
new file mode 100644
index 000..bc39c14
--- /dev/null
+++ b/images/ha-manager-fsm.svg
@@ -0,0 +1,166 @@
+[166 lines of SVG markup, stripped of its XML tags by the list archive. The drawing shows the managers' FSM: the states error, stopped, started, migrate, fence and req_stop, connected by transitions such as "enable service", "disable service", "service runs", "service stopped", "service error", "migrate service", "service migrated", "node failed", "service recovered" and "service not recoverable".]
-- 
2.1.4




Re: [pve-devel] [PATCH qemu-server] forbid migration of template with local base image

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH qemu-server] restore: better error handling for vdisk deletion

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH qemu-server] forbid restore into existing template

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH v2 storage 4/5] add comments about LVM thin clones

2016-09-15 Thread Dietmar Maurer
this one does not apply - please can you rebase/resend?



Re: [pve-devel] [PATCH v2 storage 5/5] fix typo

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH v2 storage 3/5] harmonize list_images code

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH v2 storage 2/5] move check for existing clones into own method

2016-09-15 Thread Dietmar Maurer
applied, but changed name to $volume_is_base_and_used__no_lock



Re: [pve-devel] [PATCH v2 storage 1/5] remove unused method

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH manager] change overflowhandler of actionmenu to scroller

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH] qmp_snapshot_drive : add aiocontext

2016-09-15 Thread Dietmar Maurer
applied, thanks!



[pve-devel] [PATCH qemu-server] forbid migration of template with local base image

2016-09-15 Thread Fabian Grünbichler
---
Note: requires the linked clones / list_images cleanup series in pve-storage

 PVE/QemuMigrate.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 6415032..22a49ef 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -317,6 +317,9 @@ sub sync_disks {
die "non-migratable snapshot exists\n";
}
}
+
+	die "referenced by linked clone(s)\n"
+	    if PVE::Storage::volume_is_base_and_used($self->{storecfg}, $volid);
};
 
my $test_drive = sub {
-- 
2.1.4




[pve-devel] [PATCH qemu-server] restore: better error handling for vdisk deletion

2016-09-15 Thread Fabian Grünbichler
when restoring into an existing VM, we don't want to die
half-way through because we can't delete one of the existing
volumes. instead, warn about the deletion failure, but
continue anyway. the not deleted disk is then added as
unused automatically.
---
 PVE/QemuServer.pm | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index c3a53c9..dbd85a0 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5343,7 +5343,10 @@ sub restore_vma_archive {
# Note: only delete disk we want to restore
# other volumes will become unused
if ($virtdev_hash->{$ds}) {
-   PVE::Storage::vdisk_free($cfg, $volid);
+   eval { PVE::Storage::vdisk_free($cfg, $volid); };
+   if (my $err = $@) {
+   warn $err;
+   }
}
});
 
-- 
2.1.4
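
The warn-and-continue idiom from the hunk above, in isolation (a
minimal sketch; free_volume is a hypothetical stand-in for
PVE::Storage::vdisk_free):

    use strict;
    use warnings;

    # hypothetical stand-in for PVE::Storage::vdisk_free
    sub free_volume {
        my ($volid) = @_;
        die "cannot free '$volid': still in use\n" if $volid =~ /busy/;
    }

    for my $volid (qw(local:100/vm-100-disk-1.raw local:100/busy-disk.raw)) {
        eval { free_volume($volid); };
        if (my $err = $@) {
            warn $err;   # log the failure, but keep going
        }
    }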




[pve-devel] [PATCH qemu-server] forbid restore into existing template

2016-09-15 Thread Fabian Grünbichler
---
 PVE/API2/Qemu.pm | 4 
 1 file changed, 4 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 60d653f..482b8cd 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -490,6 +490,10 @@ __PACKAGE__->register_method({
 
die "unable to restore vm $vmid - vm is running\n"
if PVE::QemuServer::check_running($vmid);
+
+   die "unable to restore vm $vmid - vm is a template\n"
+   if PVE::QemuConfig->is_template($conf);
+
} else {
	    die "unable to restore vm $vmid - already existing on cluster node '$current_node'\n";
}
-- 
2.1.4




Re: [pve-devel] [PATCH ha-manager 3/3] Sim/Hardware: not warn if a not locked service gets unlocked

2016-09-15 Thread Dietmar Maurer
applied



[pve-devel] [PATCH v2 storage 4/5] add comments about LVM thin clones

2016-09-15 Thread Fabian Grünbichler
---
 PVE/Storage.pm   | 7 +++
 PVE/Storage/LvmThinPlugin.pm | 9 +
 2 files changed, 16 insertions(+)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 16a835f..6e2e7a6 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -321,6 +321,9 @@ sub parse_vmid {
 return int($vmid);
 }
 
+# NOTE: basename and basevmid are always undef for LVM-thin, where the
+# clone -> base reference is not encoded in the volume ID.
+# see note in PVE::Storage::LvmThinPlugin for details.
 sub parse_volname {
 my ($cfg, $volid) = @_;
 
@@ -367,6 +370,9 @@ my $volume_is_base_and_used = sub {
 return 0;
 };
 
+# NOTE: this check does not work for LVM-thin, where the clone -> base
+# reference is not encoded in the volume ID.
+# see note in PVE::Storage::LvmThinPlugin for details.
 sub volume_is_base_and_used {
 my ($cfg, $volid) = @_;
 
@@ -708,6 +714,7 @@ sub vdisk_free {
 
 # lock shared storage
 $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+   # LVM-thin allows deletion of still referenced base volumes!
die "base volume '$volname' is still in use by linked clones\n"
if &$volume_is_base_and_used($scfg, $storeid, $plugin, $volname);
 
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index c834a22..ccf5b7b 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -15,6 +15,12 @@ use PVE::JSONSchema qw(get_standard_option);
 # lvcreate -n pvepool -L 20G pve
 # lvconvert --type thin-pool pve/pvepool
 
+# NOTE: volumes which were created as linked clones of another base volume
+# are currently not tracking this relationship in their volume IDs. this is
+# generally not a problem, as LVM thin allows deletion of such base volumes
+# without affecting the linked clones. this leads to increased disk usage
+# when migrating LVM-thin volumes, which is normally prevented for linked clones.
+
 use base qw(PVE::Storage::LVMPlugin);
 
 sub type {
@@ -46,6 +52,9 @@ sub options {
 };
 }
 
+# NOTE: the fourth and fifth element of the returned array are always
+# undef, even if the volume is a linked clone of another volume. see note
+# at beginning of file.
 sub parse_volname {
 my ($class, $volname) = @_;
 
-- 
2.1.4
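
To illustrate the note: for storages such as RBD or ZFS the clone ->
base reference is visible in the volume name itself, for LVM-thin it
is not (illustrative volume IDs, not real plugin output):

    use strict;
    use warnings;

    my @volids = (
        'rbdpool:base-100-disk-1/vm-101-disk-1',   # linked clone, base encoded
        'thinpool:vm-101-disk-1',                  # linked clone, base invisible
    );

    for my $volid (@volids) {
        my (undef, $volname) = split /:/, $volid, 2;
        my ($basename) = $volname =~ m!^(base-\d+-[^/]+)/!;
        printf "%-40s basename=%s\n", $volid, $basename // 'undef';
    }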




[pve-devel] [PATCH v2 storage 3/5] harmonize list_images code

2016-09-15 Thread Fabian Grünbichler
---
 PVE/Storage/RBDPlugin.pm |  9 -
 PVE/Storage/ZFSPoolPlugin.pm | 19 +--
 2 files changed, 13 insertions(+), 15 deletions(-)

diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index a965ade..de8751a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -498,19 +498,18 @@ sub list_images {
my $owner = $info->{vmid};
 
if ($parent && $parent =~ m/^(base-\d+-\S+)\@__base__$/) {
-   $volname = "$1/$volname";
+   $info->{volid} = "$storeid:$1/$volname";
+   } else {
+   $info->{volid} = "$storeid:$volname";
}
 
-   my $volid = "$storeid:$volname";
-
if ($vollist) {
-   my $found = grep { $_ eq $volid } @$vollist;
+   my $found = grep { $_ eq $info->{volid} } @$vollist;
next if !$found;
} else {
next if defined ($vmid) && ($owner ne $vmid);
}
 
-   $info->{volid} = $volid;
$info->{format} = 'raw';
 
push @$res, $info;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 91101a2..77ed72c 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -248,27 +248,26 @@ sub list_images {
 
foreach my $image (keys %$dat) {
 
-   my $volname = $dat->{$image}->{name};
-   my $parent = $dat->{$image}->{parent};
+   my $info = $dat->{$image};
 
-   my $volid = undef;
-if ($parent && $parent =~ m/^(\S+)@(\S+)$/) {
+   my $volname = $info->{name};
+   my $parent = $info->{parent};
+   my $owner = $info->{vmid};
+
+   if ($parent && $parent =~ m/^(\S+)\@__base__$/) {
my ($basename) = ($1);
-   $volid = "$storeid:$basename/$volname";
+   $info->{volid} = "$storeid:$basename/$volname";
} else {
-   $volid = "$storeid:$volname";
+   $info->{volid} = "$storeid:$volname";
}
 
-   my $owner = $dat->{$volname}->{vmid};
if ($vollist) {
-   my $found = grep { $_ eq $volid } @$vollist;
+   my $found = grep { $_ eq $info->{volid} } @$vollist;
next if !$found;
} else {
next if defined ($vmid) && ($owner ne $vmid);
}
 
-   my $info = $dat->{$volname};
-   $info->{volid} = $volid;
push @$res, $info;
}
 }
-- 
2.1.4




[pve-devel] [PATCH v2 storage 2/5] move check for existing clones into own method

2016-09-15 Thread Fabian Grünbichler
and change its return type to boolean
---
 PVE/Storage.pm | 62 ++
 1 file changed, 41 insertions(+), 21 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 273d17d..16a835f 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -341,6 +341,44 @@ sub parse_volume_id {
 return PVE::Storage::Plugin::parse_volume_id($volid, $noerr);
 }
 
+my $volume_is_base_and_used = sub {
+my ($scfg, $storeid, $plugin, $volname) = @_;
+
+my ($vtype, $name, $vmid, undef, undef, $isBase, undef) =
+   $plugin->parse_volname($volname);
+
+if ($isBase) {
+   my $vollist = $plugin->list_images($storeid, $scfg);
+   foreach my $info (@$vollist) {
+   my (undef, $tmpvolname) = parse_volume_id($info->{volid});
+   my $basename = undef;
+   my $basevmid = undef;
+
+   eval{
+   (undef, undef, undef, $basename, $basevmid) =
+   $plugin->parse_volname($tmpvolname);
+   };
+
+	if ($basename && defined($basevmid) && $basevmid == $vmid && $basename eq $name) {
+   return 1;
+   }
+   }
+}
+return 0;
+};
+
+sub volume_is_base_and_used {
+my ($cfg, $volid) = @_;
+
+my ($storeid, $volname) = parse_volume_id($volid);
+my $scfg = storage_config($cfg, $storeid);
+my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+$plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+   return &$volume_is_base_and_used($scfg, $storeid, $plugin, $volname);
+});
+}
+
 # try to map a filesystem path to a volume identifier
 sub path_to_volume_id {
 my ($cfg, $path) = @_;
@@ -661,9 +699,7 @@ sub vdisk_free {
 my ($cfg, $volid) = @_;
 
 my ($storeid, $volname) = parse_volume_id($volid);
-
 my $scfg = storage_config($cfg, $storeid);
-
 my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
 
 activate_storage($cfg, $storeid);
@@ -672,27 +708,11 @@ sub vdisk_free {
 
 # lock shared storage
 $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+   die "base volume '$volname' is still in use by linked clones\n"
+   if &$volume_is_base_and_used($scfg, $storeid, $plugin, $volname);
 
-   my ($vtype, $name, $vmid, undef, undef, $isBase, $format) =
+   my (undef, undef, undef, undef, undef, $isBase, $format) =
$plugin->parse_volname($volname);
-   if ($isBase) {
-   my $vollist = $plugin->list_images($storeid, $scfg);
-   foreach my $info (@$vollist) {
-   my (undef, $tmpvolname) = parse_volume_id($info->{volid});
-   my $basename = undef;
-   my $basevmid = undef;
-
-   eval{
-   (undef, undef, undef, $basename, $basevmid) =
-   $plugin->parse_volname($tmpvolname);
-   };
-
-	    if ($basename && defined($basevmid) && $basevmid == $vmid && $basename eq $name) {
-   die "base volume '$volname' is still in use " .
-   "(used by '$tmpvolname')\n";
-   }
-   }
-   }
	$cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
 });
 
-- 
2.1.4




[pve-devel] [PATCH v2 storage 5/5] fix typo

2016-09-15 Thread Fabian Grünbichler
---
 PVE/Storage/Plugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 8089302..cdc89ba 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -444,7 +444,7 @@ sub create_base {
 my ($class, $storeid, $scfg, $volname) = @_;
 
 # this only works for file based storage types
-die "storage definintion has no path\n" if !$scfg->{path};
+die "storage definition has no path\n" if !$scfg->{path};
 
 my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
$class->parse_volname($volname);
-- 
2.1.4




[pve-devel] [PATCH v2 storage 0/5] linked clones / list_images cleanup

2016-09-15 Thread Fabian Grünbichler
changes to v1 only in patch #2:
- expose check including lock
- move actual check to private sub
- change return type to bool instead of first found child's volname

Fabian Grünbichler (5):
  remove unused method
  move check for existing clones into own method
  harmonize list_images code
  add comments about LVM thin clones
  fix typo

 PVE/Storage.pm | 76 +++---
 PVE/Storage/LvmThinPlugin.pm   |  9 +
 PVE/Storage/Plugin.pm  |  2 +-
 PVE/Storage/RBDPlugin.pm   |  9 +++--
 PVE/Storage/ZFSPoolPlugin.pm   | 19 +--
 test/run_test_zfspoolplugin.pl | 21 
 6 files changed, 79 insertions(+), 57 deletions(-)

-- 
2.1.4




[pve-devel] [PATCH v2 storage 1/5] remove unused method

2016-09-15 Thread Fabian Grünbichler
only used by test case, which should use what the rest of
the codebase uses as well
---
 PVE/Storage.pm | 21 -
 test/run_test_zfspoolplugin.pl | 21 +++--
 2 files changed, 15 insertions(+), 27 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 4fcda5a..273d17d 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -341,27 +341,6 @@ sub parse_volume_id {
 return PVE::Storage::Plugin::parse_volume_id($volid, $noerr);
 }
 
-sub volume_is_base {
-my ($cfg, $volid) = @_;
-
-my ($sid, $volname) = parse_volume_id($volid, 1);
-return 0 if !$sid;
-
-if (my $scfg = $cfg->{ids}->{$sid}) {
-   my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
-   my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
-   $plugin->parse_volname($volname);
-   return $isBase ? 1 : 0;
-} else {
-   # stale volid with undefined storage - so we can just guess
-   if ($volid =~ m/base-/) {
-   return 1;
-   }
-}
-
-return 0;
-}
-
 # try to map a filesystem path to a volume identifier
 sub path_to_volume_id {
 my ($cfg, $path) = @_;
diff --git a/test/run_test_zfspoolplugin.pl b/test/run_test_zfspoolplugin.pl
index 2512db9..53d4a15 100755
--- a/test/run_test_zfspoolplugin.pl
+++ b/test/run_test_zfspoolplugin.pl
@@ -818,7 +818,7 @@ my $test10 =sub {
 print "\nrun test10 \"volume_is_base\"\n";
 
 eval {
-   if (1 == PVE::Storage::volume_is_base($cfg, "$storagename:$vmdisk")) {
+   if (1 == volume_is_base($cfg, "$storagename:$vmdisk")) {
$count++;
warn "Test10 a: is no base";
}
@@ -830,7 +830,7 @@ my $test10 =sub {
 }
 
 eval {
-   if (0 == PVE::Storage::volume_is_base($cfg, "$storagename:$vmbase")) {
+   if (0 == volume_is_base($cfg, "$storagename:$vmbase")) {
$count++;
warn "Test10 b: is base";
}
@@ -842,7 +842,7 @@ my $test10 =sub {
 }
 
 eval {
-	if (1 == PVE::Storage::volume_is_base($cfg, "$storagename:$vmbase\/$vmlinked")) {
+   if (1 == volume_is_base($cfg, "$storagename:$vmbase\/$vmlinked")) {
$count++;
warn "Test10 c: is no base";
}
@@ -854,7 +854,7 @@ my $test10 =sub {
 }
 
 eval {
-   if (1 == PVE::Storage::volume_is_base($cfg, "$storagename:$ctdisk")) {
+   if (1 == volume_is_base($cfg, "$storagename:$ctdisk")) {
$count++;
warn "Test10 d: is no base";
}
@@ -866,7 +866,7 @@ my $test10 =sub {
 }
 
 eval {
-   if (0 == PVE::Storage::volume_is_base($cfg, "$storagename:$ctbase")) {
+   if (0 == volume_is_base($cfg, "$storagename:$ctbase")) {
$count++;
warn "Test10 e: is base";
}
@@ -878,7 +878,7 @@ my $test10 =sub {
 }
 
 eval {
-	if (1 == PVE::Storage::volume_is_base($cfg, "$storagename:$ctbase\/$ctlinked")) {
+   if (1 == volume_is_base($cfg, "$storagename:$ctbase\/$ctlinked")) {
$count++;
warn "Test10 f: is no base";
}
@@ -2640,6 +2640,15 @@ sub clean_up_zpool {
 unlink 'zpool.img';
 }
 
+sub volume_is_base {
+my ($cfg, $volid) = @_;
+
+my (undef, undef, undef, undef, undef, $isBase, undef) = PVE::Storage::parse_volname($cfg, $volid);
+
+return $isBase;
+}
+
+
 setup_zpool();
 
 my $time = time;
-- 
2.1.4




[pve-devel] [PATCH manager] change overflowhandler of actionmenu to scroller

2016-09-15 Thread Dominik Csapak
the menu overflowhandler has a few problems
(alignment, no action for some entries)
so we change it to type scroller

Signed-off-by: Dominik Csapak 
---
 www/manager6/panel/ConfigPanel.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/www/manager6/panel/ConfigPanel.js b/www/manager6/panel/ConfigPanel.js
index fe1a7d1..c8e9485 100644
--- a/www/manager6/panel/ConfigPanel.js
+++ b/www/manager6/panel/ConfigPanel.js
@@ -101,7 +101,7 @@ Ext.define('PVE.panel.Config', {
itemId: 'toolbar',
dock: 'top',
height: 36,
-   overflowHandler: 'menu'
+   overflowHandler: 'scroller'
 }],
 
 firstItem: '',
-- 
2.1.4




Re: [pve-devel] [PATCH ha-manager 2/3] change service state to error if no recovery node is available

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH v4 ha-manager 1/3] cleanup backup & mounted locks after recovery (fixes #1100)

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH pve-manager] check if dir /var/log/pveproxy exists.

2016-09-15 Thread Dietmar Maurer


> On September 15, 2016 at 12:25 PM Wolfgang Link  wrote:
> 
> 
> We will check on every start of pveproxy if the logdir is available.
> If not, we make a new one and give www-data permission to this dir.

And you want to do that with every directory?



[pve-devel] [PATCH pve-manager] check if dir /var/log/pveproxy exists.

2016-09-15 Thread Wolfgang Link
We will check on every start of pveproxy if the logdir is available.
If not, we make a new one and give www-data permission to this dir.

The reason for this is that
if someone removes the directory /var/log/pveproxy, pveproxy can't access the log.
This permits the user to use the GUI after a reboot or a restart of the service.
---
 bin/pveproxy | 8 
 1 file changed, 8 insertions(+)

diff --git a/bin/pveproxy b/bin/pveproxy
index 20e8f2a..6a624a8 100755
--- a/bin/pveproxy
+++ b/bin/pveproxy
@@ -19,6 +19,14 @@ $SIG{'__WARN__'} = sub {
 $@ = $err;
 };
 
+my $log = "/var/log/pveproxy";
+
+if (!-d $log) {
+mkdir $log;
+my (undef, undef, $uid, $gid) = getpwnam('www-data');
+chown $uid, $gid, $log;
+}
+
 my $prepare = sub {
 my $rundir="/var/run/pveproxy";
 if (mkdir($rundir, 0700)) { # only works at first start if we are root)
-- 
2.1.4
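
A side note on the getpwnam call above: in list context it returns
($name, $passwd, $uid, $gid, ...), which is why the patch picks out the
third and fourth fields. A minimal sketch (assumes a Debian-style
system where the www-data user exists):

    use strict;
    use warnings;

    my $user = 'www-data';
    my (undef, undef, $uid, $gid) = getpwnam($user)
        or die "user '$user' does not exist\n";
    printf "%s => uid=%d gid=%d\n", $user, $uid, $gid;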




Re: [pve-devel] [PATCH manager v2 1/2] add tooltip hashmap/generator for help button

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH manager v2 2/2] add missing documentation link to firewall log

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH pve-docs] Document that virtio-scsi is the recommended controller for PVE >= 4.3

2016-09-15 Thread Dietmar Maurer
applied



[pve-devel] [PATCH manager v2 1/2] add tooltip hashmap/generator for help button

2016-09-15 Thread Dominik Csapak
instead of manually setting the onlineHelpTooltip property,
we now have a method which maps documentation links to
titles

for now this uses a static hashmap, but in the future
we want to generate this by the pve-docs package

also, most of the subheaders can be generated instead of
stored, because they simply have each word capitalized
(e.g. '_container_network' => 'Container Network')

Signed-off-by: Dominik Csapak 
---
changes from v1:
use titles instead of command names
for CLI tools
 www/manager6/Utils.js | 64 +++
 www/manager6/button/HelpButton.js | 11 ++-
 www/manager6/panel/ConfigPanel.js | 12 ++--
 3 files changed, 76 insertions(+), 11 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index a31beb9..3c15135 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -63,6 +63,40 @@ var HostPort_match = new RegExp("^(" + IPV4_REGEXP + "|" + DnsName_REGEXP + ")(:
 var HostPortBrackets_match = new RegExp("^\\[(?:" + IPV6_REGEXP + "|" + IPV4_REGEXP + "|" + DnsName_REGEXP + ")\\](:\\d+)?$");
 var IP6_dotnotation_match = new RegExp("^" + IPV6_REGEXP + "(\\.\\d+)?$");
 
+var DocsPages = {
+'pve-admin-guide.html':'Proxmox VE Administration Guide',
+'chapter-sysadmin.html':'Host System Administration',
+'chapter-pvecm.html':'Cluster Manager',
+'chapter-pmxcfs.html':'Proxmox Cluster File System (pmxcfs)',
+'chapter-pvesm.html':'Proxmox VE Storage',
+'chapter-qm.html': 'Qemu/KVM Virtual Machines',
+'chapter-pve-firewall.html': 'Proxmox VE Firewall',
+'chapter-pveum.html': 'User Management',
+'chapter-pct.html': 'Proxmox Container Toolkit',
+'chapter-ha-manager.html': 'High Availability',
+'chapter-vzdump.html': 'Backup and Restore',
+'chapter-pve-faq.html': 'Frequently Asked Questions',
+'chapter-pve-bibliography.html': 'Bibliography',
+'qm.1.html': 'Qemu/KVM Virtual Machine Manager',
+'qmrestore.1.html': 'Restore QemuServer vzdump Backups',
+'pct.1.html': 'Tool to manage Linux Containers (LXC) on Proxmox VE',
+'pveam.1.html': 'Proxmox VE Appliance Manager',
+'pveceph.1.html': 'Manage CEPH Services on Proxmox VE Nodes',
+'pvecm.1.html': 'Proxmox VE Cluster Manager',
+'pveum.1.html': 'Proxmox VE User Manager',
+'pvesm.1.html': 'Proxmox VE Storage Manager',
+'pvesubscription.1.html': 'Proxmox VE Subscription Manager',
+'vzdump.1.html': 'Backup Utility for VMs and Containers',
+'ha-manager.1.html': 'Proxmox VE HA Manager',
+'index.html':'',
+'datacenter.cfg.5.html':'Proxmox VE Datacenter Configuration'
+};
+
+var DocsSubTitles = {
+'_vm_container_configuration':'VM/Container configuration',
+'_ip_aliases':'IP Aliases',
+'_ip_sets':'IP Sets'
+};
 Ext.define('PVE.Utils', { statics: {
 
 // this class only contains static functions
@@ -1275,6 +1309,36 @@ Ext.define('PVE.Utils', { statics: {
}
 
menu.showAt(event.getXY());
+},
+
+mapDocsUrlToTitle: function(url) {
+   var title, subtitle;
+   // if there is a subtitle
+   if (url.indexOf('#') !== -1) {
+   title = DocsPages[url.split('#')[0]] || '';
+   subtitle = DocsSubTitles[url.split('#')[1]];
+
+   // if we do not find the subtitle,
+   // capitalize the beginning of every word
+   // and replace '_' with ' '
+   // e.g.:
+   // '_my_text' -> 'My Text'
+   if (!subtitle) {
+	    subtitle = url.split('#')[1].replace(/_(\w)/gi, function(match,p1){
+   return ' ' + p1.toUpperCase();
+   }).slice(1);
+   }
+
+   if (title !== '') {
+   title += ' - ';
+   }
+
+   title += subtitle;
+   } else {
+   title = DocsPages[url] || '';
+   }
+
+   return title;
 }
 }});
 
diff --git a/www/manager6/button/HelpButton.js b/www/manager6/button/HelpButton.js
index 4c2e07a..5afed1f 100644
--- a/www/manager6/button/HelpButton.js
+++ b/www/manager6/button/HelpButton.js
@@ -21,7 +21,7 @@ Ext.define('PVE.button.Help', {
onPveShowHelp: function(helpLink) {
var me = this.getView();
if (me.listenToGlobalEvent === true) {
-   me.onlineHelp = helpLink;
+   me.setOnlineHelp(helpLink);
me.show();
}
},
@@ -32,6 +32,15 @@ Ext.define('PVE.button.Help', {
}
}
 },
+
+// this sets the link and
+// sets the tooltip text
+setOnlineHelp:function(link) {
+   var me = this;
+   me.onlineHelp = link;
+   me.setTooltip(PVE.Utils.mapDocsUrlToTitle(link));
+},
+
 handler: function() {
var me = this;
if (me.onlineHelp) {
diff --git a/www/manager6/panel/ConfigPanel.js b/www/manager6/panel/ConfigPanel.js
index fe1a7d1..84c3c10 100644
--- a/www/manager6/panel/ConfigPanel.js
+++ b/www/manager6/panel/ConfigPanel.js

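The fallback title generation the commit message describes, as a
standalone sketch in Perl (the actual implementation is the JavaScript
mapDocsUrlToTitle above):

    use strict;
    use warnings;

    # '_container_network' -> 'Container Network'
    sub anchor_to_title {
        my ($anchor) = @_;
        (my $title = $anchor) =~ s/_(\w)/' ' . uc($1)/ge;
        return substr($title, 1);   # drop the leading space, like .slice(1)
    }

    print anchor_to_title('_container_network'), "\n";   # Container Network
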
[pve-devel] [PATCH manager v2 2/2] add missing documentation link to firewall log

2016-09-15 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 www/manager6/node/Config.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/node/Config.js b/www/manager6/node/Config.js
index 8159a57..f987fa2 100644
--- a/www/manager6/node/Config.js
+++ b/www/manager6/node/Config.js
@@ -298,6 +298,7 @@ Ext.define('PVE.node.Config', {
title: gettext('Log'),
iconCls: 'fa fa-list',
groups: ['firewall'],
+   onlineHelp: 'chapter-pve-firewall.html',
url: '/api2/extjs/nodes/' + nodename + '/firewall/log',
itemId: 'firewall-fwlog'
},
-- 
2.1.4




Re: [pve-devel] [PATCH manager 1/2] virtio-scsi-pci as default SCSI for new VMS fix #1106

2016-09-15 Thread Dietmar Maurer
applied



Re: [pve-devel] [PATCH manager 2/2] Use SCSI controller as default for l26 guests fix #1105

2016-09-15 Thread Dietmar Maurer
applied



[pve-devel] [PATCH] qmp_snapshot_drive : add aiocontext

2016-09-15 Thread Alexandre Derumier
This fix internal snapshot for drive with iothread enabled

Signed-off-by: Alexandre Derumier 
---
 .../0056-qmp_snapshot_drive-add-aiocontext.patch   | 65 ++
 debian/patches/series  |  1 +
 2 files changed, 66 insertions(+)
 create mode 100644 debian/patches/pve/0056-qmp_snapshot_drive-add-aiocontext.patch

diff --git a/debian/patches/pve/0056-qmp_snapshot_drive-add-aiocontext.patch b/debian/patches/pve/0056-qmp_snapshot_drive-add-aiocontext.patch
new file mode 100644
index 000..8c2d9c9
--- /dev/null
+++ b/debian/patches/pve/0056-qmp_snapshot_drive-add-aiocontext.patch
@@ -0,0 +1,65 @@
+From 61164c3693415d6dce39a7b0cbde43b184081243 Mon Sep 17 00:00:00 2001
+From: Alexandre Derumier 
+Date: Tue, 13 Sep 2016 01:57:56 +0200
+Subject: [PATCH] qmp_snapshot_drive:  add aiocontext
+
+Signed-off-by: Alexandre Derumier 
+---
+ savevm-async.c | 15 +++
+ 1 file changed, 11 insertions(+), 4 deletions(-)
+
+diff --git a/savevm-async.c b/savevm-async.c
+index 6a2266c..308ac61 100644
+--- a/savevm-async.c
++++ b/savevm-async.c
+@@ -345,6 +345,7 @@ void qmp_snapshot_drive(const char *device, const char *name, Error **errp)
+ BlockBackend *blk;
+ BlockDriverState *bs;
+ QEMUSnapshotInfo sn1, *sn = &sn1;
++AioContext *aio_context;
+ int ret;
+ #ifdef _WIN32
+ struct _timeb tb;
+@@ -371,20 +372,23 @@ void qmp_snapshot_drive(const char *device, const char *name, Error **errp)
+ return;
+ }
+ 
++aio_context = bdrv_get_aio_context(bs);
++aio_context_acquire(aio_context);
++
+ if (bdrv_is_read_only(bs)) {
+ error_setg(errp, "Node '%s' is read only", device);
+-return;
++goto out;
+ }
+ 
+ if (!bdrv_can_snapshot(bs)) {
+ error_setg(errp, QERR_UNSUPPORTED);
+-return;
++goto out;
+ }
+ 
+ if (bdrv_snapshot_find(bs, sn, name) >= 0) {
+ error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+   "snapshot '%s' already exists", name);
+-return;
++goto out;
+ }
+ 
+ sn = &sn1;
+@@ -409,8 +413,11 @@ void qmp_snapshot_drive(const char *device, const char *name, Error **errp)
+ if (ret < 0) {
+ error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+   "Error while creating snapshot on '%s'\n", device);
+-return;
++goto out;
+ }
++
++out:
++aio_context_release(aio_context);
+ }
+ 
+ void qmp_delete_drive_snapshot(const char *device, const char *name,
+-- 
+2.1.4
+
diff --git a/debian/patches/series b/debian/patches/series
index 5ad7435..d1470ba 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -53,6 +53,7 @@ pve/0052-vnc-refactor-to-QIOChannelSocket.patch
 pve/0053-vma-use-BlockBackend-on-extract.patch
 pve/0054-rbd-disable-rbd_cache_writethrough_until_flush-with-.patch
 pve/0055-enable-cache-unsafe-for-vma-extract_content-and-qmp_.patch
+pve/0056-qmp_snapshot_drive-add-aiocontext.patch
 #see https://bugs.launchpad.net/qemu/+bug/1488363?comments=all
 extra/0001-Revert-target-i386-disable-LINT0-after-reset.patch
 extra/0002-scsi-esp-fix-migration.patch
-- 
2.1.4



[pve-devel] [PATCH v4 ha-manager 1/3] cleanup backup & mounted locks after recovery (fixes #1100)

2016-09-15 Thread Thomas Lamprecht
This cleans up the backup and mounted locks after a service is
recovered from a failed node, else it may not start if a locked
action occurred during the node failure.
We allow deletion of backup and mounted locks as it is safe to do
so if the node which hosted the locked service is now fenced.
We do not allow snapshot lock deletion as this needs more manual
clean up, also they are normally triggered manually.
Further, ignore migration locks for now; we should think that over,
but as it is a manually triggered action it should be OK for now
to not auto delete it.

We cannot remove locks via the remove_lock method provided by
PVE::AbstractConfig, as this method is well behaved and does not
allow removing locks from VMs/CTs located on another node.  We also
do not want to adapt this method to allow arbitrary lock removal,
independent of which node the config is located on, as this could
cause misuse in the future. After all, one of our base principles is
that the node owns its VMs/CTs (and their configs) and only the owner
itself may change the status of a VM/CT.

The HA manager needs to be able to change the state of services
when a node failed and is also allowed to do so, but only if the
node is fenced and we need to recover a service from it.

So we (re)implement the remove lock functionality in the resource
plugins.
We call that only if a node was fenced, and only *prior to* stealing
the service. After all, our implication for removing a lock is that
the owner (the node) is fenced. After stealing the service we have
already changed the owner, and the new owner is *not* fenced and thus
our implication does not hold anymore - the new owner may already do
some stuff with the service (config changes, etc.)

Add the respective log.expect output from the added test to enable
regression testing this issue.

Signed-off-by: Thomas Lamprecht 
---

changes since v3:
* rebased on current master

 src/PVE/HA/Manager.pm| 22 
 src/PVE/HA/Resources.pm  |  6 +
 src/PVE/HA/Resources/PVECT.pm| 23 +
 src/PVE/HA/Resources/PVEVM.pm| 23 +
 src/PVE/HA/Sim/Resources.pm  | 15 +++
 src/test/test-locked-service1/log.expect | 44 
 6 files changed, 133 insertions(+)
 create mode 100644 src/test/test-locked-service1/log.expect

diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index c60df7c..e6dab7a 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -238,6 +238,26 @@ my $change_service_state = sub {
" to '${new_state}'$text_state");
 };
 
+# clean up a possible bad state from a recovered service to allow its start
+my $fence_recovery_cleanup = sub {
+my ($self, $sid, $fenced_node) = @_;
+
+my $haenv = $self->{haenv};
+
+my (undef, $type, $id) = PVE::HA::Tools::parse_sid($sid);
+my $plugin = PVE::HA::Resources->lookup($type);
+
+# should not happen
+die "unknown resource type '$type'" if !$plugin;
+
+# locks may block recovery, cleanup those which are safe to remove after fencing
+my $removable_locks = ['backup', 'mounted'];
+if (my $removed_lock = $plugin->remove_locks($haenv, $id, $removable_locks, $fenced_node)) {
+	$haenv->log('warning', "removed leftover lock '$removed_lock' from recovered " .
+	    "service '$sid' to allow its start.");
+}
+};
+
 # after a node was fenced this recovers the service to a new node
 my $recover_fenced_service = sub {
 my ($self, $sid, $cd) = @_;
@@ -264,6 +284,8 @@ my $recover_fenced_service = sub {
$haenv->log('info', "recover service '$sid' from fenced node " .
"'$fenced_node' to node '$recovery_node'");
 
+   &$fence_recovery_cleanup($self, $sid, $fenced_node);
+
$haenv->steal_service($sid, $sd->{node}, $recovery_node);
 
# $sd *is normally read-only*, fencing is the exception
diff --git a/src/PVE/HA/Resources.pm b/src/PVE/HA/Resources.pm
index 3836fc8..96d2f8f 100644
--- a/src/PVE/HA/Resources.pm
+++ b/src/PVE/HA/Resources.pm
@@ -124,6 +124,12 @@ sub check_running {
 die "implement in subclass";
 }
 
+sub remove_locks {
+my ($self, $haenv, $id, $locks, $service_node) = @_;
+
+die "implement in subclass";
+}
+
 
 # package PVE::HA::Resources::IPAddr;
 
diff --git a/src/PVE/HA/Resources/PVECT.pm b/src/PVE/HA/Resources/PVECT.pm
index b6ebe2f..d1312ab 100644
--- a/src/PVE/HA/Resources/PVECT.pm
+++ b/src/PVE/HA/Resources/PVECT.pm
@@ -114,4 +114,27 @@ sub check_running {
 return PVE::LXC::check_running($vmid);
 }
 
+sub remove_locks {
+my ($self, $haenv, $id, $locks, $service_node) = @_;
+
+$service_node = $service_node || $haenv->nodename();
+
+my $conf = PVE::LXC::Config->load_config($id, $service_node);
+
+return undef if !defined($conf->{lock});
+
+foreach my $lock (@$locks) {
+   if ($conf->{lock} eq $lock) {
+   delete 
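
A minimal sketch (not the PVE code; the real PVECT/PVEVM variants are
truncated above) of the remove_locks contract the commit message
describes: return the name of the removed lock, or undef if none of
the removable locks is set:

    use strict;
    use warnings;

    # $conf: parsed service config hash; $locks: arrayref of lock names
    # which are safe to remove after the owning node was fenced
    sub remove_locks_sketch {
        my ($conf, $locks) = @_;

        return undef if !defined($conf->{lock});

        for my $lock (@$locks) {
            if ($conf->{lock} eq $lock) {
                delete $conf->{lock};   # real code must also write the config back
                return $lock;
            }
        }
        return undef;
    }

    my $conf = { lock => 'backup' };
    printf "removed: %s\n",
        remove_locks_sketch($conf, ['backup', 'mounted']) // 'none';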

[pve-devel] [PATCH ha-manager 2/3] change service state to error if no recovery node is available

2016-09-15 Thread Thomas Lamprecht
else we will try again endlessly if the service has no other
possible node where it can run, e.g. if it is restricted.

This avoids various problems, especially if a service is configured
to just one node we could never get the service out of the fence
state again without manually hacking the manager status.

Add a regression test for this.

Signed-off-by: Thomas Lamprecht 
---

new patch

 src/PVE/HA/Manager.pm   |  3 ++-
 src/test/test-recovery1/README  |  4 
 src/test/test-recovery1/cmdlist |  4 
 src/test/test-recovery1/groups  |  4 
 src/test/test-recovery1/hardware_status |  5 +
 src/test/test-recovery1/log.expect  | 38 +
 src/test/test-recovery1/manager_status  |  1 +
 src/test/test-recovery1/service_config  |  3 +++
 8 files changed, 61 insertions(+), 1 deletion(-)
 create mode 100644 src/test/test-recovery1/README
 create mode 100644 src/test/test-recovery1/cmdlist
 create mode 100644 src/test/test-recovery1/groups
 create mode 100644 src/test/test-recovery1/hardware_status
 create mode 100644 src/test/test-recovery1/log.expect
 create mode 100644 src/test/test-recovery1/manager_status
 create mode 100644 src/test/test-recovery1/service_config

diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index e6dab7a..e58fc0b 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -292,9 +292,10 @@ my $recover_fenced_service = sub {
$cd->{node} = $sd->{node} = $recovery_node;
&$change_service_state($self, $sid, 'started', node => $recovery_node);
 } else {
-   # no node found, let the service in 'fence' state and try again
+   # no possible node found, cannot recover
$haenv->log('err', "recovering service '$sid' from fenced node " .
"'$fenced_node' failed, no recovery node found");
+   &$change_service_state($self, $sid, 'error');
 }
 };
 
diff --git a/src/test/test-recovery1/README b/src/test/test-recovery1/README
new file mode 100644
index 000..8753ad2
--- /dev/null
+++ b/src/test/test-recovery1/README
@@ -0,0 +1,4 @@
+Test what happens if a service needs to get recovered but
+select_service_node cannot return any possible node.
+
+Avoid endless loops by placing the service in the error state.
diff --git a/src/test/test-recovery1/cmdlist b/src/test/test-recovery1/cmdlist
new file mode 100644
index 000..4e4f36d
--- /dev/null
+++ b/src/test/test-recovery1/cmdlist
@@ -0,0 +1,4 @@
+[
+[ "power node1 on", "power node2 on", "power node3 on"],
+[ "network node2 off" ]
+]
diff --git a/src/test/test-recovery1/groups b/src/test/test-recovery1/groups
new file mode 100644
index 000..06c7f76
--- /dev/null
+++ b/src/test/test-recovery1/groups
@@ -0,0 +1,4 @@
+group: prefer_node2
+   nodes node2
+   restricted 1
+
diff --git a/src/test/test-recovery1/hardware_status b/src/test/test-recovery1/hardware_status
new file mode 100644
index 000..451beb1
--- /dev/null
+++ b/src/test/test-recovery1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-recovery1/log.expect b/src/test/test-recovery1/log.expect
new file mode 100644
index 000..ffd732a
--- /dev/null
+++ b/src/test/test-recovery1/log.expect
@@ -0,0 +1,38 @@
+info  0 hardware: starting simulation
+info 20 cmdlist: execute power node1 on
+info 20 node1/crm: status change startup => wait_for_quorum
+info 20 node1/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node2 on
+info 20 node2/crm: status change startup => wait_for_quorum
+info 20 node2/lrm: status change startup => wait_for_agent_lock
+info 20 cmdlist: execute power node3 on
+info 20 node3/crm: status change startup => wait_for_quorum
+info 20 node3/lrm: status change startup => wait_for_agent_lock
+info 20 node1/crm: got lock 'ha_manager_lock'
+info 20 node1/crm: status change wait_for_quorum => master
+info 20 node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info 20 node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info 20 node1/crm: adding new service 'vm:102' on node 'node2'
+info 22 node2/crm: status change wait_for_quorum => slave
+info 23 node2/lrm: got lock 'ha_agent_node2_lock'
+info 23 node2/lrm: status change wait_for_agent_lock => active
+info 23 node2/lrm: starting service vm:102
+info 23 node2/lrm: service status vm:102 started
+info 24 node3/crm: status change wait_for_quorum => slave
+info 120 cmdlist: execute network node2 off
+info 120 node1/crm: node 'node2': state changed from 'online' =>

[pve-devel] [PATCH ha-manager 3/3] Sim/Hardware: not warn if a not locked service gets unlocked

2016-09-15 Thread Thomas Lamprecht
This adds warnings without value to the regression test stderr
output; it does not affect the test itself but can be annoying.

Signed-off-by: Thomas Lamprecht 
---

trivial fix

 src/PVE/HA/Sim/Hardware.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index 2c6b8b6..383b10e 100644
--- a/src/PVE/HA/Sim/Hardware.pm
+++ b/src/PVE/HA/Sim/Hardware.pm
@@ -226,7 +226,6 @@ sub unlock_service {
 die "no such service '$sid'\n" if !$conf->{$sid};
 
 if (!defined($conf->{$sid}->{lock})) {
-   warn "service '$sid' not locked\n";
return undef;
 }
 
-- 
2.1.4




[pve-devel] [PATCH pve-docs] Document that virtio-scsi is the recommended controller for PVE >= 4.3

2016-09-15 Thread Emmanuel Kasper
---
 qm.adoc | 23 ---
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index 375cc39..f77bd9d 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -122,18 +122,19 @@ on this controller.
 design, allowing higher throughput and a greater number of devices to be
 connected. You can connect up to 6 devices on this controller.
 
-* the *SCSI* controller, designed in 1985, is commonly found on server
-grade hardware, and can connect up to 14 storage devices. {pve} emulates by
-default a LSI 53C895A controller.
-
-* The *Virtio* controller is a generic paravirtualized controller, and is the
-recommended setting if you aim for performance. To use this controller, the OS
-need to have special drivers which may be included in your installation ISO or
-not. Linux distributions have support for the Virtio controller since 2010, and
+* the *SCSI* controller, designed in 1985, is commonly found on server grade
+hardware, and can connect up to 14 storage devices. {pve} emulates by default a
+LSI 53C895A controller. +
+A SCSI controller of type _Virtio_ is the recommended setting if you aim for
+performance and is automatically selected for newly created Linux VMs since
+{pve} 4.3. Linux distributions have support for this controller since 2012, and
 FreeBSD since 2014. For Windows OSes, you need to provide an extra iso
-containing the Virtio drivers during the installation.
-// see: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
-You can connect up to 16 devices on this controller.
+containing the drivers during the installation.
+// https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
+
+* The *Virtio* controller, also called virtio-blk to distinguish from
+the Virtio SCSI controller, is an older type of paravirtualized controller
+which has been superseded in features by the Virtio SCSI Controller.
 
 On each controller you attach a number of emulated hard disks, which are backed
 by a file or a block device residing in the configured storage. The choice of
-- 
2.1.4




Re: [pve-devel] [PATCH ha-manager v3 5/6] add check if a service is relocatable and assert it on recovery

2016-09-15 Thread Dietmar Maurer
> Add a basic check if a service is relocatable, i.e. is bound to
> local resources from a node.

I would prefer to restrict possible nodes in advance, for
example in get_service_group?



Re: [pve-devel] [PATCH ha-manager v3 4/6] add possibility to simulate locks from services

2016-09-15 Thread Dietmar Maurer
applied



[pve-devel] [PATCH manager 1/2] add tooltip hashmap/generator for help button

2016-09-15 Thread Dominik Csapak
instead of manually setting the onlineHelpTooltip property,
we now have a method which maps documentation links to
titles

for now this uses a static hashmap, but in the future
we want to generate this by the pve-docs package

also, most of the subheaders can be generated instead of
stored, because they simply have each word capitalized
(e.g. '_container_network' => 'Container Network')

Signed-off-by: Dominik Csapak 
---
 www/manager6/Utils.js | 64 +++
 www/manager6/button/HelpButton.js | 11 ++-
 www/manager6/panel/ConfigPanel.js | 12 ++--
 3 files changed, 76 insertions(+), 11 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index a31beb9..07c3ef7 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -63,6 +63,40 @@ var HostPort_match = new RegExp("^(" + IPV4_REGEXP + "|" + DnsName_REGEXP + ")(:
 var HostPortBrackets_match = new RegExp("^\\[(?:" + IPV6_REGEXP + "|" + IPV4_REGEXP + "|" + DnsName_REGEXP + ")\\](:\\d+)?$");
 var IP6_dotnotation_match = new RegExp("^" + IPV6_REGEXP + "(\\.\\d+)?$");
 
+var DocsPages = {
+'pve-admin-guide.html':'Proxmox VE Administration Guide',
+'chapter-sysadmin.html':'Host System Administration',
+'chapter-pvecm.html':'Cluster Manager',
+'chapter-pmxcfs.html':'Proxmox Cluster File System (pmxcfs)',
+'chapter-pvesm.html':'Proxmox VE Storage',
+'chapter-qm.html': 'Qemu/KVM Virtual Machines',
+'chapter-pve-firewall.html': 'Proxmox VE Firewall',
+'chapter-pveum.html': 'User Management',
+'chapter-pct.html': 'Proxmox Container Toolkit',
+'chapter-ha-manager.html': 'High Availability',
+'chapter-vzdump.html': 'Backup and Restore',
+'chapter-pve-faq.html': 'Frequently Asked Questions',
+'chapter-pve-bibliography.html': 'Bibliography',
+'qm.1.html': 'qm',
+'qmrestore.1.html': 'qmrestore',
+'pct.1.html': 'pct',
+'pveam.1.html': 'pveam',
+'pveceph.1.html': 'pveceph',
+'pvecm.1.html': 'pvecm',
+'pveum.1.html': 'pveum',
+'pvesm.1.html': 'pvesm',
+'pvesubscription.1.html': 'pvesubscription',
+'vzdump.1.html': 'vzdump',
+'ha-manager.1.html': 'ha-manager',
+'index.html':'',
+'datacenter.cfg.5.html':'Proxmox VE Datacenter Configuration'
+};
+
+var DocsSubTitles = {
+'_vm_container_configuration':'VM/Container configuration',
+'_ip_aliases':'IP Aliases',
+'_ip_sets':'IP Sets'
+};
 Ext.define('PVE.Utils', { statics: {
 
 // this class only contains static functions
@@ -1275,6 +1309,36 @@ Ext.define('PVE.Utils', { statics: {
}
 
menu.showAt(event.getXY());
+},
+
+mapDocsUrlToTitle: function(url) {
+   var title, subtitle;
+   // if there is a subtitle
+   if (url.indexOf('#') !== -1) {
+   title = DocsPages[url.split('#')[0]] || '';
+   subtitle = DocsSubTitles[url.split('#')[1]];
+
+   // if we do not find the subtitle,
+   // capitalize the beginning of every word
+   // and replace '_' with ' '
+   // e.g.:
+   // '_my_text' -> 'My Text'
+   if (!subtitle) {
+	    subtitle = url.split('#')[1].replace(/_(\w)/gi, function(match,p1){
+   return ' ' + p1.toUpperCase();
+   }).slice(1);
+   }
+
+   if (title !== '') {
+   title += ' - ';
+   }
+
+   title += subtitle;
+   } else {
+   title = DocsPages[url] || '';
+   }
+
+   return title;
 }
 }});
 
diff --git a/www/manager6/button/HelpButton.js b/www/manager6/button/HelpButton.js
index 4c2e07a..5afed1f 100644
--- a/www/manager6/button/HelpButton.js
+++ b/www/manager6/button/HelpButton.js
@@ -21,7 +21,7 @@ Ext.define('PVE.button.Help', {
onPveShowHelp: function(helpLink) {
var me = this.getView();
if (me.listenToGlobalEvent === true) {
-   me.onlineHelp = helpLink;
+   me.setOnlineHelp(helpLink);
me.show();
}
},
@@ -32,6 +32,15 @@ Ext.define('PVE.button.Help', {
}
}
 },
+
+// this sets the link and
+// sets the tooltip text
+setOnlineHelp:function(link) {
+   var me = this;
+   me.onlineHelp = link;
+   me.setTooltip(PVE.Utils.mapDocsUrlToTitle(link));
+},
+
 handler: function() {
var me = this;
if (me.onlineHelp) {
diff --git a/www/manager6/panel/ConfigPanel.js b/www/manager6/panel/ConfigPanel.js
index fe1a7d1..84c3c10 100644
--- a/www/manager6/panel/ConfigPanel.js
+++ b/www/manager6/panel/ConfigPanel.js
@@ -113,14 +113,7 @@ Ext.define('PVE.panel.Config', {
if (me.savedItems[cardid]) {
var curcard = me.getLayout().getActiveItem();
var newcard = me.add(me.savedItems[cardid]);
-   me.helpButton.onlineHelp = newcard.onlineHelp || me.onlineHelp;
-   var tooltip = '';
-   if 

[pve-devel] [PATCH manager 2/2] add missing documentation link to firewall log

2016-09-15 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 www/manager6/node/Config.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/node/Config.js b/www/manager6/node/Config.js
index 8159a57..f987fa2 100644
--- a/www/manager6/node/Config.js
+++ b/www/manager6/node/Config.js
@@ -298,6 +298,7 @@ Ext.define('PVE.node.Config', {
title: gettext('Log'),
iconCls: 'fa fa-list',
groups: ['firewall'],
+   onlineHelp: 'chapter-pve-firewall.html',
url: '/api2/extjs/nodes/' + nodename + '/firewall/log',
itemId: 'firewall-fwlog'
},
-- 
2.1.4




[pve-devel] [PATCH manager 1/2] virtio-scsi-pci as default SCSI for new VMS fix #1106

2016-09-15 Thread Emmanuel Kasper
---
 www/manager6/qemu/HDEdit.js | 4 ++++
 www/manager6/qemu/OSDefaults.js | 3 ++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/www/manager6/qemu/HDEdit.js b/www/manager6/qemu/HDEdit.js
index c86ab44..312b218 100644
--- a/www/manager6/qemu/HDEdit.js
+++ b/www/manager6/qemu/HDEdit.js
@@ -111,6 +111,10 @@ Ext.define('PVE.qemu.HDInputPanel', {

params[confid] = PVE.Parser.printQemuDrive(me.drive);

+   if (me.insideWizard) {
+   params.scsihw = PVE.qemu.OSDefaults.generic.scsihw;
+   }
+
return params;  
 },
 
diff --git a/www/manager6/qemu/OSDefaults.js b/www/manager6/qemu/OSDefaults.js
index 3a834fa..7ebfef0 100644
--- a/www/manager6/qemu/OSDefaults.js
+++ b/www/manager6/qemu/OSDefaults.js
@@ -36,7 +36,8 @@ Ext.define('PVE.qemu.OSDefaults', {
// default values
me.generic = {
busType: 'ide',
-   networkCard: 'e1000'
+   networkCard: 'e1000',
+   scsihw: 'virtio-scsi-pci'
};
 
// both of them are in kernel since 2.6.25
-- 
2.1.4




[pve-devel] [PATCH manager 2/2] Use SCSI controller as default for l26 guests fix #1105

2016-09-15 Thread Emmanuel Kasper
NB: This is only for newly created VMs.
---
 www/manager6/qemu/OSDefaults.js | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/www/manager6/qemu/OSDefaults.js b/www/manager6/qemu/OSDefaults.js
index 7ebfef0..dc13eeb 100644
--- a/www/manager6/qemu/OSDefaults.js
+++ b/www/manager6/qemu/OSDefaults.js
@@ -40,11 +40,12 @@ Ext.define('PVE.qemu.OSDefaults', {
scsihw: 'virtio-scsi-pci'
};
 
-   // both of them are in kernel since 2.6.25
+   // virtio-net is in kernel since 2.6.25
+   // virtio-scsi since 3.2 but backported in RHEL with 2.6 kernel
addOS({
pveOS: 'l26',
parent : 'generic',
-   busType: 'virtio',
+   busType: 'scsi',
networkCard: 'virtio'
});
 
-- 
2.1.4

