Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Dietmar Maurer

> Another possibility : create a new vmid on target host, with his own config. 
> Like this user can manage old and new vm after migration . and If something
> crash during the migration,this is more easier.
> 
> The only drawback is that we can't mix local && shared storage in this case.
> (but I think that if user have shared storage, he don't have need of local
> storage migration)

Not sure if this is a good restriction (mixing local and shared storage may be a major use case).



Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Alexandre DERUMIER
>>Another possibility : create a new vmid on target host, with his own config. 
>>Like this user can manage old and new vm after migration . and If something 
>>crash during the migration,this is more easier

We could adapt vm_clone to be able to use a remote node's local storage,
and then reuse vm_clone in the VM migration.


- Mail original -
De: "aderumier" 
À: "pve-devel" 
Envoyé: Mardi 18 Octobre 2016 01:05:16
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

>>setting new drive in pending until the whole migration is done, so user can 
>>use revert ? 
>>I think it should done manually by user, because maybe user don't want to 
>>loose new datas written to target storage. 


Another possibility : create a new vmid on target host, with his own config. 
Like this user can manage old and new vm after migration . and If something 
crash during the migration,this is more easier. 

The only drawback is that we can't mix local && shared storage in this case. 
(but I think that if user have shared storage, he don't have need of local 
storage migration) 



- Mail original - 
De: "aderumier"  
À: "Wolfgang Bumiller"  
Cc: "pve-devel"  
Envoyé: Lundi 17 Octobre 2016 15:52:14 
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration 

>>So we'd need a way to switch back, then again the remote side might be 
>>dead at this point... we could try though? 

setting new drive in pending until the whole migration is done, so user can use 
revert ? 
I think it should done manually by user, because maybe user don't want to loose 
new datas written to target storage. 


About multiple disks, I thinked that we can't use transactions, 
but reading the livirt mailing it seem possible to launch multiple drive-mirror 
in parallel 
https://www.redhat.com/archives/libvir-list/2015-April/msg00727.html 



- Mail original - 
De: "Wolfgang Bumiller"  
À: "aderumier"  
Cc: "pve-devel"  
Envoyé: Lundi 17 Octobre 2016 15:41:49 
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration 

On Mon, Oct 17, 2016 at 03:33:38PM +0200, Alexandre DERUMIER wrote: 
> >>considering some of the code looks like it's prepared for multiple 
> >>disks, I wonder if the remote side should send a mapping containing the 
> >>old + new names? 
> 
> yes, I think it can prepare output for multiple disks, it'll be easier later. 
> Maybe simply send multiple lines, 1 by disk ? 
> 
> 
> 
> > + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, 
> > $nbd_uri, $vmid); 
> > + #update config 
> 
> >>As far as I can see you have qemu running on the remote side already, 
> >>since you use hmp/mon commnads to export the nbd devices, so it seems 
> >>it would be a better choice to update this after the migration has 
> >>completed, and change the cleanup code below to detach the nbd drive. 
> 
> The problem is that when drive_mirror is finished, the source vm write to the 
> remote disk. 

So we'd need a way to switch back, then again the remote side might be 
dead at this point... we could try though? 

> So I think it's better to update config, to have the new disk in config. 
> If source host die before end of livemigration, user only need to move the 
> config file to destination host. 
> 
> 
> Alternativly, I was thinking to use pending to store new local disk path, and 
> switch at the end of the live migration. 
> Maybe add a node=xxx option to drive during the migration, to known where the 
> local disk is located exactly. 
> I'm not sure... 



Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Alexandre DERUMIER
>>setting new drive in pending until the whole migration is done, so user can 
>>use revert ?
>>I think it should done manually by user, because maybe user don't want to 
>>loose new datas written to target storage.


Another possibility: create a new vmid on the target host, with its own config.
This way the user can manage both the old and the new VM after the migration,
and if something crashes during the migration, recovery is easier.

The only drawback is that we can't mix local && shared storage in this case.
(But I think that a user who has shared storage doesn't need local storage
migration.)



- Mail original -
De: "aderumier" 
À: "Wolfgang Bumiller" 
Cc: "pve-devel" 
Envoyé: Lundi 17 Octobre 2016 15:52:14
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

>>So we'd need a way to switch back, then again the remote side might be 
>>dead at this point... we could try though? 

setting new drive in pending until the whole migration is done, so user can use 
revert ? 
I think it should done manually by user, because maybe user don't want to loose 
new datas written to target storage. 


About multiple disks, I thinked that we can't use transactions, 
but reading the livirt mailing it seem possible to launch multiple drive-mirror 
in parallel 
https://www.redhat.com/archives/libvir-list/2015-April/msg00727.html 



- Mail original - 
De: "Wolfgang Bumiller"  
À: "aderumier"  
Cc: "pve-devel"  
Envoyé: Lundi 17 Octobre 2016 15:41:49 
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration 

On Mon, Oct 17, 2016 at 03:33:38PM +0200, Alexandre DERUMIER wrote: 
> >>considering some of the code looks like it's prepared for multiple 
> >>disks, I wonder if the remote side should send a mapping containing the 
> >>old + new names? 
> 
> yes, I think it can prepare output for multiple disks, it'll be easier later. 
> Maybe simply send multiple lines, 1 by disk ? 
> 
> 
> 
> > + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, 
> > $nbd_uri, $vmid); 
> > + #update config 
> 
> >>As far as I can see you have qemu running on the remote side already, 
> >>since you use hmp/mon commnads to export the nbd devices, so it seems 
> >>it would be a better choice to update this after the migration has 
> >>completed, and change the cleanup code below to detach the nbd drive. 
> 
> The problem is that when drive_mirror is finished, the source vm write to the 
> remote disk. 

So we'd need a way to switch back, then again the remote side might be 
dead at this point... we could try though? 

> So I think it's better to update config, to have the new disk in config. 
> If source host die before end of livemigration, user only need to move the 
> config file to destination host. 
> 
> 
> Alternativly, I was thinking to use pending to store new local disk path, and 
> switch at the end of the live migration. 
> Maybe add a node=xxx option to drive during the migration, to known where the 
> local disk is located exactly. 
> I'm not sure... 



[pve-devel] [RFC] allow automatic VMID selection on create

2016-10-17 Thread Thomas Lamprecht
Using a dot (.) as the VMID will automatically select an available one;
this can be helpful for mass VM creation or may simply be
convenient.

Signed-off-by: Thomas Lamprecht 
---
 PVE/API2/Qemu.pm | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 21fbebb..0665b4c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -368,7 +368,7 @@ __PACKAGE__->register_method({
properties => PVE::QemuServer::json_config_properties(
{
node => get_standard_option('pve-node'),
-   vmid => get_standard_option('pve-vmid', { completion => \&PVE::Cluster::complete_next_vmid }),
+   vmid => get_standard_option('pve-vmid-optional', { completion => \&PVE::Cluster::complete_next_vmid }),
archive => {
description => "The backup file.",
type => 'string',
@@ -414,6 +414,11 @@ __PACKAGE__->register_method({
 
my $vmid = extract_param($param, 'vmid');
 
+   if ($vmid eq '.') {
+   $vmid = PVE::Cluster::next_unused_vmid(100, 10, 1);
+   print "creating VM with VMID '$vmid'\n" if $rpcenv->{type} eq 'cli';
+   }
+
my $archive = extract_param($param, 'archive');
 
my $storage = extract_param($param, 'storage');
-- 
2.1.4
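
A possible invocation once this series is applied (illustrative only; the
VMID '105' and the option values are made up):

  # let the API pick the next free VMID instead of specifying one
  qm create . --name testvm --memory 512
  # prints on the CLI: creating VM with VMID '105'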




[pve-devel] [PATCH manager] improve simultaneous free VMID api calls (#889)

2016-10-17 Thread Thomas Lamprecht
The /cluster/nextid call is not thread safe: when making calls in
(quasi) parallel, the callers may get overlapping VMIDs, and then only
the first one actually writing the config to the pmxcfs "wins" and
may use it.

Use the new 'next_unused_vmid' from the cluster package to improve
this. It not only avoids the race condition in the method itself but
also reserves the VMID temporarily (60s), so that the caller may take
some time between calling nextid and actually writing the config.

Signed-off-by: Thomas Lamprecht 
---
 PVE/API2/Cluster.pm | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
index 331e2f7..5172b93 100644
--- a/PVE/API2/Cluster.pm
+++ b/PVE/API2/Cluster.pm
@@ -450,6 +450,12 @@ __PACKAGE__->register_method({
additionalProperties => 0,
properties => {
vmid => get_standard_option('pve-vmid', {optional => 1}),
+   reserve => {
+   type => 'boolean',
+   optional => 1,
+   default => 0,
+   description => "Reserve the returned VMID temporary (only 
reserved in nextid calls). Useful when creating VMs in parallel.",
+   }
},
 },
 returns => {
@@ -467,11 +473,7 @@ __PACKAGE__->register_method({
raise_param_exc({ vmid => "VM $vmid already exists" });
}
 
-   for (my $i = 100; $i < 1; $i++) {
-   return $i if !defined($idlist->{$i});
-   }
-
-   die "unable to get any free VMID\n";
+   return PVE::Cluster::next_unused_vmid(100, 10, $param->{reserve});
 }});
 
 1;
-- 
2.1.4
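
A sketch of how a client could use the new parameter once this patch is
applied (the exact pvesh output handling here is an assumption):

  # reserve a free VMID for up to 60 seconds, then create the guest with it
  vmid=$(pvesh get /cluster/nextid -reserve 1)
  qm create $vmid --name parallel-test --memory 512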




[pve-devel] Fix Bug #889 and allow automatic VMID selection on create

2016-10-17 Thread Thomas Lamprecht
The GET /cluster/nextid API call is not secured against race conditions and
parallel accesses in general.
Users of the API who created multiple CTs in parallel, or at least very fast
after each other, ran into problems here: multiple calls got the same VMID
returned.

Fix this by allowing the /cluster/nextid call to virtually reserve VMIDs
temporarily.
Also add the possibility that the create_vm calls for CTs and VMs can
auto-select a VMID, for convenience (it was also requested as a feature).

This means changes in a few packages (cluster, common, container,
qemu-server).

The first two patches should be quite straightforward and relevant for the
bug fix.

The other three may be seen as RFC.
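
To illustrate the race being fixed (the IDs below are hypothetical): two
parallel callers can currently both get the same answer from the unmodified
call, and the second create then fails:

  pvesh get /cluster/nextid   # -> 105
  pvesh get /cluster/nextid   # -> 105 again, because nothing was reserved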




[pve-devel] [RFC common] add 'pve-vmid-optional' format and standard option

2016-10-17 Thread Thomas Lamprecht
This will be used in cases where the VMID may not be important. For
example, some users do not care which VMID a CT/VM gets; they just
want a CT/VM.

Signed-off-by: Thomas Lamprecht 
---
 src/PVE/JSONSchema.pm | 16 
 1 file changed, 16 insertions(+)

diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index caeefe2..7a4a601 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -56,6 +56,11 @@ register_standard_option('pve-vmid', {
 minimum => 1
 });
 
+register_standard_option('pve-vmid-optional', {
+description => "The (unique) ID of the VM or a '.' to autogenerate one.",
+type => 'string', format => 'pve-vmid-optional',
+});
+
 register_standard_option('pve-node', {
 description => "The cluster node name.",
 type => 'string', format => 'pve-node',
@@ -155,6 +160,17 @@ sub pve_verify_vmid {
 return $vmid;
 }
 
+register_format('pve-vmid-optional', \&pve_verify_vmid_optional);
+sub pve_verify_vmid_optional {
+my ($vmid, $noerr) = @_;
+
+if ($vmid !~ m/^([1-9][0-9]{2,8}|\.)$/) {
+   return undef if $noerr;
+   die "value does not look like a valid optional VM ID\n";
+}
+return $vmid;
+}
+
 register_format('pve-node', \&pve_verify_node_name);
 sub pve_verify_node_name {
 my ($node, $noerr) = @_;
-- 
2.1.4
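
A minimal illustration of what the new format accepts, assuming this patch is
applied (the calling code is only a sketch):

  use PVE::JSONSchema;
  PVE::JSONSchema::pve_verify_vmid_optional('100');   # ok: a normal VMID >= 100
  PVE::JSONSchema::pve_verify_vmid_optional('.');     # ok: means "pick one for me"
  # PVE::JSONSchema::pve_verify_vmid_optional('42');  # would die: not a valid optional VM ID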




[pve-devel] [RFC container] allow automatic VMID selection on create

2016-10-17 Thread Thomas Lamprecht
Using a dot (.) as the VMID will automatically select an available one;
this can be helpful for mass CT creation or may simply be
convenient.

Signed-off-by: Thomas Lamprecht 
---
 src/PVE/API2/LXC.pm | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 15ebb87..7a0b97a 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -109,7 +109,7 @@ __PACKAGE__->register_method({
additionalProperties => 0,
properties => PVE::LXC::Config->json_config_properties({
node => get_standard_option('pve-node'),
-   vmid => get_standard_option('pve-vmid', { completion => \&PVE::Cluster::complete_next_vmid }),
+   vmid => get_standard_option('pve-vmid-optional', { completion => \&PVE::Cluster::complete_next_vmid }),
ostemplate => {
description => "The OS template or backup file.",
type => 'string',
@@ -170,6 +170,11 @@ __PACKAGE__->register_method({
 
my $vmid = extract_param($param, 'vmid');
 
+   if ($vmid eq '.') {
+   $vmid = PVE::Cluster::next_unused_vmid(100, 10, 1);
+   print "creating CT with VMID '$vmid'\n" if $rpcenv->{type} eq 'cli';
+   }
+
my $ignore_unpack_errors = extract_param($param, 
'ignore-unpack-errors');
 
my $basecfg_fn = PVE::LXC::Config->config_file($vmid);
-- 
2.1.4
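
For containers the usage would mirror the qemu-server patch (illustrative;
the template name and options are made up):

  pct create . local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz --memory 512
  # prints on the CLI: creating CT with VMID '106'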




[pve-devel] [PATCH cluster] add next_unused_vmid

2016-10-17 Thread Thomas Lamprecht
This can be used to get an unused VMID in a thread-safe way. The new
VMID can also be reserved temporarily (60s for now), so that multiple
calls to the API at the same time, which often first request a VMID
and only later actually claim it by writing the VMID.conf file, do
not conflict with each other.

The implemented method is similar to the next_unused_port method
from the PVE::Tools package, with the distinction that the file for
reserved VMIDs lives on the cluster FS, as VMIDs are a cluster-wide
resource.

This allows us to address bug #889.

Signed-off-by: Thomas Lamprecht 
---
 data/PVE/Cluster.pm | 68 +
 data/src/status.c   |  1 +
 2 files changed, 69 insertions(+)

diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index fe741cf..be569ad 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -75,6 +75,7 @@ my $observed = {
 'ha/groups.cfg' => 1,
 'ha/fence.cfg' => 1,
 'status.cfg' => 1,
+'.pve-reserved-vmids' => 1,
 };
 
 # only write output if something fails
@@ -996,6 +997,73 @@ sub check_vmid_unused {
 die "$vmtypestr $vmid already exists on node '$d->{node}'\n";
 }
 
+cfs_register_file(".pve-reserved-vmids",
+ sub { my ($fn, $raw) = @_; return defined($raw) ? $raw : ''; 
},
+ sub { my ($fn, $raw) = @_; return $raw; });
+
+sub next_unused_vmid {
+my ($range_start, $range_end, $reserve) = @_;
+
+# We use a file to register allocated VMIDs
+# Those registrations expires after $expiretime.
+# We use this to avoid race conditions between
+# allocation and use of VMIDs.
+
+my $file = ".pve-reserved-vmids";
+
+my $code = sub {
+
+   my $expiretime = 60;
+   my $ctime = time();
+
+   my $reserved_vmids = {};
+
+   my $vmlist = get_vmlist() || {};
+   my $idlist = $vmlist->{ids} || {};
+
+   if (my $raw = cfs_read_file($file)) {
+   while ($raw =~ /^\h*(.*?)\h*$/gm) {
+   my $line = $1;
+   if ($line =~ m/^(\d+)\s(\d+)$/) {
+   my ($vmid, $timestamp) = ($1, $2);
+   if (($timestamp + $expiretime) > $ctime) {
+   $reserved_vmids->{$vmid} = $timestamp; # not expired
+   }
+   }
+   }
+   }
+
+   my $new_vmid;
+   for (my $vmid = $range_start; $vmid < $range_end; $vmid++) {
+   next if $reserved_vmids->{$vmid};
+
+   if (!defined($idlist->{$vmid})) {
+   $new_vmid = $vmid;
+   $reserved_vmids->{$vmid} = $ctime;
+   last;
+   }
+   }
+
+   if ($reserve) {
+   my $data = "";
+   foreach my $vmid (keys %$reserved_vmids) {
+   $data .= "$vmid $reserved_vmids->{$vmid}\n";
+   }
+
+   cfs_write_file($file, $data);
+   }
+
+   return $new_vmid;
+};
+
+my $vmid = cfs_lock_file($file, 10, $code);
+die $@ if $@;
+
+die "unable to get any free VMID\n" if !$vmid;
+
+return $vmid;
+}
+
 sub check_node_exists {
 my ($nodename, $noerr) = @_;
 
diff --git a/data/src/status.c b/data/src/status.c
index 3896fcb..42ac5d4 100644
--- a/data/src/status.c
+++ b/data/src/status.c
@@ -89,6 +89,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "ha/groups.cfg" },
{ .path = "ha/fence.cfg" },
{ .path = "status.cfg" },
+   { .path = ".reserved-vmids" },
 };
 
 static GMutex mutex;
-- 
2.1.4
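
An illustrative caller, mirroring how the API patches in this series use the
new helper (the upper bound here is just a placeholder value):

  use PVE::Cluster;

  # scan for a free VMID starting at 100 and reserve it for ~60 seconds, so a
  # parallel caller cannot be handed the same ID before we write the config
  my $vmid = PVE::Cluster::next_unused_vmid(100, 999999, 1);
  print "allocated VMID $vmid\n";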




Re: [pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 05:04 PM, Dietmar Maurer wrote:



IMHO such lists can be quite long, so how do you plan to display
a long lists here?



as it is now, it would simply linewrap and make the boxes bigger,
but yes this is a good point, i have to experiment a little with this

imho offline nodes won't be too many i think, and ha error vms should
also not be that much?

if you have that many vms in error state, i think you have bigger
problems than how the list is displayed, or not?


I would not design such problematic GUI. We can easily display
vms/nodes in error state elsewhere?



Hmm, the nodes are already listed in the grid at the bottom (including their
online status).

We could add another box with the HA error VMs?

Or we could display the status in the tree on the left (as an icon), but this
would mean we would have to embed the HA status in the /cluster/resources API
call.




Re: [pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dietmar Maurer

> > IMHO such lists can be quite long, so how do you plan to display
> > a long lists here?
> >
> 
> as it is now, it would simply linewrap and make the boxes bigger,
> but yes this is a good point, i have to experiment a little with this
> 
> imho offline nodes won't be too many i think, and ha error vms should 
> also not be that much?
> 
> if you have that many vms in error state, i think you have bigger 
> problems than how the list is displayed, or not?

I would not design such a problematic GUI. We can easily display
VMs/nodes in an error state elsewhere?



Re: [pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dietmar Maurer
> https://www.pictshare.net/cb2c08d9ca.png

It seems you try to display lists of nodes/guests in:

Offline Nodes:
Guest with errors:

IMHO such lists can be quite long, so how do you plan to display
such long lists here?



Re: [pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

On 10/17/2016 04:52 PM, Dietmar Maurer wrote:

https://www.pictshare.net/cb2c08d9ca.png


Seems you try to display lists of Nodes/Guests in:

Offline Nodes:
Guest with errors:

IMHO such lists can be quite long, so how do you plan to display
a long lists here?



As it is now, it would simply line-wrap and make the boxes bigger,
but yes, this is a good point, I have to experiment a little with this.

IMHO there won't be too many offline nodes, and there should not be that many
HA error VMs either?


If you have that many VMs in an error state, I think you have bigger
problems than how the list is displayed, or not?




Re: [pve-devel] [PATCH docs] bibliography: add Ceph Cookbook

2016-10-17 Thread Dietmar Maurer
This book is already in the list. Maybe we can remove the older edition?

> On October 17, 2016 at 1:31 PM Fabian Grünbichler 
> wrote:
> 
> 
> ---
>  pve-bibliography.adoc | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/pve-bibliography.adoc b/pve-bibliography.adoc
> index d721c3d..e1fc280 100644
> --- a/pve-bibliography.adoc
> +++ b/pve-bibliography.adoc
> @@ -62,6 +62,11 @@ endif::manvolnum[]
>Packt Publishing, 2015.
>ISBN 978-1783985623
>  
> +- [[[Singh16]]] Karan Signh.
> +  'Ceph Cookbook'
> +  Packt Publishing, 2016.
> +  ISBN 978-1784393502
> +
>  - [[[Mauerer08]]] Wolfgang Mauerer.
>'Professional Linux Kernel Architecture'.
>John Wiley & Sons, 2008.
> -- 
> 2.1.4
> 
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] Feedback wanted - Cluster Dashboard

2016-10-17 Thread Dominik Csapak

Hi all,

I am currently working on a cluster dashboard
and wanted to get feedback from you all.

Please be aware that this is a mockup only, no functionality yet,
so no patches for now.

I will also post it on the forum (maybe tomorrow) to get
additional feedback.

Please discuss :)

PS: for clarification:

The values under "cluster resources" are for the whole cluster:
CPU is the node average,
memory is summed up over all nodes,
and storages are also summed up over all nodes (but each distinct
storage is only counted once).
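
A rough sketch of the aggregation described above, iterating over the
/cluster/resources data (field and variable names are assumptions; the real
dashboard code is ExtJS, this only spells out the arithmetic):

  my $resources = [];   # assume: decoded result of GET /cluster/resources
  my ($nodes, $cpu_sum, $mem_used, $mem_total, $disk_used, $disk_total) = (0) x 6;
  my %seen_storage;
  foreach my $res (@$resources) {
      if ($res->{type} eq 'node') {
          $nodes++;
          $cpu_sum   += $res->{cpu};            # per-node CPU usage fraction
          $mem_used  += $res->{mem};
          $mem_total += $res->{maxmem};
      } elsif ($res->{type} eq 'storage') {
          next if $seen_storage{$res->{storage}}++;   # count each distinct storage once
          $disk_used  += $res->{disk};
          $disk_total += $res->{maxdisk};
      }
  }
  my $cpu_avg = $nodes ? $cpu_sum / $nodes : 0;       # "cpu is the node average"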


And yes, I know that the status on top is not consistent with the one
on the bottom :P

https://www.pictshare.net/cb2c08d9ca.png



Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Alexandre DERUMIER
>>So we'd need a way to switch back, then again the remote side might be
>>dead at this point... we could try though?

Setting the new drive in pending until the whole migration is done, so the user
can use revert?
I think it should be done manually by the user, because the user may not want
to lose new data written to the target storage.


About multiple disks: I thought we couldn't use transactions, but reading the
libvirt mailing list it seems possible to launch multiple drive-mirror jobs in
parallel:
https://www.redhat.com/archives/libvir-list/2015-April/msg00727.html
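
A hedged sketch of what "one drive-mirror per disk, in parallel" could look
like from the source node; the drive names, NBD URIs and the %nbd_uri_for map
are assumptions, not code from this patch:

  use PVE::QemuServer;

  my $vmid = 100;
  my %nbd_uri_for = (                      # assumed: reported back by the target node
      scsi0   => 'nbd:10.0.0.2:60001:exportname=drive-scsi0',
      virtio1 => 'nbd:10.0.0.2:60001:exportname=drive-virtio1',
  );

  foreach my $drive (sort keys %nbd_uri_for) {
      PVE::QemuServer::vm_mon_cmd($vmid, 'drive-mirror',
          device => "drive-$drive",
          target => $nbd_uri_for{$drive},
          sync   => 'full',
          format => 'raw',
      );
  }
  # then poll query-block-jobs until every job reports ready before switching over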



- Mail original -
De: "Wolfgang Bumiller" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Lundi 17 Octobre 2016 15:41:49
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

On Mon, Oct 17, 2016 at 03:33:38PM +0200, Alexandre DERUMIER wrote: 
> >>considering some of the code looks like it's prepared for multiple 
> >>disks, I wonder if the remote side should send a mapping containing the 
> >>old + new names? 
> 
> yes, I think it can prepare output for multiple disks, it'll be easier later. 
> Maybe simply send multiple lines, 1 by disk ? 
> 
> 
> 
> > + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, 
> > $nbd_uri, $vmid); 
> > + #update config 
> 
> >>As far as I can see you have qemu running on the remote side already, 
> >>since you use hmp/mon commnads to export the nbd devices, so it seems 
> >>it would be a better choice to update this after the migration has 
> >>completed, and change the cleanup code below to detach the nbd drive. 
> 
> The problem is that when drive_mirror is finished, the source vm write to the 
> remote disk. 

So we'd need a way to switch back, then again the remote side might be 
dead at this point... we could try though? 

> So I think it's better to update config, to have the new disk in config. 
> If source host die before end of livemigration, user only need to move the 
> config file to destination host. 
> 
> 
> Alternativly, I was thinking to use pending to store new local disk path, and 
> switch at the end of the live migration. 
> Maybe add a node=xxx option to drive during the migration, to known where the 
> local disk is located exactly. 
> I'm not sure... 



Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Wolfgang Bumiller
On Mon, Oct 17, 2016 at 03:33:38PM +0200, Alexandre DERUMIER wrote:
> >>considering some of the code looks like it's prepared for multiple
> >>disks, I wonder if the remote side should send a mapping containing the
> >>old + new names?
> 
> yes, I think it can prepare output for multiple disks, it'll be easier later.
> Maybe simply send multiple lines, 1 by disk ?
> 
> 
> 
> > + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, 
> > $nbd_uri, $vmid); 
> > + #update config 
> 
> >>As far as I can see you have qemu running on the remote side already, 
> >>since you use hmp/mon commnads to export the nbd devices, so it seems 
> >>it would be a better choice to update this after the migration has 
> >>completed, and change the cleanup code below to detach the nbd drive. 
> 
> The problem is that when drive_mirror is finished, the source vm write to the 
> remote disk.

So we'd need a way to switch back, then again the remote side might be
dead at this point... we could try though?

> So I think it's better to update config, to have the new disk in config.
> If source host die before end of livemigration, user only need to move the 
> config file to destination host.
> 
> 
> Alternativly, I was thinking to use pending to store new local disk path, and 
> switch at the end of the live migration.
> Maybe add a node=xxx option to drive during the migration, to known where the 
> local disk is located exactly.
> I'm not sure...



Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Alexandre DERUMIER
>>considering some of the code looks like it's prepared for multiple
>>disks, I wonder if the remote side should send a mapping containing the
>>old + new names?

Yes, I think it can prepare output for multiple disks; it'll be easier later.
Maybe simply send multiple lines, one per disk?
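
For example, the target node could print one such line per mirrored disk
(the host, port and volume names below are made up; the line format follows
the regex in the patch under review):

  storage migration listens on nbd:10.0.0.2:60001:exportname=drive-scsi0 volume:local-lvm:vm-100-disk-1
  storage migration listens on nbd:10.0.0.2:60001:exportname=drive-scsi1 volume:local-lvm:vm-100-disk-2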



> + PVE::QemuServer::qemu_drive_mirror($vmid, $self->{target_drive}, $nbd_uri, 
> $vmid); 
> + #update config 

>>As far as I can see you have qemu running on the remote side already, 
>>since you use hmp/mon commnads to export the nbd devices, so it seems 
>>it would be a better choice to update this after the migration has 
>>completed, and change the cleanup code below to detach the nbd drive. 

The problem is that when drive_mirror is finished, the source VM writes to the
remote disk.
So I think it's better to update the config, to have the new disk in the config.
If the source host dies before the end of the live migration, the user only
needs to move the config file to the destination host.


Alternatively, I was thinking of using pending to store the new local disk
path, and switching at the end of the live migration.
Maybe add a node=xxx option to the drive during the migration, to know where
the local disk is located exactly.
I'm not sure...
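
A hypothetical illustration of the two ideas above (this is not an existing
config layout; the node= drive option does not exist yet):

  scsi0: local-lvm:vm-100-disk-1,size=32G
  [PENDING]
  scsi0: nfs-target:100/vm-100-disk-1.qcow2,size=32G,node=targetnode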








- Mail original -
De: "Wolfgang Bumiller" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Lundi 17 Octobre 2016 15:16:39
Objet: Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

On Tue, Oct 11, 2016 at 04:45:19PM +0200, Alexandre Derumier wrote: 
> This allow to migrate a local storage (only 1 for now) to a remote node 
> storage. 
> 
> When the target node start, a new volume is created and exposed through qemu 
> embedded nbd server. 
> 
> qemu drive-mirror is done on source vm with nbd server as target. 
> 
> when drive-mirror is done, the source vm is running the disk though nbd. 
> 
> Then we live migration the vm to destination node. 
> 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/API2/Qemu.pm | 7 + 
> PVE/QemuMigrate.pm | 91 
> +++--- 
> 2 files changed, 93 insertions(+), 5 deletions(-) 
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm 
> index 21fbebb..acb1412 100644 
> --- a/PVE/API2/Qemu.pm 
> +++ b/PVE/API2/Qemu.pm 
> @@ -2648,6 +2648,10 @@ __PACKAGE__->register_method({ 
> description => "Allow to migrate VMs which use local devices. Only root may 
> use this option.", 
> optional => 1, 
> }, 
> + targetstorage => get_standard_option('pve-storage-id', { 
> + description => "Target storage.", 
> + optional => 1, 
> + }), 
> }, 
> }, 
> returns => { 
> @@ -2674,6 +2678,9 @@ __PACKAGE__->register_method({ 
> 
> my $vmid = extract_param($param, 'vmid'); 
> 
> + raise_param_exc({ targetstorage => "Live Storage migration can only be done 
> online" }) 
> + if !$param->{online} && $param->{targetstorage}; 
> + 
> raise_param_exc({ force => "Only root may use this option." }) 
> if $param->{force} && $authuser ne 'root@pam'; 
> 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm 
> index 22a49ef..6e90296 100644 
> --- a/PVE/QemuMigrate.pm 
> +++ b/PVE/QemuMigrate.pm 
> @@ -170,9 +170,11 @@ sub prepare { 
> $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf); 
> 
> } 
> - 
> if (my $loc_res = PVE::QemuServer::check_local_resources($conf, 1)) { 
> - if ($self->{running} || !$self->{opts}->{force}) { 
> + if ($self->{running} && $self->{opts}->{targetstorage}){ 
> + $self->log('info', "migrating VM with online storage migration"); 
> + } 
> + elsif ($self->{running} || !$self->{opts}->{force} ) { 
> die "can't migrate VM which uses local devices\n"; 
> } else { 
> $self->log('info', "migrating VM which uses local devices"); 
> @@ -182,12 +184,16 @@ sub prepare { 
> my $vollist = PVE::QemuServer::get_vm_volumes($conf); 
> 
> my $need_activate = []; 
> + my $unsharedcount = 0; 
> foreach my $volid (@$vollist) { 
> my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1); 
> 
> # check if storage is available on both nodes 
> my $scfg = PVE::Storage::storage_check_node($self->{storecfg}, $sid); 
> - PVE::Storage::storage_check_node($self->{storecfg}, $sid, $self->{node}); 
> + my $targetsid = $sid; 
> + $targetsid = $self->{opts}->{targetstorage} if 
> $self->{opts}->{targetstorage}; 
> + 
> + PVE::Storage::storage_check_node($self->{storecfg}, $targetsid, 
> $self->{node}); 
> 
> if ($scfg->{shared}) { 
> # PVE::Storage::activate_storage checks this for non-shared storages 
> @@ -197,9 +203,12 @@ sub prepare { 
> } else { 
> # only activate if not shared 
> push @$need_activate, $volid; 
> + $unsharedcount++; 
> } 
> } 
> 
> + die "online storage migration don't support more than 1 local disk 
> currently" if $unsharedcount > 1; 
> + 
> # activate volumes 
> PVE::Storage::activate_volumes($self->{storecfg}, $need_activate); 
> 
> @@ -407,7 +416,7 @@ sub phase1 { 
> $conf->{lock} = 'migrate'; 
> 

Re: [pve-devel] [PATCH 1/3] add live storage migration with vm migration

2016-10-17 Thread Wolfgang Bumiller
On Tue, Oct 11, 2016 at 04:45:19PM +0200, Alexandre Derumier wrote:
> This allow to migrate a local storage (only 1 for now) to a remote node 
> storage.
> 
> When the target node start, a new volume is created and exposed through qemu 
> embedded nbd server.
> 
> qemu drive-mirror is done on source vm with nbd server as target.
> 
> when drive-mirror is done, the source vm is running the disk though nbd.
> 
> Then we live migration the vm to destination node.
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/API2/Qemu.pm   |  7 +
>  PVE/QemuMigrate.pm | 91 
> +++---
>  2 files changed, 93 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 21fbebb..acb1412 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2648,6 +2648,10 @@ __PACKAGE__->register_method({
>   description => "Allow to migrate VMs which use local devices. 
> Only root may use this option.",
>   optional => 1,
>   },
> + targetstorage => get_standard_option('pve-storage-id', {
> +description => "Target storage.",
> +optional => 1,
> + }),
>   },
>  },
>  returns => {
> @@ -2674,6 +2678,9 @@ __PACKAGE__->register_method({
>  
>   my $vmid = extract_param($param, 'vmid');
>  
> + raise_param_exc({ targetstorage => "Live Storage migration can only be 
> done online" })
> + if !$param->{online} && $param->{targetstorage};
> +
>   raise_param_exc({ force => "Only root may use this option." })
>   if $param->{force} && $authuser ne 'root@pam';
>  
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 22a49ef..6e90296 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -170,9 +170,11 @@ sub prepare {
>   $self->{forcemachine} = PVE::QemuServer::qemu_machine_pxe($vmid, $conf);
>  
>  }
> -
>  if (my $loc_res = PVE::QemuServer::check_local_resources($conf, 1)) {
> - if ($self->{running} || !$self->{opts}->{force}) {
> + if ($self->{running} && $self->{opts}->{targetstorage}){
> + $self->log('info', "migrating VM with online storage migration");
> + }
> + elsif ($self->{running} || !$self->{opts}->{force} ) {
>   die "can't migrate VM which uses local devices\n";
>   } else {
>   $self->log('info', "migrating VM which uses local devices");
> @@ -182,12 +184,16 @@ sub prepare {
>  my $vollist = PVE::QemuServer::get_vm_volumes($conf);
>  
>  my $need_activate = [];
> +my $unsharedcount = 0;
>  foreach my $volid (@$vollist) {
>   my ($sid, $volname) = PVE::Storage::parse_volume_id($volid, 1);
>  
>   # check if storage is available on both nodes
>   my $scfg = PVE::Storage::storage_check_node($self->{storecfg}, $sid);
> - PVE::Storage::storage_check_node($self->{storecfg}, $sid, 
> $self->{node});
> + my $targetsid = $sid;
> + $targetsid = $self->{opts}->{targetstorage} if 
> $self->{opts}->{targetstorage};
> +
> + PVE::Storage::storage_check_node($self->{storecfg}, $targetsid, 
> $self->{node});
>  
>   if ($scfg->{shared}) {
>   # PVE::Storage::activate_storage checks this for non-shared storages
> @@ -197,9 +203,12 @@ sub prepare {
>   } else {
>   # only activate if not shared
>   push @$need_activate, $volid;
> + $unsharedcount++;
>   }
>  }
>  
> +die "online storage migration don't support more than 1 local disk 
> currently" if $unsharedcount > 1;
> +
>  # activate volumes
>  PVE::Storage::activate_volumes($self->{storecfg}, $need_activate);
>  
> @@ -407,7 +416,7 @@ sub phase1 {
>  $conf->{lock} = 'migrate';
>  PVE::QemuConfig->write_config($vmid, $conf);
>  
> -sync_disks($self, $vmid);
> +sync_disks($self, $vmid) if !$self->{opts}->{targetstorage};
>  
>  };
>  
> @@ -452,7 +461,7 @@ sub phase2 {
>   $spice_ticket = $res->{ticket};
>  }
>  
> -push @$cmd , 'qm', 'start', $vmid, '--skiplock', '--migratedfrom', 
> $nodename;
> +push @$cmd , 'qm', 'start', $vmid, '--skiplock', '--migratedfrom', 
> $nodename, '--targetstorage', $self->{opts}->{targetstorage};
>  
>  # we use TCP only for unsecure migrations as TCP ssh forward tunnels 
> often
>  # did appeared to late (they are hard, if not impossible, to check for)
> @@ -472,6 +481,7 @@ sub phase2 {
>  }
>  
>  my $spice_port;
> +my $nbd_uri;
>  
>  # Note: We try to keep $spice_ticket secret (do not pass via command 
> line parameter)
>  # instead we pipe it through STDIN
> @@ -496,6 +506,13 @@ sub phase2 {
>  elsif ($line =~ m/^spice listens on port (\d+)$/) {
>   $spice_port = int($1);
>   }
> +elsif ($line =~ m/^storage migration listens on 
> nbd:(localhost|[\d\.]+|\[[\d\.:a-fA-F]+\]):(\d+):exportname=(\S+) 
> volume:(\S+)$/) {

considering some of the code looks like it's prepared 

[pve-devel] Applied: [PATCH 3/3] enable drive-mirror with iothread for qemu 2.7 v2

2016-10-17 Thread Wolfgang Bumiller
applied

On Mon, Oct 17, 2016 at 12:20:45PM +0200, Alexandre Derumier wrote:
> changelog : check running qemu binary version
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 7 +--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index e4c385f..f42b733 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -5932,8 +5932,11 @@ sub clone_disk {
>   if (!$running || $snapname) {
>   qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, 
> $sparseinit);
>   } else {
> - #qemu 2.6
> - die "drive-mirror is not working currently when iothread is 
> enabled" if $drive->{iothread};
> +
> + my $kvmver = get_running_qemu_version ($vmid);
> + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) {
> + die "drive-mirror with iothread only works since qemu 2.7" if 
> $drive->{iothread};
> + }
>  
>   qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, 
> $sparseinit);
>   }
> -- 
> 2.1.4



[pve-devel] Applied: qemu2.7: cpu hotplug && hot-unplug v2

2016-10-17 Thread Wolfgang Bumiller
applied whole series



Re: [pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for qemu 2.7 v3

2016-10-17 Thread Alexandre DERUMIER
OK,
I have done more tests, and it seems that I can trigger the bug.

Sometimes the virtio-scsi controller can't be unplugged after some plug/unplug
cycles.

I can't reproduce it 100% of the time; sometimes it happens after 3
hot-unplugs, sometimes after 10.

I have also tried a manual "device_del", and the controller is stuck.


So, better to disable it for now. (I'll try to contact the qemu devs to see
where the problem is.)

- Mail original -
De: "aderumier" 
À: "Wolfgang Bumiller" 
Cc: "pve-devel" 
Envoyé: Lundi 17 Octobre 2016 13:31:37
Objet: Re: [pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for 
qemu 2.7 v3

>>What kind of setup did you test this with? 

debian jessie, kernel 3.16. 

I have tried to remove/readd multiple time a virtio-scsi disk, no error. 


with qemu 2.6, I had errors when removing then readd same drive. 

I'll do more tests with arch to see if I can reproduce 

- Mail original - 
De: "Wolfgang Bumiller"  
À: "aderumier"  
Cc: "pve-devel"  
Envoyé: Lundi 17 Octobre 2016 13:17:45 
Objet: Re: [pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for 
qemu 2.7 v3 

So, when testing this on a linux guest (4.7.6, Arch) I still get errors 
(and see kernel traces in the guest's dmesg) with the iothread flag... 
Removing "works" (it's removed, but the qemu command still errors and 
you get an error in the pve GUI) - re-adding doesn't work (always shows 
me kernel stack traces in dmesg). 

What kind of setup did you test this with? 

On Mon, Oct 17, 2016 at 12:20:44PM +0200, Alexandre Derumier wrote: 
> changelog: check current running qemu process 
> 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/QemuServer.pm | 13 + 
> 1 file changed, 9 insertions(+), 4 deletions(-) 
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
> index 6376323..e4c385f 100644 
> --- a/PVE/QemuServer.pm 
> +++ b/PVE/QemuServer.pm 
> @@ -3516,9 +3516,11 @@ sub vm_deviceunplug { 
> 
> } elsif ($deviceid =~ m/^(scsi)(\d+)$/) { 
> 
> - #qemu 2.3 segfault on drive_del with virtioscsi + iothread 
> - my $device = parse_drive($deviceid, $conf->{$deviceid}); 
> - die "virtioscsi with iothread is not hot-unplugglable currently" if 
> $device->{iothread}; 
> + my $kvmver = get_running_qemu_version($vmid); 
> + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) { 
> + my $device = parse_drive($deviceid, $conf->{$deviceid}); 
> + die "virtioscsi with iothread is hot-unplugglable since qemu 2.7" if 
> $device->{iothread}; 
> + } 
> 
> qemu_devicedel($vmid, $deviceid); 
> qemu_drivedel($vmid, $deviceid); 
> @@ -3564,7 +3566,10 @@ sub qemu_iothread_add { 
> sub qemu_iothread_del { 
> my($conf, $vmid, $deviceid) = @_; 
> 
> - my $device = parse_drive($deviceid, $conf->{$deviceid}); 
> + my $drive = $deviceid; 
> + $drive =~ s/virtioscsi/scsi/; 
> + my $device = parse_drive($drive, $conf->{$drive}); 
> + 
> if ($device->{iothread}) { 
> my $iothreads = vm_iothreads_list($vmid); 
> qemu_objectdel($vmid, "iothread-$deviceid") if 
> $iothreads->{"iothread-$deviceid"}; 
> -- 
> 2.1.4 
> 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 



Re: [pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for qemu 2.7 v3

2016-10-17 Thread Alexandre DERUMIER
>>What kind of setup did you test this with?

Debian Jessie, kernel 3.16.

I have tried to remove/re-add a virtio-scsi disk multiple times, no error.


With qemu 2.6, I had errors when removing and then re-adding the same drive.

I'll do more tests with Arch to see if I can reproduce it.

- Mail original -
De: "Wolfgang Bumiller" 
À: "aderumier" 
Cc: "pve-devel" 
Envoyé: Lundi 17 Octobre 2016 13:17:45
Objet: Re: [pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for 
qemu 2.7 v3

So, when testing this on a linux guest (4.7.6, Arch) I still get errors 
(and see kernel traces in the guest's dmesg) with the iothread flag... 
Removing "works" (it's removed, but the qemu command still errors and 
you get an error in the pve GUI) - re-adding doesn't work (always shows 
me kernel stack traces in dmesg). 

What kind of setup did you test this with? 

On Mon, Oct 17, 2016 at 12:20:44PM +0200, Alexandre Derumier wrote: 
> changelog: check current running qemu process 
> 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/QemuServer.pm | 13 + 
> 1 file changed, 9 insertions(+), 4 deletions(-) 
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
> index 6376323..e4c385f 100644 
> --- a/PVE/QemuServer.pm 
> +++ b/PVE/QemuServer.pm 
> @@ -3516,9 +3516,11 @@ sub vm_deviceunplug { 
> 
> } elsif ($deviceid =~ m/^(scsi)(\d+)$/) { 
> 
> - #qemu 2.3 segfault on drive_del with virtioscsi + iothread 
> - my $device = parse_drive($deviceid, $conf->{$deviceid}); 
> - die "virtioscsi with iothread is not hot-unplugglable currently" if 
> $device->{iothread}; 
> + my $kvmver = get_running_qemu_version($vmid); 
> + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) { 
> + my $device = parse_drive($deviceid, $conf->{$deviceid}); 
> + die "virtioscsi with iothread is hot-unplugglable since qemu 2.7" if 
> $device->{iothread}; 
> + } 
> 
> qemu_devicedel($vmid, $deviceid); 
> qemu_drivedel($vmid, $deviceid); 
> @@ -3564,7 +3566,10 @@ sub qemu_iothread_add { 
> sub qemu_iothread_del { 
> my($conf, $vmid, $deviceid) = @_; 
> 
> - my $device = parse_drive($deviceid, $conf->{$deviceid}); 
> + my $drive = $deviceid; 
> + $drive =~ s/virtioscsi/scsi/; 
> + my $device = parse_drive($drive, $conf->{$drive}); 
> + 
> if ($device->{iothread}) { 
> my $iothreads = vm_iothreads_list($vmid); 
> qemu_objectdel($vmid, "iothread-$deviceid") if 
> $iothreads->{"iothread-$deviceid"}; 
> -- 
> 2.1.4 
> 
> ___ 
> pve-devel mailing list 
> pve-devel@pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 



[pve-devel] [PATCH docs] bibliography: add Ceph Cookbook

2016-10-17 Thread Fabian Grünbichler
---
 pve-bibliography.adoc | 5 +
 1 file changed, 5 insertions(+)

diff --git a/pve-bibliography.adoc b/pve-bibliography.adoc
index d721c3d..e1fc280 100644
--- a/pve-bibliography.adoc
+++ b/pve-bibliography.adoc
@@ -62,6 +62,11 @@ endif::manvolnum[]
   Packt Publishing, 2015.
   ISBN 978-1783985623
 
+- [[[Singh16]]] Karan Singh.
+  'Ceph Cookbook'.
+  Packt Publishing, 2016.
+  ISBN 978-1784393502
+
 - [[[Mauerer08]]] Wolfgang Mauerer.
   'Professional Linux Kernel Architecture'.
   John Wiley & Sons, 2008.
-- 
2.1.4




Re: [pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for qemu 2.7 v3

2016-10-17 Thread Wolfgang Bumiller
So, when testing this on a linux guest (4.7.6, Arch) I still get errors
(and see kernel traces in the guest's dmesg) with the iothread flag...
Removing "works" (it's removed, but the qemu command still errors and
you get an error in the pve GUI) - re-adding doesn't work (always shows
me kernel stack traces in dmesg).

What kind of setup did you test this with?

On Mon, Oct 17, 2016 at 12:20:44PM +0200, Alexandre Derumier wrote:
> changelog: check current running qemu process
> 
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 13 +
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 6376323..e4c385f 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -3516,9 +3516,11 @@ sub vm_deviceunplug {
>  
>  } elsif ($deviceid =~ m/^(scsi)(\d+)$/) {
>  
> - #qemu 2.3 segfault on drive_del with virtioscsi + iothread
> - my $device = parse_drive($deviceid, $conf->{$deviceid});
> - die "virtioscsi with iothread is not hot-unplugglable currently" if 
> $device->{iothread};
> + my $kvmver = get_running_qemu_version($vmid);
> + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) {
> + my $device = parse_drive($deviceid, $conf->{$deviceid});
> + die "virtioscsi with iothread is hot-unplugglable since qemu 2.7" 
> if $device->{iothread};
> + }
>  
>  qemu_devicedel($vmid, $deviceid);
>  qemu_drivedel($vmid, $deviceid);
> @@ -3564,7 +3566,10 @@ sub qemu_iothread_add {
>  sub qemu_iothread_del {
>  my($conf, $vmid, $deviceid) = @_;
>  
> -my $device = parse_drive($deviceid, $conf->{$deviceid});
> +my $drive = $deviceid;
> +$drive =~ s/virtioscsi/scsi/;
> +my $device = parse_drive($drive, $conf->{$drive});
> +
>  if ($device->{iothread}) {
>   my $iothreads = vm_iothreads_list($vmid);
>   qemu_objectdel($vmid, "iothread-$deviceid") if 
> $iothreads->{"iothread-$deviceid"};
> -- 
> 2.1.4
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server] Document shutdown behaviour when qemu-guest-agent is running

2016-10-17 Thread Fabian Grünbichler
On Mon, Oct 17, 2016 at 11:38:15AM +0200, Emmanuel Kasper wrote:
> On 10/17/2016 11:05 AM, Fabian Grünbichler wrote:
> > On Mon, Oct 17, 2016 at 10:52:26AM +0200, Emmanuel Kasper wrote:
> >> ---
> >>  PVE/API2/Qemu.pm | 6 --
> >>  1 file changed, 4 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> >> index ad7a0c0..f64a77c 100644
> >> --- a/PVE/API2/Qemu.pm
> >> +++ b/PVE/API2/Qemu.pm
> >> @@ -1858,8 +1858,10 @@ __PACKAGE__->register_method({
> >>  method => 'POST',
> >>  protected => 1,
> >>  proxyto => 'node',
> >> -description => "Shutdown virtual machine. This is similar to pressing 
> >> the power button on a physical machine." .
> >> -  "This will send an ACPI event for the guest OS, which should then 
> >> proceed to a clean shutdown.",
> >> +description => "Shutdown virtual machine. The default behaviour is to 
> >> send an ACPI event for the guest OS, similar " .
> >> +"to pressing the power button on a physical machine. The guest OS 
> >> should then proceed to a clean shutdown. " .
> >> +"If the Qemu GuestAgent is configured and running in the VM, the 
> >> shutdown will be initiated by the guest OS himself, " .
> >> +"calling the OS ExWindowsEx() function on Windows guests, or the 
> >> 'shutdown -P' command on posix guests.",
> >>  permissions => {
> >>check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
> >>  },
> > 
> > I'd keep it shorter and OS agnostic:
> > "If the Qemu GuestAgent is configured and running in the VM, the regular 
> > shutdown mechanism of the guest OS itself is used instead." (or "is 
> > triggered"?)
> 
> I found writing "the regular shutdown mechanism of the guest OS itself
> is used instead." not specific enough.
> 
> One could say shutting down via ACPI is also using the regular shutdown
> mechanism of the OS, or am I missing something ?

Yes/no/depends ;) It just triggers an (emulated) hardware event, which
can be handled by the OS in various ways. Usually it should cause the OS
to do a regular shutdown, but it could also be ignored, or do whatever
else the OS was configured/written to do.

> 
> What about:
> 
> Shutdown virtual machine. The default behaviour is to send an ACPI event
> to the guest OS, similar " .
> "to pressing the power button on a physical machine. The guest OS
> should then proceed to a clean shutdown. " .
> "If the Qemu Guest Agent is configured and running in the VM, the
> shutdown will be initiated by the guest OS himself, " .
> "similar to entering a 'shutdown' command inside the VM.

sounds good (with s/himself/itself/)



[pve-devel] [PATCH 1/3] add get_running_qemu_version

2016-10-17 Thread Alexandre Derumier
return the version of the currently running qemu process

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 46d0403..6376323 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5961,6 +5961,13 @@ sub get_current_qemu_machine {
 return $current || $default || 'pc';
 }
 
+sub get_running_qemu_version {
+my ($vmid) = @_;
+my $cmd = { execute => 'query-version', arguments => {} };
+my $res = vm_qmp_command($vmid, $cmd);
+return "$res->{qemu}->{major}.$res->{qemu}->{minor}";
+}
+
 sub qemu_machine_feature_enabled {
 my ($machine, $kvmver, $version_major, $version_minor) = @_;
 
-- 
2.1.4
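
Illustrative use (from within PVE::QemuServer), mirroring how the follow-up
patches gate features on the running qemu binary instead of the machine type:

  my $kvmver = get_running_qemu_version($vmid);          # e.g. "2.7"
  if (qemu_machine_feature_enabled(undef, $kvmver, 2, 7)) {
      # safe to use features that need a running qemu >= 2.7
  }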



[pve-devel] [PATCH 2/3] enable virtio-scsi iothread hot-unplug for qemu 2.7 v3

2016-10-17 Thread Alexandre Derumier
changelog: check current running qemu process

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6376323..e4c385f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3516,9 +3516,11 @@ sub vm_deviceunplug {
 
 } elsif ($deviceid =~ m/^(scsi)(\d+)$/) {
 
-   #qemu 2.3 segfault on drive_del with virtioscsi + iothread
-   my $device = parse_drive($deviceid, $conf->{$deviceid});
-   die "virtioscsi with iothread is not hot-unplugglable currently" if 
$device->{iothread};
+   my $kvmver = get_running_qemu_version($vmid);
+   if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) {
+   my $device = parse_drive($deviceid, $conf->{$deviceid});
+   die "virtioscsi with iothread is hot-unplugglable since qemu 2.7" 
if $device->{iothread};
+   }
 
 qemu_devicedel($vmid, $deviceid);
 qemu_drivedel($vmid, $deviceid);
@@ -3564,7 +3566,10 @@ sub qemu_iothread_add {
 sub qemu_iothread_del {
 my($conf, $vmid, $deviceid) = @_;
 
-my $device = parse_drive($deviceid, $conf->{$deviceid});
+my $drive = $deviceid;
+$drive =~ s/virtioscsi/scsi/;
+my $device = parse_drive($drive, $conf->{$drive});
+
 if ($device->{iothread}) {
my $iothreads = vm_iothreads_list($vmid);
qemu_objectdel($vmid, "iothread-$deviceid") if 
$iothreads->{"iothread-$deviceid"};
-- 
2.1.4



[pve-devel] enable drive-mirror && virtio-scsi unplug with iothreads

2016-10-17 Thread Alexandre Derumier
changelog:

check the currently running qemu version to enable them.
(We can use older machine versions; only the current qemu version is needed.)



[pve-devel] [PATCH 3/3] enable drive-mirror with iothread for qemu 2.7 v2

2016-10-17 Thread Alexandre Derumier
changelog : check running qemu binary version

Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e4c385f..f42b733 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5932,8 +5932,11 @@ sub clone_disk {
if (!$running || $snapname) {
qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, 
$sparseinit);
} else {
-   #qemu 2.6
-   die "drive-mirror is not working currently when iothread is 
enabled" if $drive->{iothread};
+
+   my $kvmver = get_running_qemu_version ($vmid);
+   if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) {
+   die "drive-mirror with iothread only works since qemu 2.7" if 
$drive->{iothread};
+   }
 
qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, 
$sparseinit);
}
-- 
2.1.4



[pve-devel] [PATCH 3/4] cpu hotplug : add cpu hot-unplug support

2016-10-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 30 --
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 186fae1..e3c2550 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3761,6 +3761,8 @@ sub qemu_usb_hotplug {
 sub qemu_cpu_hotplug {
 my ($vmid, $conf, $vcpus) = @_;
 
+my $machine_type = PVE::QemuServer::get_current_qemu_machine($vmid);
+
 my $sockets = 1;
 $sockets = $conf->{smp} if $conf->{smp}; # old style - no longer iused
 $sockets = $conf->{sockets} if  $conf->{sockets};
@@ -3773,8 +3775,32 @@ sub qemu_cpu_hotplug {
if $vcpus > $maxcpus;
 
 my $currentvcpus = $conf->{vcpus} || $maxcpus;
-die "online cpu unplug is not yet possible\n"
-   if $vcpus < $currentvcpus;
+
+if ($vcpus < $currentvcpus) { 
+
+   if (qemu_machine_feature_enabled ($machine_type, undef, 2, 7)) {
+
+   for (my $i = $currentvcpus; $i > $vcpus; $i--) {
+   qemu_devicedel($vmid, "cpu$i");
+   my $retry = 0;
+   my $currentrunningvcpus = undef;
+   while (1) {
+   $currentrunningvcpus = vm_mon_cmd($vmid, "query-cpus");
+   last if scalar(@{$currentrunningvcpus}) == $i-1;
+   raise_param_exc({ "cpu unplug" => "error unplug cpu$i" }) 
if $retry > 5;
+   $retry++;
+   sleep 1;
+   }
+   #update conf after each successful cpu unplug
+   $conf->{vcpus} = scalar(@{$currentrunningvcpus});
+   PVE::QemuConfig->write_config($vmid, $conf);
+   }
+   } else {
+   die "online cpu unplug is only possible since qemu 2.7\n"
+   }
+
+   return;
+}
 
 my $currentrunningvcpus = vm_mon_cmd($vmid, "query-cpus");
 die "vcpus in running vm is different than configuration\n"
-- 
2.1.4



[pve-devel] qemu2.7: cpu hotplug && hot-unplug v2

2016-10-17 Thread Alexandre Derumier
changelog: remove kvm_version() call for hotplug|unplug.



[pve-devel] [PATCH 1/4] cpu hotplug : add print_cpu_device

2016-10-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f4bb4dd..3e069ea 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1674,6 +1674,28 @@ sub print_netdev_full {
 return $netdev;
 }
 
+
+sub print_cpu_device {
+my ($conf, $id) = @_;
+
+my $nokvm = defined($conf->{kvm}) && $conf->{kvm} == 0 ? 1 : 0;
+my $cpu = $nokvm ? "qemu64" : "kvm64";
+if (my $cputype = $conf->{cpu}) {
+   my $cpuconf = PVE::JSONSchema::parse_property_string($cpu_fmt, $cputype)
+   or die "Cannot parse cpu description: $cputype\n";
+   $cpu = $cpuconf->{cputype};
+}
+
+my $sockets = 1;
+$sockets = $conf->{sockets} if  $conf->{sockets};
+my $cores = $conf->{cores} || 1;
+
+my $current_core = ($id - 1) % $cores;
+my $current_socket = int(($id - $current_core)/$cores);
+
+return 
"$cpu-x86_64-cpu,id=cpu$id,socket-id=$current_socket,core-id=$current_core,thread-id=0";
+}
+
 sub drive_is_cdrom {
 my ($drive) = @_;
 
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
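
A quick standalone sanity check of the socket/core arithmetic used in print_cpu_device() above (hypothetical sockets/cores values; this snippet is not part of the patch, it just replays the same formula):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # assume a hypothetical 2-socket, 4-core guest; same formula as the + lines above
    my ($sockets, $cores) = (2, 4);
    for my $id (1 .. $sockets * $cores) {
        my $core   = ($id - 1) % $cores;
        my $socket = int(($id - $core) / $cores);
        printf "cpu%d -> socket-id=%d, core-id=%d, thread-id=0\n", $id, $socket, $core;
    }
    # prints cpu1 -> socket-id=0, core-id=0 ... cpu5 -> socket-id=1, core-id=0 ... cpu8 -> socket-id=1, core-id=3
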


[pve-devel] [PATCH 2/4] cpu hotplug : add coldplugged cpu to qemu command line

2016-10-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 3e069ea..186fae1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2975,8 +2975,18 @@ sub config_to_command {
 die "MAX $allowed_vcpus vcpus allowed per VM on this node\n"
if ($allowed_vcpus < $maxcpus);
 
-push @$cmd, '-smp', 
"$vcpus,sockets=$sockets,cores=$cores,maxcpus=$maxcpus";
+if($hotplug_features->{cpu} && qemu_machine_feature_enabled 
($machine_type, $kvmver, 2, 7)) {
 
+   push @$cmd, '-smp', "1,sockets=$sockets,cores=$cores,maxcpus=$maxcpus";
+for (my $i = 2; $i <= $vcpus; $i++)  {
+   my $cpustr = print_cpu_device($conf,$i);
+   push @$cmd, '-device', $cpustr;
+   }
+
+} else {
+
+   push @$cmd, '-smp', 
"$vcpus,sockets=$sockets,cores=$cores,maxcpus=$maxcpus";
+}
 push @$cmd, '-nodefaults';
 
 my $bootorder = $conf->{boot} || $confdesc->{boot}->{default};
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH 4/4] cpu hotplug : add new cpu hotplug method for qemu 2.7

2016-10-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 28 +---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e3c2550..46d0403 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3776,7 +3776,7 @@ sub qemu_cpu_hotplug {
 
 my $currentvcpus = $conf->{vcpus} || $maxcpus;
 
-if ($vcpus < $currentvcpus) { 
+if ($vcpus < $currentvcpus) {
 
if (qemu_machine_feature_enabled ($machine_type, undef, 2, 7)) {
 
@@ -3806,8 +3806,30 @@ sub qemu_cpu_hotplug {
 die "vcpus in running vm is different than configuration\n"
if scalar(@{$currentrunningvcpus}) != $currentvcpus;
 
-for (my $i = $currentvcpus; $i < $vcpus; $i++) {
-   vm_mon_cmd($vmid, "cpu-add", id => int($i));
+if (qemu_machine_feature_enabled ($machine_type, undef, 2, 7)) {
+
+   for (my $i = $currentvcpus+1; $i <= $vcpus; $i++) {
+   my $cpustr = print_cpu_device($conf, $i);
+   qemu_deviceadd($vmid, $cpustr);
+
+   my $retry = 0;
+   my $currentrunningvcpus = undef;
+   while (1) {
+   $currentrunningvcpus = vm_mon_cmd($vmid, "query-cpus");
+   last if scalar(@{$currentrunningvcpus}) == $i;
+   raise_param_exc({ "cpu hotplug" => "error hotplug cpu$i" }) if 
$retry > 10;
+   sleep 1;
+   $retry++;
+   }
+#update conf after each successful cpu hotplug
+   $conf->{vcpus} = scalar(@{$currentrunningvcpus});
+   PVE::QemuConfig->write_config($vmid, $conf);
+   }
+} else {
+
+   for (my $i = $currentvcpus; $i < $vcpus; $i++) {
+   vm_mon_cmd($vmid, "cpu-add", id => int($i));
+   }
 }
 }
 
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

2016-10-17 Thread Alexandre DERUMIER
>>for iothread, I think query-machines will do the job. 
>>We don't care about machine version, only the running qemu binary version is 
>>needed. 

I mean qmp "query-version" not "query-machines"
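
For reference, a minimal sketch of what such a helper could look like (assuming the vm_mon_cmd() wrapper already used elsewhere in PVE/QemuServer.pm; QMP's query-version returns a 'qemu' object with major/minor/micro fields):

    sub get_running_qemu_version {
        my ($vmid) = @_;

        # ask the running process itself, not the binary on disk
        my $res = vm_mon_cmd($vmid, 'query-version');
        return "$res->{qemu}->{major}.$res->{qemu}->{minor}";
    }

    # e.g. in clone_disk():
    # my $kvmver = get_running_qemu_version($vmid);
    # die "drive-mirror with iothread only works since qemu 2.7\n"
    #     if $drive->{iothread} && !qemu_machine_feature_enabled(undef, $kvmver, 2, 7);
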


- Original Message -
From: "aderumier" 
To: "Wolfgang Bumiller" 
Cc: "pve-devel" 
Sent: Monday, 17 October 2016 11:33:44
Subject: Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

>>But we actually have 3 states here: kvm binary version (not interesting 
>>when hotpluggin), machine version (only interesting when migrating), 
>>running qemu version (probably the most important). 

>>The machine part (query-machines) would give us the qemu version the 
>>VM was originally started with before migrations happened, but do all 3 
>>patches/series depend on this version (ie. the pc-i440fx machine 
>>version), or the actually running qemu? I'm pretty sure iothread issues 
>>are gone with an updated qemu regardless of the machine or what qemu we 
>>currently have in /bin if it's not the running one (kvm_user_version()). 
>>The latter is only ever interesting when starting a new VM as far as I 
>>can tell, whereas for hotplugging and migration the machine and 
>>running-version matter. (The latter probably much more than the former). 

yes, I thought about that too this weekend. 

for iothread, I think query-machines will do the job. 
We don't care about machine version, only the running qemu binary version is 
needed. 


For cpu hotplug, only machine version is needed. (I think I can remove the extra 
kvm_version() call) 


I'll update my patches 



- Original Message - 
From: "Wolfgang Bumiller"  
To: "aderumier"  
Cc: "pve-devel"  
Sent: Monday, 17 October 2016 11:21:10 
Subject: Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7 

On Mon, Oct 17, 2016 at 10:47:56AM +0200, Wolfgang Bumiller wrote: 
> On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote: 
> > Signed-off-by: Alexandre Derumier  
> > --- 
> > PVE/QemuServer.pm | 7 +-- 
> > 1 file changed, 5 insertions(+), 2 deletions(-) 
> > 
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
> > index 05edd7a..ec8df94 100644 
> > --- a/PVE/QemuServer.pm 
> > +++ b/PVE/QemuServer.pm 
> > @@ -5933,8 +5933,11 @@ sub clone_disk { 
> > if (!$running || $snapname) { 
> > qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit); 
> > } else { 
> > - #qemu 2.6 
> > - die "drive-mirror is not working currently when iothread is enabled" if 
> > $drive->{iothread}; 
> > + 
> > + my $kvmver = kvm_user_version(); 
> > + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) { 
> 
> Considering this is in the path where $running is true, I think this 
> should include the get_current_qemu_machine() parameter? 

Actually, the same for the cpu hotplug and virtio-scsi iothread patches. 
In the cpuhotplug v1 series you said: 
> qemu_machine_feature_enabled() return current qemu process or if machine 
> version is forced (live migration for example), 
> it's return the machine version. 

But we actually have 3 states here: kvm binary version (not interesting 
when hotpluggin), machine version (only interesting when migrating), 
running qemu version (probably the most important). 

The machine part (query-machines) would give us the qemu version the 
VM was originally started with before migrations happened, but do all 3 
patches/series depend on this version (ie. the pc-i440fx machine 
version), or the actually running qemu? I'm pretty sure iothread issues 
are gone with an updated qemu regardless of the machine or what qemu we 
currently have in /bin if it's not the running one (kvm_user_version()). 
The latter is only ever interesting when starting a new VM as far as I 
can tell, whereas for hotplugging and migration the machine and 
running-version matter. (The latter probably much more than the former). 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

2016-10-17 Thread Wolfgang Bumiller
On Mon, Oct 17, 2016 at 11:33:44AM +0200, Alexandre DERUMIER wrote:
> >>But we actually have 3 states here: kvm binary version (not interesting
> >>when hotpluggin), machine version (only interesting when migrating),
> >>running qemu version (probably the most important).
> 
> >>The machine part (query-machines) would give us the qemu version the
> >>VM was originally started with before migrations happened, but do all 3
> >>patches/series depend on this version (ie. the pc-i440fx machine
> >>version), or the actually running qemu? I'm pretty sure iothread issues
> >>are gone with an updated qemu regardless of the machine or what qemu we
> >>currently have in /bin if it's not the running one (kvm_user_version()).
> >>The latter is only ever interesting when starting a new VM as far as I
> >>can tell, whereas for hotplugging and migration the machine and
> >>running-version matter. (The latter probably much more than the former).
> 
> yes, I thinked about that too this weekend.
> 
> for iothread, I think query-machines will do the job. 
> We don't care about machine version, only the running qemu binary version is 
> needed.

But wouldn't that be query-version? If we migrate from 2.6 to 2.7 then
query-machine would give us pc-i440fx-2.6, no?

> 
> 
> For cpu hotplug, only machine version is needed.(I think I can remove the 
> extra kvm_version() call)
> 
> 
> I'll update my patches
> 
> 
> 
> - Original Message -
> From: "Wolfgang Bumiller" 
> To: "aderumier" 
> Cc: "pve-devel" 
> Sent: Monday, 17 October 2016 11:21:10
> Subject: Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7
> 
> On Mon, Oct 17, 2016 at 10:47:56AM +0200, Wolfgang Bumiller wrote: 
> > On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote: 
> > > Signed-off-by: Alexandre Derumier  
> > > --- 
> > > PVE/QemuServer.pm | 7 +-- 
> > > 1 file changed, 5 insertions(+), 2 deletions(-) 
> > > 
> > > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
> > > index 05edd7a..ec8df94 100644 
> > > --- a/PVE/QemuServer.pm 
> > > +++ b/PVE/QemuServer.pm 
> > > @@ -5933,8 +5933,11 @@ sub clone_disk { 
> > > if (!$running || $snapname) { 
> > > qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, 
> > > $sparseinit); 
> > > } else { 
> > > - #qemu 2.6 
> > > - die "drive-mirror is not working currently when iothread is enabled" if 
> > > $drive->{iothread}; 
> > > + 
> > > + my $kvmver = kvm_user_version(); 
> > > + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) { 
> > 
> > Considering this is in the path where $running is true, I think this 
> > should include the get_current_qemu_machine() parameter? 
> 
> Actually, the same for the cpu hotplug and virtio-scsi iothread patches. 
> In the cpuhotplug v1 series you said: 
> > qemu_machine_feature_enabled() return current qemu process or if machine 
> > version is forced (live migration for example), 
> > it's return the machine version. 
> 
> But we actually have 3 states here: kvm binary version (not interesting 
> when hotpluggin), machine version (only interesting when migrating), 
> running qemu version (probably the most important). 
> 
> The machine part (query-machines) would give us the qemu version the 
> VM was originally started with before migrations happened, but do all 3 
> patches/series depend on this version (ie. the pc-i440fx machine 
> version), or the actually running qemu? I'm pretty sure iothread issues 
> are gone with an updated qemu regardless of the machine or what qemu we 
> currently have in /bin if it's not the running one (kvm_user_version()). 
> The latter is only ever interesting when starting a new VM as far as I 
> can tell, whereas for hotplugging and migration the machine and 
> running-version matter. (The latter probably much more than the former). 
> 
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] Document shutdown behaviour when qemu-guest-agent is running

2016-10-17 Thread Emmanuel Kasper
On 10/17/2016 11:05 AM, Fabian Grünbichler wrote:
> On Mon, Oct 17, 2016 at 10:52:26AM +0200, Emmanuel Kasper wrote:
>> ---
>>  PVE/API2/Qemu.pm | 6 --
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index ad7a0c0..f64a77c 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -1858,8 +1858,10 @@ __PACKAGE__->register_method({
>>  method => 'POST',
>>  protected => 1,
>>  proxyto => 'node',
>> -description => "Shutdown virtual machine. This is similar to pressing 
>> the power button on a physical machine." .
>> -"This will send an ACPI event for the guest OS, which should then 
>> proceed to a clean shutdown.",
>> +description => "Shutdown virtual machine. The default behaviour is to 
>> send an ACPI event for the guest OS, similar " .
>> +"to pressing the power button on a physical machine. The guest OS 
>> should then proceed to a clean shutdown. " .
>> +"If the Qemu GuestAgent is configured and running in the VM, the 
>> shutdown will be initiated by the guest OS himself, " .
>> +"calling the OS ExWindowsEx() function on Windows guests, or the 
>> 'shutdown -P' command on posix guests.",
>>  permissions => {
>>  check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
>>  },
> 
> I'd keep it shorter and OS agnostic:
> "If the Qemu GuestAgent is configured and running in the VM, the regular 
> shutdown mechanism of the guest OS itself is used instead." (or "is 
> triggered"?)

I found writing "the regular shutdown mechanism of the guest OS itself
is used instead." not specific enough.

One could say shutting down via ACPI is also using the regular shutdown
mechanism of the OS, or am I missing something ?

What about:

Shutdown virtual machine. The default behaviour is to send an ACPI event
to the guest OS, similar " .
"to pressing the power button on a physical machine. The guest OS
should then proceed to a clean shutdown. " .
"If the Qemu Guest Agent is configured and running in the VM, the
shutdown will be initiated by the guest OS itself, " .
"similar to entering a 'shutdown' command inside the VM.




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

2016-10-17 Thread Alexandre DERUMIER
>>But we actually have 3 states here: kvm binary version (not interesting
>>when hotpluggin), machine version (only interesting when migrating),
>>running qemu version (probably the most important).

>>The machine part (query-machines) would give us the qemu version the
>>VM was originally started with before migrations happened, but do all 3
>>patches/series depend on this version (ie. the pc-i440fx machine
>>version), or the actually running qemu? I'm pretty sure iothread issues
>>are gone with an updated qemu regardless of the machine or what qemu we
>>currently have in /bin if it's not the running one (kvm_user_version()).
>>The latter is only ever interesting when starting a new VM as far as I
>>can tell, whereas for hotplugging and migration the machine and
>>running-version matter. (The latter probably much more than the former).

yes, I thought about that too this weekend.

for iothread, I think query-machines will do the job. 
We don't care about machine version, only the running qemu binary version is 
needed.


For cpu hotplug, only machine version is needed. (I think I can remove the extra 
kvm_version() call)


I'll update my patches



- Original Message -
From: "Wolfgang Bumiller" 
To: "aderumier" 
Cc: "pve-devel" 
Sent: Monday, 17 October 2016 11:21:10
Subject: Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

On Mon, Oct 17, 2016 at 10:47:56AM +0200, Wolfgang Bumiller wrote: 
> On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote: 
> > Signed-off-by: Alexandre Derumier  
> > --- 
> > PVE/QemuServer.pm | 7 +-- 
> > 1 file changed, 5 insertions(+), 2 deletions(-) 
> > 
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm 
> > index 05edd7a..ec8df94 100644 
> > --- a/PVE/QemuServer.pm 
> > +++ b/PVE/QemuServer.pm 
> > @@ -5933,8 +5933,11 @@ sub clone_disk { 
> > if (!$running || $snapname) { 
> > qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit); 
> > } else { 
> > - #qemu 2.6 
> > - die "drive-mirror is not working currently when iothread is enabled" if 
> > $drive->{iothread}; 
> > + 
> > + my $kvmver = kvm_user_version(); 
> > + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) { 
> 
> Considering this is in the path where $running is true, I think this 
> should include the get_current_qemu_machine() parameter? 

Actually, the same for the cpu hotplug and virtio-scsi iothread patches. 
In the cpuhotplug v1 series you said: 
> qemu_machine_feature_enabled() return current qemu process or if machine 
> version is forced (live migration for example), 
> it's return the machine version. 

But we actually have 3 states here: kvm binary version (not interesting 
when hotpluggin), machine version (only interesting when migrating), 
running qemu version (probably the most important). 

The machine part (query-machines) would give us the qemu version the 
VM was originally started with before migrations happened, but do all 3 
patches/series depend on this version (ie. the pc-i440fx machine 
version), or the actually running qemu? I'm pretty sure iothread issues 
are gone with an updated qemu regardless of the machine or what qemu we 
currently have in /bin if it's not the running one (kvm_user_version()). 
The latter is only ever interesting when starting a new VM as far as I 
can tell, whereas for hotplugging and migration the machine and 
running-version matter. (The latter probably much more than the former). 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

2016-10-17 Thread Wolfgang Bumiller
On Mon, Oct 17, 2016 at 10:47:56AM +0200, Wolfgang Bumiller wrote:
> On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote:
> > Signed-off-by: Alexandre Derumier 
> > ---
> >  PVE/QemuServer.pm | 7 +--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> > index 05edd7a..ec8df94 100644
> > --- a/PVE/QemuServer.pm
> > +++ b/PVE/QemuServer.pm
> > @@ -5933,8 +5933,11 @@ sub clone_disk {
> > if (!$running || $snapname) {
> > qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, 
> > $sparseinit);
> > } else {
> > -   #qemu 2.6
> > -   die "drive-mirror is not working currently when iothread is 
> > enabled" if $drive->{iothread};
> > +
> > +   my $kvmver = kvm_user_version();
> > +   if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) {
> 
> Considering this is in the path where $running is true, I think this
> should include the get_current_qemu_machine() parameter?

Actually, the same for the cpu hotplug and virtio-scsi iothread patches.
In the cpuhotplug v1 series you said:
> qemu_machine_feature_enabled() return current qemu process or if machine 
> version is forced (live migration for example),
> it's return the machine version.

But we actually have 3 states here: kvm binary version (not interesting
when hotplugging), machine version (only interesting when migrating),
running qemu version (probably the most important).

The machine part (query-machines) would give us the qemu version the
VM was originally started with before migrations happened, but do all 3
patches/series depend on this version (ie. the pc-i440fx machine
version), or the actually running qemu? I'm pretty sure iothread issues
are gone with an updated qemu regardless of the machine or what qemu we
currently have in /bin if it's not the running one (kvm_user_version()).
The latter is only ever interesting when starting a new VM as far as I
can tell, whereas for hotplugging and migration the machine and
running-version matter. (The latter probably much more than the former).
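
To make the three states concrete, roughly where each value would come from (kvm_user_version() and get_current_qemu_machine() already exist in PVE/QemuServer.pm; the query-version call is a sketch based on the QMP API, not code from any of the patches):

    # 1) version of the kvm binary currently installed on disk
    my $binver  = kvm_user_version();                  # e.g. '2.7.0'

    # 2) machine type the VM was started with (kept across live migration)
    my $machine = get_current_qemu_machine($vmid);     # e.g. 'pc-i440fx-2.6'

    # 3) version of the qemu process that is actually running right now
    my $running = vm_mon_cmd($vmid, 'query-version');  # {qemu => {major => 2, minor => 7, ...}}
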

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] Document shutdown behaviour when qemu-guest-agent is running

2016-10-17 Thread Fabian Grünbichler
On Mon, Oct 17, 2016 at 10:52:26AM +0200, Emmanuel Kasper wrote:
> ---
>  PVE/API2/Qemu.pm | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index ad7a0c0..f64a77c 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -1858,8 +1858,10 @@ __PACKAGE__->register_method({
>  method => 'POST',
>  protected => 1,
>  proxyto => 'node',
> -description => "Shutdown virtual machine. This is similar to pressing 
> the power button on a physical machine." .
> - "This will send an ACPI event for the guest OS, which should then 
> proceed to a clean shutdown.",
> +description => "Shutdown virtual machine. The default behaviour is to 
> send an ACPI event for the guest OS, similar " .
> +"to pressing the power button on a physical machine. The guest OS should 
> then proceed to a clean shutdown. " .
> +"If the Qemu GuestAgent is configured and running in the VM, the 
> shutdown will be initiated by the guest OS himself, " .
> +"calling the OS ExWindowsEx() function on Windows guests, or the 
> 'shutdown -P' command on posix guests.",
>  permissions => {
>   check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
>  },

I'd keep it shorter and OS agnostic:
"If the Qemu GuestAgent is configured and running in the VM, the regular 
shutdown mechanism of the guest OS itself is used instead." (or "is triggered"?)

Also, I'd change "send an ACPI event for the guest OS" to "send an ACPI
event to the guest OS".

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] Add a note referencing the qm man page for the shutdown internals

2016-10-17 Thread Emmanuel Kasper
---
Not 100% sure if these shutdown explanations belong to asciidoc or to the
generated manual.
 qm.adoc | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/qm.adoc b/qm.adoc
index 3624e2f..5d711f7 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -464,6 +464,9 @@ start after those where the parameter is set, and this 
parameter only
 makes sense between the machines running locally on a host, and not
 cluster-wide.
 
+NOTE: The `qm` manual page explains in detail how the shutdown command is sent
+internally to the VM.
+
 Managing Virtual Machines with `qm`
 
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] Document shutdown behaviour when qemu-guest-agent is running

2016-10-17 Thread Emmanuel Kasper
---
 PVE/API2/Qemu.pm | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ad7a0c0..f64a77c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1858,8 +1858,10 @@ __PACKAGE__->register_method({
 method => 'POST',
 protected => 1,
 proxyto => 'node',
-description => "Shutdown virtual machine. This is similar to pressing the 
power button on a physical machine." .
-   "This will send an ACPI event for the guest OS, which should then 
proceed to a clean shutdown.",
+description => "Shutdown virtual machine. The default behaviour is to send 
an ACPI event for the guest OS, similar " .
+"to pressing the power button on a physical machine. The guest OS should 
then proceed to a clean shutdown. " .
+"If the Qemu GuestAgent is configured and running in the VM, the shutdown 
will be initiated by the guest OS himself, " .
+"calling the OS ExWindowsEx() function on Windows guests, or the 'shutdown 
-P' command on posix guests.",
 permissions => {
check => ['perm', '/vms/{vmid}', [ 'VM.PowerMgmt' ]],
 },
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] enable drive-mirror with iothread for qemu 2.7

2016-10-17 Thread Wolfgang Bumiller
On Thu, Oct 13, 2016 at 12:00:42PM +0200, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 7 +--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 05edd7a..ec8df94 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -5933,8 +5933,11 @@ sub clone_disk {
>   if (!$running || $snapname) {
>   qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, 
> $sparseinit);
>   } else {
> - #qemu 2.6
> - die "drive-mirror is not working currently when iothread is 
> enabled" if $drive->{iothread};
> +
> + my $kvmver = kvm_user_version();
> + if (!qemu_machine_feature_enabled (undef, $kvmver, 2, 7)) {

Considering this is in the path where $running is true, I think this
should include the get_current_qemu_machine() parameter?

> + die "drive-mirror with iothread only works since qemu 2.7" if 
> $drive->{iothread};
> + }
>  
>   qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, 
> $sparseinit);
>   }
> -- 
> 2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 5/7] add sas regression tests

2016-10-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 test/disk_tests/sas/disklist|  1 +
 test/disk_tests/sas/disklist_expected.json  | 17 
 test/disk_tests/sas/sda/device/model|  1 +
 test/disk_tests/sas/sda/device/vendor   |  1 +
 test/disk_tests/sas/sda/queue/rotational|  1 +
 test/disk_tests/sas/sda/size|  1 +
 test/disk_tests/sas/sda_smart   | 20 +++
 test/disk_tests/sas/sda_smart_expected.json |  5 +
 test/disk_tests/sas/sda_udevadm | 31 +
 9 files changed, 78 insertions(+)
 create mode 100644 test/disk_tests/sas/disklist
 create mode 100644 test/disk_tests/sas/disklist_expected.json
 create mode 100644 test/disk_tests/sas/sda/device/model
 create mode 100644 test/disk_tests/sas/sda/device/vendor
 create mode 100644 test/disk_tests/sas/sda/queue/rotational
 create mode 100644 test/disk_tests/sas/sda/size
 create mode 100644 test/disk_tests/sas/sda_smart
 create mode 100644 test/disk_tests/sas/sda_smart_expected.json
 create mode 100644 test/disk_tests/sas/sda_udevadm

diff --git a/test/disk_tests/sas/disklist b/test/disk_tests/sas/disklist
new file mode 100644
index 000..9191c61
--- /dev/null
+++ b/test/disk_tests/sas/disklist
@@ -0,0 +1 @@
+sda
diff --git a/test/disk_tests/sas/disklist_expected.json 
b/test/disk_tests/sas/disklist_expected.json
new file mode 100644
index 000..77f7d33
--- /dev/null
+++ b/test/disk_tests/sas/disklist_expected.json
@@ -0,0 +1,17 @@
+{
+"sda" : {
+   "gpt" : 1,
+   "devpath" : "/dev/sda",
+   "type" : "unknown",
+   "model" : "MODEL1",
+   "health" : "UNKNOWN",
+   "osdid" : -1,
+   "journals" : 0,
+   "wwn" : "0x",
+   "vendor" : "VENDOR1",
+   "rpm" : -1,
+   "size" : 512,
+   "serial" : "SER2",
+   "wearout" : "N/A"
+}
+}
diff --git a/test/disk_tests/sas/sda/device/model 
b/test/disk_tests/sas/sda/device/model
new file mode 100644
index 000..6b69674
--- /dev/null
+++ b/test/disk_tests/sas/sda/device/model
@@ -0,0 +1 @@
+MODEL1
diff --git a/test/disk_tests/sas/sda/device/vendor 
b/test/disk_tests/sas/sda/device/vendor
new file mode 100644
index 000..a7894eb
--- /dev/null
+++ b/test/disk_tests/sas/sda/device/vendor
@@ -0,0 +1 @@
+VENDOR1
diff --git a/test/disk_tests/sas/sda/queue/rotational 
b/test/disk_tests/sas/sda/queue/rotational
new file mode 100644
index 000..d00491f
--- /dev/null
+++ b/test/disk_tests/sas/sda/queue/rotational
@@ -0,0 +1 @@
+1
diff --git a/test/disk_tests/sas/sda/size b/test/disk_tests/sas/sda/size
new file mode 100644
index 000..5caff40
--- /dev/null
+++ b/test/disk_tests/sas/sda/size
@@ -0,0 +1 @@
+1
diff --git a/test/disk_tests/sas/sda_smart b/test/disk_tests/sas/sda_smart
new file mode 100644
index 000..856af39
--- /dev/null
+++ b/test/disk_tests/sas/sda_smart
@@ -0,0 +1,20 @@
+=== START OF READ SMART DATA SECTION ===
+SMART Health Status: OK
+
+Percentage used endurance indicator: 0%
+Current Drive Temperature: 20 C
+Drive Trip Temperature:70 C
+
+Manufactured in week 47 of year 2012
+Specified cycle count over device lifetime:  0
+Accumulated start-stop cycles:  0
+Specified load-unload count over device lifetime:  0
+Accumulated load-unload cycles:  0
+Elements in grown defect list: 0
+
+Vendor (Seagate) cache information
+  Blocks sent to initiator = 1286675833552896
+
+Vendor (Seagate/Hitachi) factory information
+  number of hours powered up = 7127.12
+  number of minutes until next internal SMART test = 0
diff --git a/test/disk_tests/sas/sda_smart_expected.json 
b/test/disk_tests/sas/sda_smart_expected.json
new file mode 100644
index 000..e096c8b
--- /dev/null
+++ b/test/disk_tests/sas/sda_smart_expected.json
@@ -0,0 +1,5 @@
+{
+"health" : "OK",
+"text" : "\nPercentage used endurance indicator: 0%\nCurrent Drive 
Temperature: 20 C\nDrive Trip Temperature:70 C\n\nManufactured in 
week 47 of year 2012\nSpecified cycle count over device lifetime:  
0\nAccumulated start-stop cycles:  0\nSpecified load-unload count over device 
lifetime:  0\nAccumulated load-unload cycles:  0\nElements in grown defect 
list: 0\n\nVendor (Seagate) cache information\n  Blocks sent to initiator = 
1286675833552896\n\nVendor (Seagate/Hitachi) factory information\n  number of 
hours powered up = 7127.12\n  number of minutes until next internal SMART test 
= 0\n",
+"type" : "text"
+}
diff --git a/test/disk_tests/sas/sda_udevadm b/test/disk_tests/sas/sda_udevadm
new file mode 100644
index 000..ac0744d
--- /dev/null
+++ b/test/disk_tests/sas/sda_udevadm
@@ -0,0 +1,31 @@
+P: 
/devices/pci:00/:00:03.0/:02:00.0/host4/port-4:0/end_device-4:0/target4:0:0/4:0:0:0/block/sda
+N: sda
+S: disk/by-id/scsi-0
+S: disk/by-id/wwn-0x
+S: disk/by-path/pci-:02:00.0-sas-0x-lun-0
+E: DEVLINKS=/dev/disk/by-id/scsi-0 

[pve-devel] [PATCH storage 7/7] add disk usages regression test

2016-10-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 test/disk_tests/usages/disklist   |  6 ++
 test/disk_tests/usages/disklist_expected.json | 97 +++
 test/disk_tests/usages/mounts |  2 +
 test/disk_tests/usages/partlist   |  2 +
 test/disk_tests/usages/pvs|  1 +
 test/disk_tests/usages/sda/device/vendor  |  1 +
 test/disk_tests/usages/sda/queue/rotational   |  1 +
 test/disk_tests/usages/sda/size   |  1 +
 test/disk_tests/usages/sda_udevadm| 12 
 test/disk_tests/usages/sdb/device/vendor  |  1 +
 test/disk_tests/usages/sdb/queue/rotational   |  1 +
 test/disk_tests/usages/sdb/size   |  1 +
 test/disk_tests/usages/sdb_udevadm| 12 
 test/disk_tests/usages/sdc/device/vendor  |  1 +
 test/disk_tests/usages/sdc/queue/rotational   |  1 +
 test/disk_tests/usages/sdc/size   |  1 +
 test/disk_tests/usages/sdc_udevadm| 12 
 test/disk_tests/usages/sdd/device/vendor  |  1 +
 test/disk_tests/usages/sdd/queue/rotational   |  1 +
 test/disk_tests/usages/sdd/size   |  1 +
 test/disk_tests/usages/sdd_udevadm| 12 
 test/disk_tests/usages/sde/device/vendor  |  1 +
 test/disk_tests/usages/sde/queue/rotational   |  1 +
 test/disk_tests/usages/sde/size   |  1 +
 test/disk_tests/usages/sde_udevadm| 12 
 test/disk_tests/usages/sdf/device/vendor  |  1 +
 test/disk_tests/usages/sdf/queue/rotational   |  1 +
 test/disk_tests/usages/sdf/size   |  1 +
 test/disk_tests/usages/sdf_udevadm| 12 
 test/disk_tests/usages/zpool  |  6 ++
 30 files changed, 204 insertions(+)
 create mode 100644 test/disk_tests/usages/disklist
 create mode 100644 test/disk_tests/usages/disklist_expected.json
 create mode 100644 test/disk_tests/usages/mounts
 create mode 100644 test/disk_tests/usages/partlist
 create mode 100644 test/disk_tests/usages/pvs
 create mode 100644 test/disk_tests/usages/sda/device/vendor
 create mode 100644 test/disk_tests/usages/sda/queue/rotational
 create mode 100644 test/disk_tests/usages/sda/size
 create mode 100644 test/disk_tests/usages/sda_udevadm
 create mode 100644 test/disk_tests/usages/sdb/device/vendor
 create mode 100644 test/disk_tests/usages/sdb/queue/rotational
 create mode 100644 test/disk_tests/usages/sdb/size
 create mode 100644 test/disk_tests/usages/sdb_udevadm
 create mode 100644 test/disk_tests/usages/sdc/device/vendor
 create mode 100644 test/disk_tests/usages/sdc/queue/rotational
 create mode 100644 test/disk_tests/usages/sdc/size
 create mode 100644 test/disk_tests/usages/sdc_udevadm
 create mode 100644 test/disk_tests/usages/sdd/device/vendor
 create mode 100644 test/disk_tests/usages/sdd/queue/rotational
 create mode 100644 test/disk_tests/usages/sdd/size
 create mode 100644 test/disk_tests/usages/sdd_udevadm
 create mode 100644 test/disk_tests/usages/sde/device/vendor
 create mode 100644 test/disk_tests/usages/sde/queue/rotational
 create mode 100644 test/disk_tests/usages/sde/size
 create mode 100644 test/disk_tests/usages/sde_udevadm
 create mode 100644 test/disk_tests/usages/sdf/device/vendor
 create mode 100644 test/disk_tests/usages/sdf/queue/rotational
 create mode 100644 test/disk_tests/usages/sdf/size
 create mode 100644 test/disk_tests/usages/sdf_udevadm
 create mode 100644 test/disk_tests/usages/zpool

diff --git a/test/disk_tests/usages/disklist b/test/disk_tests/usages/disklist
new file mode 100644
index 000..b5daaf1
--- /dev/null
+++ b/test/disk_tests/usages/disklist
@@ -0,0 +1,6 @@
+sda
+sdb
+sdc
+sdd
+sde
+sdf
diff --git a/test/disk_tests/usages/disklist_expected.json 
b/test/disk_tests/usages/disklist_expected.json
new file mode 100644
index 000..3d1241c
--- /dev/null
+++ b/test/disk_tests/usages/disklist_expected.json
@@ -0,0 +1,97 @@
+{
+"sdf" : {
+   "gpt" : 1,
+   "rpm" : 0,
+   "size" : 1536000,
+   "type" : "hdd",
+   "osdid" : "444",
+   "health" : "UNKNOWN",
+   "model" : "MODEL1",
+   "used" : "mounted",
+   "journals" : 0,
+   "wearout" : "N/A",
+   "wwn" : "0x",
+   "devpath" : "/dev/sdf",
+   "vendor" : "ATA",
+   "serial" : "SERIAL1"
+},
+"sde" : {
+   "wwn" : "0x",
+   "devpath" : "/dev/sde",
+   "serial" : "SERIAL1",
+   "vendor" : "ATA",
+   "health" : "UNKNOWN",
+   "rpm" : 0,
+   "size" : 1536000,
+   "gpt" : 1,
+   "osdid" : -1,
+   "type" : "hdd",
+   "model" : "MODEL1",
+   "used" : "Device Mapper",
+   "journals" : 0,
+   "wearout" : "N/A"
+},
+"sdb" : {
+   "serial" : "SERIAL1",
+   "vendor" : "ATA",
+   "wwn" : "0x",
+   "devpath" : "/dev/sdb",
+   "model" : "MODEL1",
+   "used" : "LVM",
+   "journals" : 0,
+   "wearout" : "N/A",
+   "health" : 

[pve-devel] [PATCH storage 3/7] add ssd and smart regression tests

2016-10-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 test/disk_tests/ssd_smart/disklist|   5 +
 test/disk_tests/ssd_smart/disklist_expected.json  |  77 +++
 test/disk_tests/ssd_smart/sda/device/vendor   |   1 +
 test/disk_tests/ssd_smart/sda/queue/rotational|   1 +
 test/disk_tests/ssd_smart/sda/size|   1 +
 test/disk_tests/ssd_smart/sda_smart   |  39 
 test/disk_tests/ssd_smart/sda_smart_expected.json | 236 
 test/disk_tests/ssd_smart/sda_udevadm |  11 +
 test/disk_tests/ssd_smart/sdb/device/vendor   |   1 +
 test/disk_tests/ssd_smart/sdb/queue/rotational|   1 +
 test/disk_tests/ssd_smart/sdb/size|   1 +
 test/disk_tests/ssd_smart/sdb_smart   |  41 
 test/disk_tests/ssd_smart/sdb_smart_expected.json | 256 ++
 test/disk_tests/ssd_smart/sdb_udevadm |  12 +
 test/disk_tests/ssd_smart/sdc/device/vendor   |   1 +
 test/disk_tests/ssd_smart/sdc/queue/rotational|   1 +
 test/disk_tests/ssd_smart/sdc/size|   1 +
 test/disk_tests/ssd_smart/sdc_smart   |  16 ++
 test/disk_tests/ssd_smart/sdc_smart_expected.json |  16 ++
 test/disk_tests/ssd_smart/sdc_udevadm |  11 +
 test/disk_tests/ssd_smart/sdd/device/vendor   |   1 +
 test/disk_tests/ssd_smart/sdd/queue/rotational|   1 +
 test/disk_tests/ssd_smart/sdd/size|   1 +
 test/disk_tests/ssd_smart/sdd_smart   |  40 
 test/disk_tests/ssd_smart/sdd_smart_expected.json |  16 ++
 test/disk_tests/ssd_smart/sdd_udevadm |  11 +
 test/disk_tests/ssd_smart/sde/device/vendor   |   1 +
 test/disk_tests/ssd_smart/sde/queue/rotational|   1 +
 test/disk_tests/ssd_smart/sde/size|   1 +
 test/disk_tests/ssd_smart/sde_smart   |  19 ++
 test/disk_tests/ssd_smart/sde_smart_expected.json |  46 
 test/disk_tests/ssd_smart/sde_udevadm |  11 +
 32 files changed, 878 insertions(+)
 create mode 100644 test/disk_tests/ssd_smart/disklist
 create mode 100644 test/disk_tests/ssd_smart/disklist_expected.json
 create mode 100644 test/disk_tests/ssd_smart/sda/device/vendor
 create mode 100644 test/disk_tests/ssd_smart/sda/queue/rotational
 create mode 100644 test/disk_tests/ssd_smart/sda/size
 create mode 100644 test/disk_tests/ssd_smart/sda_smart
 create mode 100644 test/disk_tests/ssd_smart/sda_smart_expected.json
 create mode 100644 test/disk_tests/ssd_smart/sda_udevadm
 create mode 100644 test/disk_tests/ssd_smart/sdb/device/vendor
 create mode 100644 test/disk_tests/ssd_smart/sdb/queue/rotational
 create mode 100644 test/disk_tests/ssd_smart/sdb/size
 create mode 100644 test/disk_tests/ssd_smart/sdb_smart
 create mode 100644 test/disk_tests/ssd_smart/sdb_smart_expected.json
 create mode 100644 test/disk_tests/ssd_smart/sdb_udevadm
 create mode 100644 test/disk_tests/ssd_smart/sdc/device/vendor
 create mode 100644 test/disk_tests/ssd_smart/sdc/queue/rotational
 create mode 100644 test/disk_tests/ssd_smart/sdc/size
 create mode 100644 test/disk_tests/ssd_smart/sdc_smart
 create mode 100644 test/disk_tests/ssd_smart/sdc_smart_expected.json
 create mode 100644 test/disk_tests/ssd_smart/sdc_udevadm
 create mode 100644 test/disk_tests/ssd_smart/sdd/device/vendor
 create mode 100644 test/disk_tests/ssd_smart/sdd/queue/rotational
 create mode 100644 test/disk_tests/ssd_smart/sdd/size
 create mode 100644 test/disk_tests/ssd_smart/sdd_smart
 create mode 100644 test/disk_tests/ssd_smart/sdd_smart_expected.json
 create mode 100644 test/disk_tests/ssd_smart/sdd_udevadm
 create mode 100644 test/disk_tests/ssd_smart/sde/device/vendor
 create mode 100644 test/disk_tests/ssd_smart/sde/queue/rotational
 create mode 100644 test/disk_tests/ssd_smart/sde/size
 create mode 100644 test/disk_tests/ssd_smart/sde_smart
 create mode 100644 test/disk_tests/ssd_smart/sde_smart_expected.json
 create mode 100644 test/disk_tests/ssd_smart/sde_udevadm

diff --git a/test/disk_tests/ssd_smart/disklist 
b/test/disk_tests/ssd_smart/disklist
new file mode 100644
index 000..18048d5
--- /dev/null
+++ b/test/disk_tests/ssd_smart/disklist
@@ -0,0 +1,5 @@
+sda
+sdb
+sdc
+sdd
+sde
diff --git a/test/disk_tests/ssd_smart/disklist_expected.json 
b/test/disk_tests/ssd_smart/disklist_expected.json
new file mode 100644
index 000..f00717f
--- /dev/null
+++ b/test/disk_tests/ssd_smart/disklist_expected.json
@@ -0,0 +1,77 @@
+{
+"sda" : {
+   "serial" : "",
+   "vendor" : "ATA",
+   "rpm" : 0,
+   "gpt" : 1,
+   "health" : "PASSED",
+   "journals" : 0,
+   "wearout" : "N/A",
+   "osdid" : -1,
+   "size" : 512000,
+   "type" : "ssd",
+   "devpath" : "/dev/sda",
+   "model" : "Crucial_CT500MX200SSD1",
+   "wwn" : "0x"
+},
+"sdb" : {
+   "model" : "INTEL_SSDSC2BB080G6",
+   "devpath" : "/dev/sdb",
+   "osdid" : -1,
+   

[pve-devel] [PATCH storage 4/7] add nvme regression test

2016-10-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 test/disk_tests/nvme_smart/disklist|  1 +
 test/disk_tests/nvme_smart/disklist_expected.json  | 17 +
 test/disk_tests/nvme_smart/nvme0_smart | 22 ++
 test/disk_tests/nvme_smart/nvme0n1/device/model|  1 +
 .../disk_tests/nvme_smart/nvme0n1/queue/rotational |  1 +
 test/disk_tests/nvme_smart/nvme0n1/size|  1 +
 .../nvme_smart/nvme0n1_smart_expected.json |  5 +
 test/disk_tests/nvme_smart/nvme0n1_udevadm | 18 ++
 8 files changed, 66 insertions(+)
 create mode 100644 test/disk_tests/nvme_smart/disklist
 create mode 100644 test/disk_tests/nvme_smart/disklist_expected.json
 create mode 100644 test/disk_tests/nvme_smart/nvme0_smart
 create mode 100644 test/disk_tests/nvme_smart/nvme0n1/device/model
 create mode 100644 test/disk_tests/nvme_smart/nvme0n1/queue/rotational
 create mode 100644 test/disk_tests/nvme_smart/nvme0n1/size
 create mode 100644 test/disk_tests/nvme_smart/nvme0n1_smart_expected.json
 create mode 100644 test/disk_tests/nvme_smart/nvme0n1_udevadm

diff --git a/test/disk_tests/nvme_smart/disklist 
b/test/disk_tests/nvme_smart/disklist
new file mode 100644
index 000..d00b90e
--- /dev/null
+++ b/test/disk_tests/nvme_smart/disklist
@@ -0,0 +1 @@
+nvme0n1
diff --git a/test/disk_tests/nvme_smart/disklist_expected.json 
b/test/disk_tests/nvme_smart/disklist_expected.json
new file mode 100644
index 000..ac34d0f
--- /dev/null
+++ b/test/disk_tests/nvme_smart/disklist_expected.json
@@ -0,0 +1,17 @@
+{
+"nvme0n1" : {
+   "wearout" : "N/A",
+   "vendor" : "unknown",
+   "size" : 512000,
+   "journals" : 0,
+   "health" : "PASSED",
+   "serial" : "unknown",
+   "model" : "NVME MODEL 1",
+   "rpm" : 0,
+   "osdid" : -1,
+   "devpath" : "/dev/nvme0n1",
+   "gpt" : 0,
+   "wwn" : "unknown",
+   "type" : "ssd"
+}
+}
diff --git a/test/disk_tests/nvme_smart/nvme0_smart 
b/test/disk_tests/nvme_smart/nvme0_smart
new file mode 100644
index 000..2e6aaea
--- /dev/null
+++ b/test/disk_tests/nvme_smart/nvme0_smart
@@ -0,0 +1,22 @@
+smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.4.19-1-pve] (local build)
+Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
+
+=== START OF SMART DATA SECTION ===
+SMART overall-health self-assessment test result: PASSED
+
+SMART/Health Information (NVMe Log 0x02, NSID 0x)
+Critical Warning:   0x00
+Temperature:32 Celsius
+Available Spare:100%
+Available Spare Threshold:  10%
+Percentage Used:0%
+Data Units Read:1,299,288 [665 GB]
+Data Units Written: 5,592,478 [2.86 TB]
+Host Read Commands: 30,360,807
+Host Write Commands:470,356,196
+Controller Busy Time:   12
+Power Cycles:   98
+Power On Hours: 687
+Unsafe Shutdowns:   21
+Media and Data Integrity Errors:0
+Error Information Log Entries:  0
diff --git a/test/disk_tests/nvme_smart/nvme0n1/device/model 
b/test/disk_tests/nvme_smart/nvme0n1/device/model
new file mode 100644
index 000..9bd6eba
--- /dev/null
+++ b/test/disk_tests/nvme_smart/nvme0n1/device/model
@@ -0,0 +1 @@
+NVME MODEL 1
diff --git a/test/disk_tests/nvme_smart/nvme0n1/queue/rotational 
b/test/disk_tests/nvme_smart/nvme0n1/queue/rotational
new file mode 100644
index 000..573541a
--- /dev/null
+++ b/test/disk_tests/nvme_smart/nvme0n1/queue/rotational
@@ -0,0 +1 @@
+0
diff --git a/test/disk_tests/nvme_smart/nvme0n1/size 
b/test/disk_tests/nvme_smart/nvme0n1/size
new file mode 100644
index 000..83b33d2
--- /dev/null
+++ b/test/disk_tests/nvme_smart/nvme0n1/size
@@ -0,0 +1 @@
+1000
diff --git a/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json 
b/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json
new file mode 100644
index 000..00bece2
--- /dev/null
+++ b/test/disk_tests/nvme_smart/nvme0n1_smart_expected.json
@@ -0,0 +1,5 @@
+{
+"text" : "\nSMART/Health Information (NVMe Log 0x02, NSID 
0x)\nCritical Warning:   0x00\nTemperature: 
   32 Celsius\nAvailable Spare:100%\nAvailable 
Spare Threshold:  10%\nPercentage Used:0%\nData 
Units Read:1,299,288 [665 GB]\nData Units Written:  
   5,592,478 [2.86 TB]\nHost Read Commands: 
30,360,807\nHost Write Commands:470,356,196\nController Busy 
Time:   12\nPower Cycles:   98\nPower On Hours: 
687\nUnsafe Shutdowns:   21\nMedia and Data 
Integrity Errors:0\nError Information Log Entries:  0\n",
+"health" : "PASSED",
+"type" : "text"
+}
diff --git 

[pve-devel] [PATCH storage 2/7] add hdd and smart regression tests

2016-10-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 test/disk_tests/hdd_smart/disklist|   2 +
 test/disk_tests/hdd_smart/disklist_expected.json  |  32 +++
 test/disk_tests/hdd_smart/sda/device/vendor   |   1 +
 test/disk_tests/hdd_smart/sda/queue/rotational|   1 +
 test/disk_tests/hdd_smart/sda/size|   1 +
 test/disk_tests/hdd_smart/sda_health  |   5 +
 test/disk_tests/hdd_smart/sda_smart   |  40 
 test/disk_tests/hdd_smart/sda_smart_expected.json | 246 ++
 test/disk_tests/hdd_smart/sda_udevadm |  11 +
 test/disk_tests/hdd_smart/sdb/device/vendor   |   1 +
 test/disk_tests/hdd_smart/sdb/queue/rotational|   1 +
 test/disk_tests/hdd_smart/sdb/size|   1 +
 test/disk_tests/hdd_smart/sdb_health  |   5 +
 test/disk_tests/hdd_smart/sdb_smart   |  36 
 test/disk_tests/hdd_smart/sdb_smart_expected.json | 216 +++
 test/disk_tests/hdd_smart/sdb_udevadm |  11 +
 16 files changed, 610 insertions(+)
 create mode 100644 test/disk_tests/hdd_smart/disklist
 create mode 100644 test/disk_tests/hdd_smart/disklist_expected.json
 create mode 100644 test/disk_tests/hdd_smart/sda/device/vendor
 create mode 100644 test/disk_tests/hdd_smart/sda/queue/rotational
 create mode 100644 test/disk_tests/hdd_smart/sda/size
 create mode 100644 test/disk_tests/hdd_smart/sda_health
 create mode 100644 test/disk_tests/hdd_smart/sda_smart
 create mode 100644 test/disk_tests/hdd_smart/sda_smart_expected.json
 create mode 100644 test/disk_tests/hdd_smart/sda_udevadm
 create mode 100644 test/disk_tests/hdd_smart/sdb/device/vendor
 create mode 100644 test/disk_tests/hdd_smart/sdb/queue/rotational
 create mode 100644 test/disk_tests/hdd_smart/sdb/size
 create mode 100644 test/disk_tests/hdd_smart/sdb_health
 create mode 100644 test/disk_tests/hdd_smart/sdb_smart
 create mode 100644 test/disk_tests/hdd_smart/sdb_smart_expected.json
 create mode 100644 test/disk_tests/hdd_smart/sdb_udevadm

diff --git a/test/disk_tests/hdd_smart/disklist 
b/test/disk_tests/hdd_smart/disklist
new file mode 100644
index 000..9f6776c
--- /dev/null
+++ b/test/disk_tests/hdd_smart/disklist
@@ -0,0 +1,2 @@
+sda
+sdb
diff --git a/test/disk_tests/hdd_smart/disklist_expected.json 
b/test/disk_tests/hdd_smart/disklist_expected.json
new file mode 100644
index 000..7685f5f
--- /dev/null
+++ b/test/disk_tests/hdd_smart/disklist_expected.json
@@ -0,0 +1,32 @@
+{
+"sdb" : {
+   "devpath" : "/dev/sdb",
+   "size" : 1024000,
+   "gpt" : 1,
+   "osdid" : -1,
+   "rpm" : 7200,
+   "model" : "ST4000NM0033-9ZM170",
+   "vendor" : "ATA",
+   "health" : "PASSED",
+   "type" : "hdd",
+   "wwn" : "0x",
+   "journals" : 0,
+   "wearout" : "N/A",
+   "serial" : ""
+},
+"sda" : {
+   "osdid" : -1,
+   "size" : 1024000,
+   "gpt" : 1,
+   "devpath" : "/dev/sda",
+   "model" : "ST4000DM000-1F2168",
+   "rpm" : 5900,
+   "type" : "hdd",
+   "health" : "PASSED",
+   "vendor" : "ATA",
+   "serial" : "",
+   "wearout" : "N/A",
+   "journals" : 0,
+   "wwn" : "0x"
+}
+}
diff --git a/test/disk_tests/hdd_smart/sda/device/vendor 
b/test/disk_tests/hdd_smart/sda/device/vendor
new file mode 100644
index 000..531030d
--- /dev/null
+++ b/test/disk_tests/hdd_smart/sda/device/vendor
@@ -0,0 +1 @@
+ATA
diff --git a/test/disk_tests/hdd_smart/sda/queue/rotational 
b/test/disk_tests/hdd_smart/sda/queue/rotational
new file mode 100644
index 000..d00491f
--- /dev/null
+++ b/test/disk_tests/hdd_smart/sda/queue/rotational
@@ -0,0 +1 @@
+1
diff --git a/test/disk_tests/hdd_smart/sda/size 
b/test/disk_tests/hdd_smart/sda/size
new file mode 100644
index 000..8bd1af1
--- /dev/null
+++ b/test/disk_tests/hdd_smart/sda/size
@@ -0,0 +1 @@
+2000
diff --git a/test/disk_tests/hdd_smart/sda_health 
b/test/disk_tests/hdd_smart/sda_health
new file mode 100644
index 000..faf4ce3
--- /dev/null
+++ b/test/disk_tests/hdd_smart/sda_health
@@ -0,0 +1,5 @@
+smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.4.21-1-pve] (local build)
+Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
+
+=== START OF READ SMART DATA SECTION ===
+SMART overall-health self-assessment test result: PASSED
diff --git a/test/disk_tests/hdd_smart/sda_smart 
b/test/disk_tests/hdd_smart/sda_smart
new file mode 100644
index 000..a3f8f0a
--- /dev/null
+++ b/test/disk_tests/hdd_smart/sda_smart
@@ -0,0 +1,40 @@
+smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.4.21-1-pve] (local build)
+Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
+
+=== START OF READ SMART DATA SECTION ===
+SMART overall-health self-assessment test result: PASSED
+
+SMART Attributes Data Structure revision number: 10
+Vendor Specific SMART Attributes with Thresholds:
+ID# 

[pve-devel] [PATCH storage 1/7] add disklist test

2016-10-17 Thread Dominik Csapak
and add to makefile

Signed-off-by: Dominik Csapak 
---
 test/Makefile  |   5 +-
 test/disklist_test.pm  | 216 +
 test/run_disk_tests.pl |  10 +++
 3 files changed, 230 insertions(+), 1 deletion(-)
 create mode 100644 test/disklist_test.pm
 create mode 100755 test/run_disk_tests.pl

diff --git a/test/Makefile b/test/Makefile
index 33d5ff8..b44d7be 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,9 @@
 all: test
 
-test: test_zfspoolplugin
+test: test_zfspoolplugin test_disklist
 
 test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
+
+test_disklist: run_disk_tests.pl
+   ./run_disk_tests.pl
diff --git a/test/disklist_test.pm b/test/disklist_test.pm
new file mode 100644
index 000..b2bde49
--- /dev/null
+++ b/test/disklist_test.pm
@@ -0,0 +1,216 @@
+package PVE::Diskmanage::Test;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Diskmanage;
+use PVE::Tools;
+
+use Test::MockModule;
+use Test::More;
+use JSON;
+use Data::Dumper;
+
+my $testcasedir; # current case directory
+my $testcount = 0; # testcount for TAP::Harness
+my $diskmanage_module; # mockmodule for PVE::Diskmanage
+my $print = 0;
+
+sub mocked_run_command {
+my ($cmd, %param) = @_;
+
+my $outputlines = [];
+if (my $ref = ref($cmd)) {
+   if ($cmd->[0] =~ m/udevadm/i) {
+   # simulate udevadm output
+   my $dev = $cmd->[3];
+   $dev =~ s|/sys/block/||;
+   @$outputlines = split(/\n/, read_test_file("${dev}_udevadm"));
+
+   } elsif ($cmd->[0] =~ m/smartctl/i) {
+   # simulate smartctl output
+   my $dev;
+   my $type;
+   if (@$cmd > 3) {
+   $dev = $cmd->[5];
+   $type = 'smart';
+   } else {
+   $dev = $cmd->[2];
+   $type = 'health';
+   }
+   $dev =~ s|/dev/||;
+   @$outputlines = split(/\n/, read_test_file("${dev}_${type}"));
+   } elsif ($cmd->[0] =~ m/sgdisk/i) {
+   # simulate sgdisk
+   die "implement me: @$cmd\n";
+   } elsif ($cmd->[0] =~ m/zpool/i) {
+   # simulate zpool output
+   @$outputlines = split(/\n/, read_test_file('zpool'));
+
+   } elsif ($cmd->[0] =~ m/pvs/i) {
+   # simulate pvs output
+   @$outputlines = split(/\n/, read_test_file('pvs'));
+   } else {
+   print "unexpected run_command call: '@$cmd', aborting\n";
+   die;
+   }
+} else {
+   print "unexpected run_command call: '@$cmd', aborting\n";
+   die;
+}
+
+my $outfunc;
+if ($param{outfunc}) {
+   $outfunc = $param{outfunc};
+   map { &$outfunc(($_)) } @$outputlines;
+
+   return 0;
+}
+}
+
+sub mocked_get_sysdir_info {
+my ($param) = @_;
+
+my $originalsub = $diskmanage_module->original('get_sysdir_info');
+
+$param =~ s|/sys/block|disk_tests/$testcasedir|;
+
+return &$originalsub($param);
+}
+
+sub mocked_dir_glob_foreach {
+my ($dir, $regex, $sub) = @_;
+
+my $lines = [];
+
+# read lines in from file
+if ($dir =~ m{^/sys/block$} ) {
+   @$lines = split(/\n/, read_test_file('disklist'));
+} elsif ($dir =~ m{^/sys/block/([^/]+)}) {
+   @$lines = split(/\n/, read_test_file('partlist'));
+}
+
+foreach my $line (@$lines) {
+   if ($line =~ m/$regex/) {
+   &$sub($line);
+   }
+}
+}
+
+sub mocked_parse_proc_mounts {
+my $text = read_test_file('mounts');
+
+my $mounts = [];
+
+foreach my $line(split(/\n/, $text)) {
+   push @$mounts, [split(/\s+/, $line)];
+}
+
+return $mounts;
+}
+
+sub read_test_file {
+my ($filename) = @_;
+
+if (!-f  "disk_tests/$testcasedir/$filename") {
+   print "file '$testcasedir/$filename' not found\n";
+   return '';
+}
+open (my $fh, '<', "disk_tests/$testcasedir/$filename")
+   or die "Cannot open disk_tests/$testcasedir/$filename: $!";
+
+my $output = <$fh> // '';
+chomp $output if $output;
+while (my $line = <$fh>) {
+   chomp $line;
+   $output .= "\n$line";
+}
+
+return $output;
+}
+
+
+sub test_disk_list {
+my ($testdir) = @_;
+subtest "Test '$testdir'" => sub {
+   my $testcount = 0;
+   $testcasedir = $testdir;
+
+   my $disks;
+   my $expected_disk_list;
+   eval {
+   $disks = PVE::Diskmanage::get_disks();
+   };
+   warn $@ if $@;
+   $expected_disk_list = 
decode_json(read_test_file('disklist_expected.json'));
+
+   print Dumper($disks) if $print;
+   $testcount++;
+   is_deeply($disks, $expected_disk_list, 'disk list should be the same');
+
+   foreach my $disk (sort keys %$disks) {
+   my $smart;
+   my $expected_smart;
+   eval {
+   $smart = PVE::Diskmanage::get_smart_data("/dev/$disk");
+   print Dumper($smart) if $print;
+   

[pve-devel] [PATCH storage 0/7] add disklist and smart regression test

2016-10-17 Thread Dominik Csapak
this patch series adds regression tests
for the disklist and smart parsing

this needs my previous patch series to work
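
(With the series applied, the tests are hooked into test/Makefile by patch 1/7, so they can be run from the test/ directory with "make test_disklist", or directly via "./run_disk_tests.pl".)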

Dominik Csapak (7):
  add disklist test
  add hdd and smart regression tests
  add ssd and smart regression tests
  add nvme regression test
  add sas regression tests
  add cciss regression test
  add disk usages regression test

 test/Makefile  |   5 +-
 test/disk_tests/cciss/cciss!c0d0/device/model  |   1 +
 test/disk_tests/cciss/cciss!c0d0/device/vendor |   1 +
 test/disk_tests/cciss/cciss!c0d0/queue/rotational  |   1 +
 test/disk_tests/cciss/cciss!c0d0/size  |   1 +
 test/disk_tests/cciss/cciss!c0d0_udevadm   |  32 +++
 test/disk_tests/cciss/disklist |   1 +
 test/disk_tests/cciss/disklist_expected.json   |  17 ++
 test/disk_tests/hdd_smart/disklist |   2 +
 test/disk_tests/hdd_smart/disklist_expected.json   |  32 +++
 test/disk_tests/hdd_smart/sda/device/vendor|   1 +
 test/disk_tests/hdd_smart/sda/queue/rotational |   1 +
 test/disk_tests/hdd_smart/sda/size |   1 +
 test/disk_tests/hdd_smart/sda_health   |   5 +
 test/disk_tests/hdd_smart/sda_smart|  40 
 test/disk_tests/hdd_smart/sda_smart_expected.json  | 246 
 test/disk_tests/hdd_smart/sda_udevadm  |  11 +
 test/disk_tests/hdd_smart/sdb/device/vendor|   1 +
 test/disk_tests/hdd_smart/sdb/queue/rotational |   1 +
 test/disk_tests/hdd_smart/sdb/size |   1 +
 test/disk_tests/hdd_smart/sdb_health   |   5 +
 test/disk_tests/hdd_smart/sdb_smart|  36 +++
 test/disk_tests/hdd_smart/sdb_smart_expected.json  | 216 +
 test/disk_tests/hdd_smart/sdb_udevadm  |  11 +
 test/disk_tests/nvme_smart/disklist|   1 +
 test/disk_tests/nvme_smart/disklist_expected.json  |  17 ++
 test/disk_tests/nvme_smart/nvme0_smart |  22 ++
 test/disk_tests/nvme_smart/nvme0n1/device/model|   1 +
 .../disk_tests/nvme_smart/nvme0n1/queue/rotational |   1 +
 test/disk_tests/nvme_smart/nvme0n1/size|   1 +
 .../nvme_smart/nvme0n1_smart_expected.json |   5 +
 test/disk_tests/nvme_smart/nvme0n1_udevadm |  18 ++
 test/disk_tests/sas/disklist   |   1 +
 test/disk_tests/sas/disklist_expected.json |  17 ++
 test/disk_tests/sas/sda/device/model   |   1 +
 test/disk_tests/sas/sda/device/vendor  |   1 +
 test/disk_tests/sas/sda/queue/rotational   |   1 +
 test/disk_tests/sas/sda/size   |   1 +
 test/disk_tests/sas/sda_smart  |  20 ++
 test/disk_tests/sas/sda_smart_expected.json|   5 +
 test/disk_tests/sas/sda_udevadm|  31 +++
 test/disk_tests/ssd_smart/disklist |   5 +
 test/disk_tests/ssd_smart/disklist_expected.json   |  77 +++
 test/disk_tests/ssd_smart/sda/device/vendor|   1 +
 test/disk_tests/ssd_smart/sda/queue/rotational |   1 +
 test/disk_tests/ssd_smart/sda/size |   1 +
 test/disk_tests/ssd_smart/sda_smart|  39 
 test/disk_tests/ssd_smart/sda_smart_expected.json  | 236 +++
 test/disk_tests/ssd_smart/sda_udevadm  |  11 +
 test/disk_tests/ssd_smart/sdb/device/vendor|   1 +
 test/disk_tests/ssd_smart/sdb/queue/rotational |   1 +
 test/disk_tests/ssd_smart/sdb/size |   1 +
 test/disk_tests/ssd_smart/sdb_smart|  41 
 test/disk_tests/ssd_smart/sdb_smart_expected.json  | 256 +
 test/disk_tests/ssd_smart/sdb_udevadm  |  12 +
 test/disk_tests/ssd_smart/sdc/device/vendor|   1 +
 test/disk_tests/ssd_smart/sdc/queue/rotational |   1 +
 test/disk_tests/ssd_smart/sdc/size |   1 +
 test/disk_tests/ssd_smart/sdc_smart|  16 ++
 test/disk_tests/ssd_smart/sdc_smart_expected.json  |  16 ++
 test/disk_tests/ssd_smart/sdc_udevadm  |  11 +
 test/disk_tests/ssd_smart/sdd/device/vendor|   1 +
 test/disk_tests/ssd_smart/sdd/queue/rotational |   1 +
 test/disk_tests/ssd_smart/sdd/size |   1 +
 test/disk_tests/ssd_smart/sdd_smart|  40 
 test/disk_tests/ssd_smart/sdd_smart_expected.json  |  16 ++
 test/disk_tests/ssd_smart/sdd_udevadm  |  11 +
 test/disk_tests/ssd_smart/sde/device/vendor|   1 +
 test/disk_tests/ssd_smart/sde/queue/rotational |   1 +
 test/disk_tests/ssd_smart/sde/size |   1 +
 test/disk_tests/ssd_smart/sde_smart|  19 ++
 test/disk_tests/ssd_smart/sde_smart_expected.json  |  46 
 test/disk_tests/ssd_smart/sde_udevadm  |  11 +
 test/disk_tests/usages/disklist|   6 +
 test/disk_tests/usages/disklist_expected.json  |  97 
 

[pve-devel] [PATCH storage 6/7] add cciss regression test

2016-10-17 Thread Dominik Csapak
Signed-off-by: Dominik Csapak 
---
 test/disk_tests/cciss/cciss!c0d0/device/model |  1 +
 test/disk_tests/cciss/cciss!c0d0/device/vendor|  1 +
 test/disk_tests/cciss/cciss!c0d0/queue/rotational |  1 +
 test/disk_tests/cciss/cciss!c0d0/size |  1 +
 test/disk_tests/cciss/cciss!c0d0_udevadm  | 32 +++
 test/disk_tests/cciss/disklist|  1 +
 test/disk_tests/cciss/disklist_expected.json  | 17 
 7 files changed, 54 insertions(+)
 create mode 100644 test/disk_tests/cciss/cciss!c0d0/device/model
 create mode 100644 test/disk_tests/cciss/cciss!c0d0/device/vendor
 create mode 100644 test/disk_tests/cciss/cciss!c0d0/queue/rotational
 create mode 100644 test/disk_tests/cciss/cciss!c0d0/size
 create mode 100644 test/disk_tests/cciss/cciss!c0d0_udevadm
 create mode 100644 test/disk_tests/cciss/disklist
 create mode 100644 test/disk_tests/cciss/disklist_expected.json

diff --git a/test/disk_tests/cciss/cciss!c0d0/device/model b/test/disk_tests/cciss/cciss!c0d0/device/model
new file mode 100644
index 000..676b97d
--- /dev/null
+++ b/test/disk_tests/cciss/cciss!c0d0/device/model
@@ -0,0 +1 @@
+LOGICAL_VOLUME
diff --git a/test/disk_tests/cciss/cciss!c0d0/device/vendor b/test/disk_tests/cciss/cciss!c0d0/device/vendor
new file mode 100644
index 000..e8b2ad6
--- /dev/null
+++ b/test/disk_tests/cciss/cciss!c0d0/device/vendor
@@ -0,0 +1 @@
+HP
diff --git a/test/disk_tests/cciss/cciss!c0d0/queue/rotational b/test/disk_tests/cciss/cciss!c0d0/queue/rotational
new file mode 100644
index 000..d00491f
--- /dev/null
+++ b/test/disk_tests/cciss/cciss!c0d0/queue/rotational
@@ -0,0 +1 @@
+1
diff --git a/test/disk_tests/cciss/cciss!c0d0/size b/test/disk_tests/cciss/cciss!c0d0/size
new file mode 100644
index 000..f599e28
--- /dev/null
+++ b/test/disk_tests/cciss/cciss!c0d0/size
@@ -0,0 +1 @@
+10
diff --git a/test/disk_tests/cciss/cciss!c0d0_udevadm b/test/disk_tests/cciss/cciss!c0d0_udevadm
new file mode 100644
index 000..21e8899
--- /dev/null
+++ b/test/disk_tests/cciss/cciss!c0d0_udevadm
@@ -0,0 +1,32 @@
+P: /devices/pci:40/:40:13.0/:45:00.0/cciss0/c0d0/block/cciss!c0d0
+N: cciss/c0d0
+S: disk/by-id/cciss-SERIAL111
+S: disk/by-id/wwn-0x
+S: disk/by-path/pci-:45:00.0-cciss-disk0
+E: DEVLINKS=/dev/disk/by-id/cciss-0 /dev/disk/by-id/wwn-0x0/dev/disk/by-path/pci-:45:00.0-cciss-disk0
+E: DEVNAME=/dev/cciss/c0d0
+E: DEVPATH=/devices/pci:40/:40:13.0/:45:00.0/cciss0/c0d0/block/cciss!c0d0
+E: DEVTYPE=disk
+E: ID_BUS=cciss
+E: ID_MODEL=LOGICAL_VOLUME
+E: ID_MODEL_ENC=LOGICAL\x20VOLUME\x20\x20
+E: ID_PART_TABLE_TYPE=gpt
+E: ID_PART_TABLE_UUID=cfe72deb-65d1-487c-bdfa-8af66dc1a969
+E: ID_PATH=pci-:45:00.0-cciss-disk0
+E: ID_PATH_TAG=pci-_45_00_0-cciss-disk0
+E: ID_REVISION=7.24
+E: ID_SCSI=1
+E: ID_SCSI_SERIAL=SERIAL1
+E: ID_SERIAL=SERIAL111
+E: ID_SERIAL_SHORT=SER111
+E: ID_TYPE=disk
+E: ID_VENDOR=HP
+E: ID_VENDOR_ENC=HP\x20\x20\x20\x20\x20\x20
+E: ID_WWN=0x
+E: ID_WWN_VENDOR_EXTENSION=0x
+E: ID_WWN_WITH_EXTENSION=0x
+E: MAJOR=104
+E: MINOR=0
+E: SUBSYSTEM=block
+E: TAGS=:systemd:
+E: USEC_INITIALIZED=2247
diff --git a/test/disk_tests/cciss/disklist b/test/disk_tests/cciss/disklist
new file mode 100644
index 000..aa174aa
--- /dev/null
+++ b/test/disk_tests/cciss/disklist
@@ -0,0 +1 @@
+cciss!c0d0
diff --git a/test/disk_tests/cciss/disklist_expected.json b/test/disk_tests/cciss/disklist_expected.json
new file mode 100644
index 000..fb071ef
--- /dev/null
+++ b/test/disk_tests/cciss/disklist_expected.json
@@ -0,0 +1,17 @@
+{
+"cciss!c0d0" : {
+   "wearout" : "N/A",
+   "vendor" : "HP",
+   "rpm" : -1,
+   "type" : "unknown",
+   "serial" : "SER111",
+   "osdid" : -1,
+   "health" : "UNKNOWN",
+   "model" : "LOGICAL_VOLUME",
+   "size" : 5120,
+   "wwn" : "0x",
+   "journals" : 0,
+   "gpt" : 1,
+   "devpath" : "/dev/cciss/c0d0"
+}
+}
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 4/5] use model from udevadm

2016-10-17 Thread Dominik Csapak
we want this, because the model in /sys/block/<device>/device/model
is limited to 16 characters

and since the model is not always in the udevadm output (nvme),
we also read the model from the sysfs model file as a fallback
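
For illustration, the effective lookup order is roughly the following
(a minimal stand-alone sketch with made-up model strings, not the exact
Diskmanage code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Prefer the udev-reported model (not length-limited), fall back to the
    # sysfs model file (truncated to 16 characters), else 'unknown'.
    sub pick_model {
        my ($udev_model, $sys_model) = @_;
        return $udev_model || $sys_model || 'unknown';
    }

    # nvme case: no ID_MODEL in the udevadm output, so the sysfs value is used
    print pick_model(undef, 'Some NVMe Model'), "\n";
    # scsi/sata case: the udev value wins over the truncated sysfs value
    print pick_model('INTEL SSDSC2BB016T4', 'INTEL SSDSC2BB01'), "\n";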

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index ad1a896..c8706b7 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -255,6 +255,10 @@ sub get_udev_info {
$data->{usb} = 1;
 }
 
+if ($info =~ m/^E: ID_MODEL=(.+)$/m) {
+   $data->{model} = $1;
+}
+
 $data->{wwn} = 'unknown';
 if ($info =~ m/^E: ID_WWN=(.*)$/m) {
$data->{wwn} = $1;
@@ -413,7 +417,7 @@ sub get_disks {
 
if ($type eq 'ssd') {
# if we have an ssd we try to get the wearout indicator
-	    my $wearval = get_wear_leveling_info($smartdata->{attributes}, $sysdata->{model});
+	    my $wearval = get_wear_leveling_info($smartdata->{attributes}, $data->{model} || $sysdir->{model});
$wearout = $wearval if $wearval;
}
};
@@ -429,7 +433,7 @@ sub get_disks {
 
$disklist->{$dev} = {
vendor => $sysdata->{vendor},
-   model => $sysdata->{model},
+   model => $data->{model} || $sysdata->{model},
size => $sysdata->{size},
serial => $data->{serial},
gpt => $data->{gpt},
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 5/5] add default rotational value

2016-10-17 Thread Dominik Csapak
because if the file does not exist,
we get a Perl error when comparing an uninitialized
value
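
A minimal sketch of the failure mode and the default (hypothetical
stand-alone code; the real helper used by Diskmanage.pm is the shared
file_read_firstline):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Stand-in for file_read_firstline: returns undef when the file is missing.
    sub file_read_firstline {
        my ($file) = @_;
        open(my $fh, '<', $file) or return undef;
        my $line = <$fh>;
        close($fh);
        chomp $line if defined($line);
        return $line;
    }

    my $sysdir = "/sys/block/sda";
    # Without a default, $rotational can be undef and the numeric comparison
    # below warns about an uninitialized value. (This sketch uses defined-or,
    # so a literal "0" from an SSD is preserved.)
    my $rotational = file_read_firstline("$sysdir/queue/rotational") // -1;
    my $type = $rotational == 1 ? 'hdd' : $rotational == 0 ? 'ssd' : 'unknown';
    print "$type\n";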

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index c8706b7..0bd0556 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -282,7 +282,7 @@ sub get_sysdir_info {
 $data->{size} = $size * 512;
 
 # dir/queue/rotational should be 1 for hdd, 0 for ssd
-$data->{rotational} = file_read_firstline("$sysdir/queue/rotational");
+$data->{rotational} = file_read_firstline("$sysdir/queue/rotational") || -1;
 
 $data->{vendor} = file_read_firstline("$sysdir/device/vendor") || 'unknown';
 $data->{model} = file_read_firstline("$sysdir/device/model") || 'unknown';
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 1/5] use /sys/block/ path for udev instead of name

2016-10-17 Thread Dominik Csapak
since we iterate over the entries in /sys/block
it makes sense to use this path

this should fix #1099
because udevadm does not accept
 -n cciss!c0d0 (because it only looks in /dev for this)
but does accept
 -p /sys/block/cciss!c0d0
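
For reference, a minimal stand-alone sketch of the new style of invocation
(assuming udevadm is available in PATH; not the exact Diskmanage code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Query udev properties by sysfs path instead of by device node name,
    # so entries such as cciss!c0d0 (no /dev/cciss!c0d0 node) still resolve.
    sub udev_query {
        my ($dev) = @_;                     # e.g. 'sda' or 'cciss!c0d0'
        my $path = "/sys/block/$dev";
        open(my $fh, '-|', 'udevadm', 'info', '-p', $path, '--query', 'all')
            or die "failed to run udevadm: $!\n";
        my $out = do { local $/; <$fh> };
        close($fh);
        return $out;
    }

    print udev_query('sda');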

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 6cc5e1c..8382045 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -217,7 +217,7 @@ sub get_udev_info {
 my $info = "";
 my $data = {};
 eval {
-   run_command([$UDEVADM, 'info', '-n', $dev, '--query', 'all'], outfunc => sub {
+   run_command([$UDEVADM, 'info', '-p', $dev, '--query', 'all'], outfunc => sub {
my ($line) = @_;
$info .= "$line\n";
});
@@ -375,7 +375,7 @@ sub get_disks {
  $dev !~ m/^nvme\d+n\d+$/ &&
  $dev !~ m/^cciss\!c\d+d\d+$/;
 
-   my $data = get_udev_info($dev);
+   my $data = get_udev_info("/sys/block/$dev");
return if !defined($data);
my $devpath = $data->{devpath};
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 2/5] move directory test into get_sysdir_info

2016-10-17 Thread Dominik Csapak
because it logically belongs there; this
also makes testing easier

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 8382045..dd2591c 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -266,6 +266,8 @@ sub get_udev_info {
 sub get_sysdir_info {
 my ($sysdir) = @_;
 
+return undef if ! -d "$sysdir/device";
+
 my $data = {};
 
 my $size = file_read_firstline("$sysdir/size");
@@ -381,10 +383,8 @@ sub get_disks {
 
my $sysdir = "/sys/block/$dev";
 
-   return if ! -d "$sysdir/device";
-
# we do not want iscsi devices
-   return if readlink($sysdir) =~ m|host[^/]*/session[^/]*|;
+   return if -l $sysdir && readlink($sysdir) =~ m|host[^/]*/session[^/]*|;
 
my $sysdata = get_sysdir_info($sysdir);
return if !defined($sysdata);
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 0/5] disklist regression test preparation

2016-10-17 Thread Dominik Csapak
this patch series prepares the disklist module
for my upcoming regression tests

the only behavioural change is the parameter of udevadm (patch 1/5)
from
udevadm -n device_name
to
udevadm -p /path/to/blockdev

which should also fix a bug (see the commit message for details)

Dominik Csapak (5):
  use /sys/block/ path for udev instead of name
  move directory test into get_sysdir_info
  make dir_is_empty a proper sub
  use model from udevadm
  add default rotational value

 PVE/Diskmanage.pm | 54 +-
 1 file changed, 29 insertions(+), 25 deletions(-)

-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage 3/5] make dir_is_empty a proper sub

2016-10-17 Thread Dominik Csapak
this allows us to mock the sub later,
which we need for testing
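
For context, a rough sketch of the kind of mock this enables in a test
(hypothetical test code, not part of this series):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More tests => 1;
    use PVE::Diskmanage;

    # Because dir_is_empty is now a named package sub rather than a lexical
    # closure inside get_disks, a test can temporarily replace it.
    {
        no warnings 'redefine';
        local *PVE::Diskmanage::dir_is_empty = sub { return 0 };
        ok(!PVE::Diskmanage::dir_is_empty('/sys/block/sda/holders'),
           'mocked dir_is_empty reports the holders dir as non-empty');
    }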

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 34 +-
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index dd2591c..ad1a896 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -324,6 +324,21 @@ sub get_wear_leveling_info {
 return $wearout;
 }
 
+sub dir_is_empty {
+my ($dir) = @_;
+
+my $dh = IO::Dir->new ($dir);
+return 1 if !$dh;
+
+while (defined(my $tmp = $dh->read)) {
+   next if $tmp eq '.' || $tmp eq '..';
+   $dh->close;
+   return 0;
+}
+$dh->close;
+return 1;
+}
+
 sub get_disks {
 my ($disk, $nosmart) = @_;
 my $disklist = {};
@@ -342,21 +357,6 @@ sub get_disks {
return $mounted->{$dev};
 };
 
-my $dir_is_empty = sub {
-   my ($dir) = @_;
-
-   my $dh = IO::Dir->new ($dir);
-   return 1 if !$dh;
-
-   while (defined(my $tmp = $dh->read)) {
-   next if $tmp eq '.' || $tmp eq '..';
-   $dh->close;
-   return 0;
-   }
-   $dh->close;
-   return 1;
-};
-
 my $journalhash = get_ceph_journals();
 
 my $zfslist = get_zfs_devices();
@@ -479,7 +479,7 @@ sub get_disks {
 
$journal_count++ if $journalhash->{"$partpath/$part"};
 
-   if (!&$dir_is_empty("$sysdir/$part/holders") && !$found_lvm)  {
+   if (!dir_is_empty("$sysdir/$part/holders") && !$found_lvm)  {
$found_dm = 1;
}
});
@@ -493,7 +493,7 @@ sub get_disks {
# multipath, software raid, etc.
# this check comes in last, to show more specific info
# if we have it
-   $used = 'Device Mapper' if !$used && !&$dir_is_empty("$sysdir/holders");
+   $used = 'Device Mapper' if !$used && !dir_is_empty("$sysdir/holders");
 
$disklist->{$dev}->{used} = $used if $used;
$disklist->{$dev}->{osdid} = $osdid;
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Include name of template in LXC conf file?

2016-10-17 Thread Brendan Jurd
On Mon, 17 Oct 2016 at 15:26 Dietmar Maurer  wrote:

> > Is there any reason why the LXC conf file should not retain the name of
> the
> > original template file?  I am prepared to submit a patch for this, if
> that
> > would be acceptable.
>
> We only store the OS type, for example:
>
> ostype: debian
> arch: amd64
>
> Storing the original template seems misleading to me, because users
> often update the container, so the template does not contain the
> same content as the container.
>
>
Did anybody find it misleading with OpenVZ containers?

I think the normal usage of the word "template" gives exactly the right
impression here.  Just as with a document template or a database template,
you create a new thing based on a template, and then you go on to modify
the thing.  I don't see how somebody would arrive at the idea that the
container must be perpetually identical to the template.

Would it alleviate your concern if the config variable were named something
like "original_template" or "creation_template"?

> I guess it would be better to use the latest available
> template instead?
>

Not necessarily.  Say for example you've got a legacy application that is
only known to work under Debian 7, or you run heavily customised templates,
or you've used one of the Virtual Appliance templates.  In any of those
cases just starting with "latest available Debian" would not be the smart
move.

Cheers,
BJ
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel