[pve-devel] [PATCH manager v2 1/2] ceph: extend the pool view

2020-06-03 Thread Alwin Antreich
to add the pg_autoscale_mode, since it is activated by default in Ceph
Octopus and emits a warning (ceph status) if a pool has too many PGs.
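The handler copies only whitelisted, defined pool attributes into the result. A minimal Python sketch of that filtering step (illustrative only; `ATTR_LIST` and `filter_pool` are hypothetical names, not part of the patch):

```python
# Keep only whitelisted keys that are present and defined in the pool entry,
# mirroring the $attr_list loop in the lspools API handler.
ATTR_LIST = ["pool", "pool_name", "size", "min_size",
             "pg_num", "crush_rule", "pg_autoscale_mode"]

def filter_pool(entry):
    return {attr: entry[attr] for attr in ATTR_LIST
            if entry.get(attr) is not None}
```

Attributes outside the whitelist (or present but undefined) are dropped, which keeps the API return schema stable.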

Signed-off-by: Alwin Antreich 
---
v1 -> v2: split addition of pg_autoscale_mode and pveceph pool
  output format

 PVE/API2/Ceph.pm  | 13 -
 www/manager6/ceph/Pool.js | 19 +++
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index afc1bdbd..d872c7c0 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -607,6 +607,7 @@ __PACKAGE__->register_method ({
pool => { type => 'integer' },
pool_name => { type => 'string' },
size => { type => 'integer' },
+   pg_autoscale_mode => { type => 'string', optional => 1 },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -636,9 +637,19 @@ __PACKAGE__->register_method ({
}
 
my $data = [];
+   my $attr_list = [
+   'pool',
+   'pool_name',
+   'size',
+   'min_size',
+   'pg_num',
+   'crush_rule',
+   'pg_autoscale_mode',
+   ];
+
foreach my $e (@{$res->{pools}}) {
my $d = {};
-   foreach my $attr (qw(pool pool_name size min_size pg_num crush_rule)) {
+   foreach my $attr (@$attr_list) {
$d->{$attr} = $e->{$attr} if defined($e->{$attr});
}
 
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index e81b5974..db1828a6 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -107,10 +107,21 @@ Ext.define('PVE.node.CephPoolList', {
dataIndex: 'size'
},
{
-   text: '# Placement Groups', // pg_num',
-   width: 180,
-   align: 'right',
-   dataIndex: 'pg_num'
+   text: 'Placement Groups',
+   columns: [
+   {
+   text: '# of PGs', // pg_num',
+   width: 100,
+   align: 'right',
+   dataIndex: 'pg_num'
+   },
+   {
+   text: 'Autoscale Mode',
+   width: 140,
+   align: 'right',
+   dataIndex: 'pg_autoscale_mode'
+   },
+   ]
},
{
text: 'CRUSH Rule',
-- 
2.26.2


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager v2 2/2] ceph: extend pveceph pool ls

2020-06-03 Thread Alwin Antreich
to present more data on pools and a more nicely formatted output on the
command line.
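The new `print_api_result` call renders the selected columns using the schema titles. A hedged Python sketch of that kind of column-based table rendering (`render_table` is illustrative, not the actual PVE::CLIFormatter behavior):

```python
def render_table(rows, columns, titles):
    # Compute one width per column (header vs. widest cell), then print a
    # header line followed by left-aligned rows, similar in spirit to the
    # formatted output of `pveceph pool ls`.
    widths = [max(len(titles[c]), *(len(str(r.get(c, ""))) for r in rows))
              for c in columns]
    lines = [" ".join(titles[c].ljust(w) for c, w in zip(columns, widths))]
    for r in rows:
        lines.append(" ".join(str(r.get(c, "")).ljust(w)
                              for c, w in zip(columns, widths)))
    return "\n".join(lines)
```

Driving the layout from the schema titles (rather than hard-coded printf widths, as in the removed code) keeps CLI output and API schema in sync.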

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph.pm   | 14 ++
 PVE/CLI/pveceph.pm | 24 ++--
 2 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index d872c7c0..d7e5892c 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -604,10 +604,16 @@ __PACKAGE__->register_method ({
items => {
type => "object",
properties => {
-   pool => { type => 'integer' },
-   pool_name => { type => 'string' },
-   size => { type => 'integer' },
-   pg_autoscale_mode => { type => 'string', optional => 1 },
+   pool => { type => 'integer', title => 'ID' },
+   pool_name => { type => 'string', title => 'Name' },
+   size => { type => 'integer', title => 'Size' },
+   min_size => { type => 'integer', title => 'Min Size' },
+   pg_num => { type => 'integer', title => 'PG Num' },
+   pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
+   crush_rule => { type => 'integer', title => 'Crush Rule' },
+   crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+   percent_used => { type => 'number', title => '%-Used' },
+   bytes_used => { type => 'integer', title => 'Used' },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 92500253..b4c8b79c 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -182,16 +182,20 @@ our $cmddef = {
 init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
 pool => {
ls => [ 'PVE::API2::Ceph', 'lspools', [], { node => $nodename }, sub {
-   my $res = shift;
-
-   printf("%-20s %10s %10s %10s %10s %20s\n", "Name", "size", "min_size",
-   "pg_num", "%-used", "used");
-   foreach my $p (sort {$a->{pool_name} cmp $b->{pool_name}} @$res) {
-   printf("%-20s %10d %10d %10d %10.2f %20d\n", $p->{pool_name},
-   $p->{size}, $p->{min_size}, $p->{pg_num},
-   $p->{percent_used}, $p->{bytes_used});
-   }
-   }],
+   my ($data, $schema, $options) = @_;
+   PVE::CLIFormatter::print_api_result($data, $schema,
+   [
+   'pool_name',
+   'size',
+   'min_size',
+   'pg_num',
+   'pg_autoscale_mode',
+   'crush_rule_name',
+   'percent_used',
+   'bytes_used',
+   ],
+   $options);
+   }, $PVE::RESTHandler::standard_output_options],
create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
 },
-- 
2.26.2




[pve-devel] [PATCH manager v2] Make PVE6 compatible with supported ceph versions

2020-06-03 Thread Alwin Antreich
Luminous, Nautilus and Octopus. In Octopus, the mon_status command was
dropped. The ceph status output was also cleaned up and no longer provides
the mgrmap and monmap.

The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged to one place.
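The factored-out helper always fetches status and health, but queries the mon/mgr dumps separately only where `status` no longer embeds them. A hedged Python sketch of that version gating (`run_mon_command` is a stand-in for the RADOS mon_command calls):

```python
def cluster_status(run_mon_command, ceph_major_version):
    # 'status' and 'health detail' are always fetched; on Octopus (15) and
    # later, 'status' no longer embeds the monmap/mgrmap, so they are
    # fetched explicitly via 'mon dump' and 'mgr dump'.
    status = run_mon_command("status")
    status["health"] = run_mon_command("health detail")
    if ceph_major_version >= 15:
        status["monmap"] = run_mon_command("mon dump")
        status["mgrmap"] = run_mon_command("mgr dump")
    return status
```

Keeping the extra dumps conditional preserves compatibility with Luminous and Nautilus, where the additional round-trips are unnecessary.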

Signed-off-by: Alwin Antreich 
---
v1 -> v2: make mon/mgr dump optional for Ceph versions prior to Octopus

 PVE/API2/Ceph.pm  |  5 +
 PVE/API2/Ceph/MON.pm  |  6 +++---
 PVE/API2/Ceph/OSD.pm  |  2 +-
 PVE/API2/Cluster/Ceph.pm  |  5 +
 PVE/Ceph/Tools.pm | 17 +
 www/manager6/ceph/StatusDetail.js | 12 
 6 files changed, 31 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 85a04101..afc1bdbd 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -580,10 +580,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_inited();
 
-   my $rados = PVE::RADOS->new();
-   my $status = $rados->mon_command({ prefix => 'status' });
-   $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
-   return $status;
+   return PVE::Ceph::Tools::ceph_cluster_status();
 }});
 
 __PACKAGE__->register_method ({
diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 3baeac52..b33b8700 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -130,7 +130,7 @@ __PACKAGE__->register_method ({
my $monhash = PVE::Ceph::Services::get_services_info("mon", $cfg, $rados);
 
if ($rados) {
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
 
my $mons = $monstat->{monmap}->{mons};
foreach my $d (@$mons) {
@@ -338,7 +338,7 @@ __PACKAGE__->register_method ({
my $monsection = "mon.$monid";
 
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
my $monlist = $monstat->{monmap}->{mons};
my $monhash = PVE::Ceph::Services::get_services_info('mon', $cfg, $rados);
 
@@ -356,7 +356,7 @@ __PACKAGE__->register_method ({
# reopen with longer timeout
$rados = PVE::RADOS->new(timeout => PVE::Ceph::Tools::get_config('long_rados_timeout'));
$monhash = PVE::Ceph::Services::get_services_info('mon', $cfg, $rados);
-   $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   $monstat = $rados->mon_command({ prefix => 'quorum_status' });
$monlist = $monstat->{monmap}->{mons};
 
my $addr;
diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index a514c502..ceaed129 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -344,7 +344,7 @@ __PACKAGE__->register_method ({
 
# get necessary ceph infos
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
 
die "unable to get fsid\n" if !$monstat->{monmap} || !$monstat->{monmap}->{fsid};
my $fsid = $monstat->{monmap}->{fsid};
diff --git a/PVE/API2/Cluster/Ceph.pm b/PVE/API2/Cluster/Ceph.pm
index e18d421e..c0277221 100644
--- a/PVE/API2/Cluster/Ceph.pm
+++ b/PVE/API2/Cluster/Ceph.pm
@@ -142,10 +142,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_inited();
 
-   my $rados = PVE::RADOS->new();
-   my $status = $rados->mon_command({ prefix => 'status' });
-   $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
-   return $status;
+   return PVE::Ceph::Tools::ceph_cluster_status();
 }
 });
 
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 3273c7d1..a73b791b 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -468,4 +468,21 @@ sub get_real_flag_name {
 return $flagmap->{$flag} // $flag;
 }
 
+sub ceph_cluster_status {
+my ($rados) = @_;
+$rados = PVE::RADOS->new() if !$rados;
+
+my $ceph_version = get_local_version(1);
+my $status = $rados->mon_command({ prefix => 'status' });
+
+$status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
+
+if ($ceph_version && $ceph_version >= 15) {
+   $status->{monmap} = $rados->mon_command({ prefix => 'mon dump' });
+   $status->{mgrmap} = $rados->mon_command({ prefix => 'mgr dump' });
+}
+
+return $status;
+}
+
 1;
diff --git a/www/manager6/ceph/StatusDetail.js b/www/manager6/ceph/StatusDetail.js
index 8185e3bb..211b0d6f 100644
--- a/www/manager6/ceph

Re: [pve-devel] [PATCH manager 2/2] error message on failed config dump command

2020-06-02 Thread Alwin Antreich
On Tue, Jun 02, 2020 at 02:05:26PM +0200, Thomas Lamprecht wrote:
> On 5/28/20 4:41 PM, Alwin Antreich wrote:
> > Prior Ceph Nautilus the ceph config dump command was not available.
> > This patch provides a more meaningful info for the user.
> > 
> 
> what is the verbatim error message you get from ceph in that case?
> 
> As you're now assuming that any error is dump not available, even if
> it could be something totally different?
It said: __mon_command failed - command not known (500)__. I just want
to give a clearer message than one saying the mon_command is not known.



[pve-devel] [PATCH manager] ceph: extend the pool view

2020-05-29 Thread Alwin Antreich
to add the pg_autoscale_mode, since it is activated by default in Ceph
Octopus and emits a warning (ceph status) if a pool has too many PGs.
It also presents a more nicely formatted output on the command line.

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph.pm  | 25 +
 PVE/CLI/pveceph.pm| 24 ++--
 www/manager6/ceph/Pool.js | 19 +++
 3 files changed, 50 insertions(+), 18 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index fc4ee535..4d66c88a 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -605,9 +605,16 @@ __PACKAGE__->register_method ({
items => {
type => "object",
properties => {
-   pool => { type => 'integer' },
-   pool_name => { type => 'string' },
-   size => { type => 'integer' },
+   pool => { type => 'integer', title => 'ID' },
+   pool_name => { type => 'string', title => 'Name' },
+   size => { type => 'integer', title => 'Size' },
+   min_size => { type => 'integer', title => 'Min Size' },
+   pg_num => { type => 'integer', title => 'PG Num' },
+   pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
+   crush_rule => { type => 'integer', title => 'Crush Rule' },
+   crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+   percent_used => { type => 'number', title => '%-Used' },
+   bytes_used => { type => 'integer', title => 'Used' },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -637,9 +644,19 @@ __PACKAGE__->register_method ({
}
 
my $data = [];
+   my $attr_list = [
+   'pool',
+   'pool_name',
+   'size',
+   'min_size',
+   'pg_num',
+   'crush_rule',
+   'pg_autoscale_mode',
+   ];
+
foreach my $e (@{$res->{pools}}) {
my $d = {};
-   foreach my $attr (qw(pool pool_name size min_size pg_num crush_rule)) {
+   foreach my $attr (@$attr_list) {
$d->{$attr} = $e->{$attr} if defined($e->{$attr});
}
 
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index eac3743a..eda3dfc2 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -170,16 +170,20 @@ our $cmddef = {
 init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
 pool => {
ls => [ 'PVE::API2::Ceph', 'lspools', [], { node => $nodename }, sub {
-   my $res = shift;
-
-   printf("%-20s %10s %10s %10s %10s %20s\n", "Name", "size", "min_size",
-   "pg_num", "%-used", "used");
-   foreach my $p (sort {$a->{pool_name} cmp $b->{pool_name}} @$res) {
-   printf("%-20s %10d %10d %10d %10.2f %20d\n", $p->{pool_name},
-   $p->{size}, $p->{min_size}, $p->{pg_num},
-   $p->{percent_used}, $p->{bytes_used});
-   }
-   }],
+   my ($data, $schema, $options) = @_;
+   PVE::CLIFormatter::print_api_result($data, $schema,
+   [
+   'pool_name',
+   'size',
+   'min_size',
+   'pg_num',
+   'pg_autoscale_mode',
+   'crush_rule_name',
+   'percent_used',
+   'bytes_used',
+   ],
+   $options);
+   }, $PVE::RESTHandler::standard_output_options],
create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
 },
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index e81b5974..db1828a6 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -107,10 +107,21 @@ Ext.define('PVE.node.CephPoolList', {
dataIndex: 'size'
},
{
-   text: '# Placement Groups', // pg_num',
-   width: 180,
-   align: 'right',
-   dataIndex: 'pg_num'
+   text: 'Placement Groups',
+   columns: [
+   {
+   text: '# of PGs', // pg_num',
+   width: 100,
+   align: 'right',
+   dataIndex: 'pg_num'
+   },
+   {
+   text: 'Autoscale Mode',
+   width: 140,
+   align: 'right',
+   dataIndex: 'pg_autoscale_mode'
+   },
+   ]
},
{
text: 'CRUSH Rule',
-- 
2.26.2




[pve-devel] [PATCH manager 2/2] error message on failed config dump command

2020-05-28 Thread Alwin Antreich
Prior to Ceph Nautilus, the ceph config dump command was not available.
This patch provides more meaningful info for the user.
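The patch wraps the RADOS call in eval and re-raises with context via die. An equivalent hedged sketch in Python (`mon_command` and `config_dump` are illustrative stand-ins, not PVE code):

```python
def config_dump(mon_command):
    # Re-raise a low-level "command not known" failure with a clearer hint,
    # analogous to the eval { ... }; die "..." if $@; pattern in the patch.
    try:
        return mon_command("config dump")
    except Exception as err:
        raise RuntimeError(f"ceph config dump not available, {err}") from err
```

Chaining the original error keeps the verbatim monitor message available while the user-facing text explains what actually went wrong.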

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index afc1bdbd..fc4ee535 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -231,7 +231,8 @@ __PACKAGE__->register_method ({
PVE::Ceph::Tools::check_ceph_inited();
 
my $rados = PVE::RADOS->new();
-   my $res = $rados->mon_command( { prefix => 'config dump', format => 'json' });
+   my $res = eval { $rados->mon_command( { prefix => 'config dump', format => 'json' }) };
+   die "ceph config dump not available, $@\n" if $@;
foreach my $entry (@$res) {
$entry->{can_update_at_runtime} = $entry->{can_update_at_runtime}? 1 : 0; # JSON::true/false -> 1/0
}
-- 
2.26.2




[pve-devel] [PATCH manager 1/2] Make PVE6 compatible with supported ceph versions

2020-05-28 Thread Alwin Antreich
Luminous, Nautilus and Octopus. In Octopus, the mon_status command was
dropped. The ceph status output was also cleaned up and no longer provides
the mgrmap and monmap.

The rados queries used in the ceph status API endpoints (cluster / node)
were factored out and merged to one place.

Signed-off-by: Alwin Antreich 
---
note: as discussed off-list with Dominik, the status API call could also
  be split into multiple API calls. To provide mgrmap, monmap and
  status separately.

 PVE/API2/Ceph.pm  |  5 +
 PVE/API2/Ceph/MON.pm  |  6 +++---
 PVE/API2/Ceph/OSD.pm  |  2 +-
 PVE/API2/Cluster/Ceph.pm  |  5 +
 PVE/Ceph/Tools.pm | 13 +
 www/manager6/ceph/StatusDetail.js |  7 ---
 6 files changed, 23 insertions(+), 15 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 85a04101..afc1bdbd 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -580,10 +580,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_inited();
 
-   my $rados = PVE::RADOS->new();
-   my $status = $rados->mon_command({ prefix => 'status' });
-   $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
-   return $status;
+   return PVE::Ceph::Tools::ceph_cluster_status();
 }});
 
 __PACKAGE__->register_method ({
diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 3baeac52..b33b8700 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -130,7 +130,7 @@ __PACKAGE__->register_method ({
my $monhash = PVE::Ceph::Services::get_services_info("mon", $cfg, $rados);
 
if ($rados) {
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
 
my $mons = $monstat->{monmap}->{mons};
foreach my $d (@$mons) {
@@ -338,7 +338,7 @@ __PACKAGE__->register_method ({
my $monsection = "mon.$monid";
 
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
my $monlist = $monstat->{monmap}->{mons};
my $monhash = PVE::Ceph::Services::get_services_info('mon', $cfg, $rados);
 
@@ -356,7 +356,7 @@ __PACKAGE__->register_method ({
# reopen with longer timeout
$rados = PVE::RADOS->new(timeout => PVE::Ceph::Tools::get_config('long_rados_timeout'));
$monhash = PVE::Ceph::Services::get_services_info('mon', $cfg, $rados);
-   $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   $monstat = $rados->mon_command({ prefix => 'quorum_status' });
$monlist = $monstat->{monmap}->{mons};
 
my $addr;
diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index a514c502..ceaed129 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -344,7 +344,7 @@ __PACKAGE__->register_method ({
 
# get necessary ceph infos
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
 
die "unable to get fsid\n" if !$monstat->{monmap} || !$monstat->{monmap}->{fsid};
my $fsid = $monstat->{monmap}->{fsid};
diff --git a/PVE/API2/Cluster/Ceph.pm b/PVE/API2/Cluster/Ceph.pm
index e18d421e..c0277221 100644
--- a/PVE/API2/Cluster/Ceph.pm
+++ b/PVE/API2/Cluster/Ceph.pm
@@ -142,10 +142,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_inited();
 
-   my $rados = PVE::RADOS->new();
-   my $status = $rados->mon_command({ prefix => 'status' });
-   $status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
-   return $status;
+   return PVE::Ceph::Tools::ceph_cluster_status();
 }
 });
 
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 3273c7d1..b4a83f2e 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -468,4 +468,17 @@ sub get_real_flag_name {
 return $flagmap->{$flag} // $flag;
 }
 
+sub ceph_cluster_status {
+my ($rados) = @_;
+$rados = PVE::RADOS->new() if !$rados;
+
+my $status = $rados->mon_command({ prefix => 'status' });
+
+$status->{health} = $rados->mon_command({ prefix => 'health', detail => 'detail' });
+$status->{monmap} = $rados->mon_command({ prefix => 'mon dump' });
+$status->{mgrmap} = $rados->mon_command({ prefix => 'mgr dump' });
+
+return $status;
+}
+
 1;
diff --git a/www/manager6/ceph/StatusDetail.js b/www/manager6/ceph/StatusDetail.js
index 8185e3bb..6561eba3 100644
--- a/www/manager6

[pve-devel] [PATCH storage] Fix #2737: Can't call method "mode"

2020-05-13 Thread Alwin Antreich
on an undefined value at /usr/share/perl5/PVE/Storage/Plugin.pm line 928

This error message crops up when a file is deleted after getting the
file list and before the loop has processed the file entry.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index e9da403..cec136e 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -925,7 +925,7 @@ my $get_subdir_files = sub {
 
my $st = File::stat::stat($fn);
 
-   next if S_ISDIR($st->mode);
+   next if (!$st || S_ISDIR($st->mode));
 
my $info;
 
-- 
2.26.2




[pve-devel] [PATCH storage] Fix: backup: relax file name matching regex

2020-05-12 Thread Alwin Antreich
The rework of the backup file detection logic missed the non-standard
file name case. This patch allows restoring backups with non-standard file
names, though the config extraction fails since the guest type is unknown.
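The relaxed logic is a two-stage match: first determine format and compression from the extension alone, then try the strict vzdump naming pattern only to recover the guest type. A Python sketch of that logic (the regexes here are simplified stand-ins for the Perl patterns, e.g. COMPRESSOR_RE):

```python
import re

# Simplified stand-ins for the Perl patterns in PVE::Storage.
FORMAT_RE = re.compile(r"\.(tgz$|tar|vma)(?:\.(gz|lzo|zst))?$")
VZDUMP_RE = re.compile(
    r"vzdump-(lxc|openvz|qemu)-\d+-\d{4}_\d{2}_\d{2}-\d{2}_\d{2}_\d{2}")

def archive_info(volid):
    m = FORMAT_RE.search(volid)
    if not m:
        raise ValueError("couldn't determine format and compression type")
    # Extension alone is enough for format/compression ...
    info = {"format": m.group(1), "compression": m.group(2), "type": "unknown"}
    # ... while the guest type needs the strict vzdump file name.
    t = VZDUMP_RE.search(volid)
    if t:
        info["type"] = t.group(1)
    return info
```

Splitting the match this way is exactly why arbitrarily named archives become restorable while their type stays "unknown".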

Signed-off-by: Alwin Antreich 
---
Note: This fixes the issue reported on the forum.
  
https://forum.proxmox.com/threads/proxmox-zst-backup-format-cant-be-restored-from-gui.69643/

 PVE/Storage.pm| 10 +++---
 test/archive_info_test.pm | 11 ++-
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 87550b1..2a8deaf 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1394,9 +1394,13 @@ sub archive_info {
 my $info;
 
 my $volid = basename($archive);
-if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?$/) {
-   $info = decompressor_info($2, $3);
-   $info->{type} = $1;
+if ($volid =~ /\.(tgz$|tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?$/) {
+   $info = decompressor_info($1, $2);
+   $info->{type} = 'unknown';
+
+   if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})/) {
+   $info->{type} = $1;
+   }
 } else {
die "ERROR: couldn't determine format and compression type\n";
 }
diff --git a/test/archive_info_test.pm b/test/archive_info_test.pm
index 283fe47..7db02d1 100644
--- a/test/archive_info_test.pm
+++ b/test/archive_info_test.pm
@@ -38,6 +38,16 @@ my $tests = [
'compression'  => 'gz',
},
 },
+{
+   description => 'Backup archive, none, tgz',
+   archive => "backup/whatever-the-name_is_here.tgz",
+   expected=> {
+   'type' => 'unknown',
+   'format'   => 'tar',
+   'decompressor' => ['tar', '-z'],
+   'compression'  => 'gz',
+   },
+},
 ];
 
 # add new compression fromats to test
@@ -88,7 +98,6 @@ my $non_bkp_suffix = {
 'openvz' => [ 'zip', 'tgz.lzo', 'tar.bz2', 'zip.gz', '', ],
 'lxc'=> [ 'zip', 'tgz.lzo', 'tar.bz2', 'zip.gz', '', ],
 'qemu'   => [ 'vma.xz', 'vms.gz', 'vmx.zst', '', ],
-'none'   => [ 'tar.gz', ],
 };
 
 # create tests for failed matches
-- 
2.26.2




[pve-devel] [PATCH i18n] update German translation

2020-05-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 de.po | 80 +--
 1 file changed, 39 insertions(+), 41 deletions(-)

diff --git a/de.po b/de.po
index babde6d..82722fc 100644
--- a/de.po
+++ b/de.po
@@ -460,7 +460,7 @@ msgstr "Base DN für Gruppen"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:16
 msgid "Base Domain Name"
-msgstr "Base Domain Name"
+msgstr "Basis Domänen Name"
 
 #: pve-manager/www/manager6/storage/LVMEdit.js:125
 msgid "Base storage"
@@ -479,14 +479,12 @@ msgid "Before Queue Filtering"
 msgstr ""
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:137
-#, fuzzy
 msgid "Bind Password"
-msgstr "Kennwort"
+msgstr "Bind Kennwort"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:130
-#, fuzzy
 msgid "Bind User"
-msgstr "Benutzer"
+msgstr "Bind Benutzer"
 
 #: pmg-gui/js/QuarantineView.js:29 pmg-gui/js/SpamContextMenu.js:37
 #: pmg-gui/js/SpamQuarantine.js:339 pmg-gui/js/UserBlackWhiteList.js:55
@@ -1054,7 +1052,7 @@ msgstr "Kopiere Information"
 
 #: pve-manager/www/manager6/dc/TokenEdit.js:162
 msgid "Copy Secret Value"
-msgstr ""
+msgstr "Kopiere geheimen Wert"
 
 #: proxmox-widget-toolkit/Utils.js:535
 msgid "Copy data"
@@ -1298,11 +1296,13 @@ msgstr "Default Relay"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:152
 msgid "Default Sync Options"
-msgstr ""
+msgstr "Standard-Sync Optionen"
 
 #: pve-manager/www/manager6/dc/SyncWindow.js:138
 msgid "Default sync options can be set by editing the realm."
 msgstr ""
+"Standard-Synchronisierungsoptionen können durch Bearbeiten des Realm "
+"festgelegt werden"
 
 #: pve-manager/www/manager6/dc/OptionView.js:152
 msgid "Defaults to origin"
@@ -1557,7 +1557,7 @@ msgstr "Keine gültige Cluster Information!"
 #: pve-manager/www/manager6/node/ACME.js:699
 #: pve-manager/www/manager6/storage/CIFSEdit.js:180
 msgid "Domain"
-msgstr "Domain"
+msgstr "Domäne"
 
 #: pve-manager/www/manager6/ceph/StatusDetail.js:52
 msgid "Down"
@@ -1578,7 +1578,7 @@ msgstr "Nur Unicast Adressen erlaubt"
 
 #: pve-manager/www/manager6/dc/CorosyncLinkEdit.js:230
 msgid "Duplicate link number not allowed."
-msgstr ""
+msgstr "Doppelt vergebene Link Nummern nicht erlaubt."
 
 #: pve-manager/www/manager6/grid/Replication.js:382
 msgid "Duration"
@@ -1609,9 +1609,8 @@ msgid "E-Mail addresses of '{0}'"
 msgstr "E-Mail Adressen von '{0}'"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:142
-#, fuzzy
 msgid "E-Mail attribute"
-msgstr "EMail Attribut Name(n)"
+msgstr "EMail Attribut"
 
 #: pve-manager/www/manager6/qemu/HDEfi.js:64
 #: pve-manager/www/manager6/qemu/HardwareView.js:253
@@ -1732,7 +1731,7 @@ msgstr "Aktivieren"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:231
 msgid "Enable new users"
-msgstr ""
+msgstr "Neue Benutzer aktivieren"
 
 #: pve-manager/www/manager6/lxc/MPEdit.js:269
 msgid "Enable quota"
@@ -2097,7 +2096,7 @@ msgstr "Ceph Cluster Konfiguration"
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:178
 #: pve-manager/www/manager6/dc/SyncWindow.js:102
 msgid "Full"
-msgstr ""
+msgstr "Voll"
 
 #: pve-manager/www/manager6/window/Clone.js:179
 msgid "Full Clone"
@@ -2158,9 +2157,8 @@ msgid "Group"
 msgstr "Gruppe"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:206
-#, fuzzy
 msgid "Group Filter"
-msgstr "Virenfilter"
+msgstr "Gruppenfilter"
 
 #: pve-manager/www/manager6/dc/ACLView.js:24
 #: pve-manager/www/manager6/dc/ACLView.js:209
@@ -2168,9 +2166,8 @@ msgid "Group Permission"
 msgstr "Gruppenrechte"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:193
-#, fuzzy
 msgid "Group classes"
-msgstr "Gruppen Objektklasse"
+msgstr "Gruppenklassen"
 
 #: pmg-gui/js/LDAPGroupEditor.js:69
 msgid "Group member"
@@ -2182,7 +2179,7 @@ msgstr "Gruppen Objektklasse"
 
 #: pve-manager/www/manager6/dc/AuthEditLDAP.js:148
 msgid "Groupname attr."
-msgstr ""
+msgstr "Gruppenname Attr."
 
 #: pmg-gui/js/LDAPConfig.js:610 pve-manager/www/manager6/dc/AuthEditLDAP.js:164
 #: pve-manager/www/manager6/dc/Config.js:101
@@ -2316,7 +2313,7 @@ msgstr "Interne Hosts verstecken"
 
 #: pve-manager/www/manager6/dc/ACMEPluginEdit.js:198
 msgid "Hint"
-msgstr ""
+msgstr "Hinweis"
 
 #: pve-manager/www/manager6/lxc/Options.js:145
 #: pve-manager/www/manager6/qemu/Options.js:322
@@ -2498,7 +2495,7 @@ msgstr "Eingehende Mails"
 
 #: pmg-

[pve-devel] [PATCH v3 docs] add section about backup compression algorithms

2020-05-07 Thread Alwin Antreich
as a short description of the different compression algorithms used by vzdump.
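The documented extension table maps backup file endings to the compression algorithm vzdump used. That mapping can be expressed as a small lookup (an illustrative sketch, not actual vzdump code):

```python
# Map backup file extensions to the compression algorithm vzdump used,
# per the table added to vzdump.adoc. Note ".tgz" does not literally end
# with ".gz", so both entries are needed.
COMPRESSION_BY_EXT = {
    ".zst": "zstd",
    ".gz": "gzip",
    ".tgz": "gzip",
    ".lzo": "lzo",
}

def compression_of(filename):
    for ext, algo in COMPRESSION_BY_EXT.items():
        if filename.endswith(ext):
            return algo
    return None  # not compressed by vzdump
```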

Signed-off-by: Alwin Antreich 
---
v1 -> v2:
* add Aaron's suggestions

v2 -> v3:
* remove 'And' at the beginning of the sentence, as per Aaron's preference ;)
  otherwise the same as v2

 vzdump.adoc | 35 ++-
 1 file changed, 34 insertions(+), 1 deletion(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 404ad09..1c39680 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -147,6 +147,39 @@ That way it is possible to store several backup in the same
 directory. The parameter `maxfiles` can be used to specify the
 maximum number of backups to keep.
 
+Backup File Compression
+---
+
+The backup file can be compressed with one of the following algorithms: `lzo`
+footnote:[Lempel–Ziv–Oberhumer a lossless data compression algorithm
+https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
+based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
+footnote:[Zstandard a lossless data compression algorithm
+https://en.wikipedia.org/wiki/Zstandard].
+
+Currently, Zstandard (zstd) is the fastest of these three algorithms.
+Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
+are more widely used and often installed by default.
+
+You can install pigz footnote:[pigz - parallel implementation of gzip
+https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
+performance due to multi-threading. For pigz & zstd, the amount of
+threads/cores can be adjusted. See the
+xref:vzdump_configuration[configuration options] below.
+
+The extension of the backup file name can usually be used to determine which
+compression algorithm has been used to create the backup.
+
+|===
+|.zst | Zstandard (zstd) compression
+|.gz or .tgz | gzip compression
+|.lzo | lzo compression
+|===
+
+If the backup file name doesn't end with one of the above file extensions, then
+it was not compressed by vzdump.
+
+
 [[vzdump_restore]]
 Restore
 ---
@@ -203,7 +236,7 @@ per configured storage, this can be done with:
 # pvesm set STORAGEID --bwlimit restore=KIBs
 
 
-
+[[vzdump_configuration]]
 Configuration
 -
 
-- 
2.26.2




[pve-devel] [PATCH v2 docs] add section about backup compression algorithms

2020-05-07 Thread Alwin Antreich
as a short description of the different compression algorithms used by vzdump.

Signed-off-by: Alwin Antreich 
---
v1 -> v2:
* incorporate Aaron's suggestions
  https://pve.proxmox.com/pipermail/pve-devel/2020-May/043481.html

 vzdump.adoc | 35 ++-
 1 file changed, 34 insertions(+), 1 deletion(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 404ad09..bc1f4a9 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -147,6 +147,39 @@ That way it is possible to store several backup in the same
 directory. The parameter `maxfiles` can be used to specify the
 maximum number of backups to keep.
 
+Backup File Compression
+---
+
+The backup file can be compressed with one of the following algorithms: `lzo`
+footnote:[Lempel–Ziv–Oberhumer a lossless data compression algorithm
+https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
+based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
+footnote:[Zstandard a lossless data compression algorithm
+https://en.wikipedia.org/wiki/Zstandard].
+
+Currently, Zstandard (zstd) is the fastest of these three algorithms. And
+multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
+are more widely used and often installed by default.
+
+You can install pigz footnote:[pigz - parallel implementation of gzip
+https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
+performance due to multi-threading. For pigz & zstd, the amount of
+threads/cores can be adjusted. See the
+xref:vzdump_configuration[configuration options] below.
+
+The extension of the backup file name can usually be used to determine which
+compression algorithm has been used to create the backup.
+
+|===
+|.zst | Zstandard (zstd) compression
+|.gz or .tgz | gzip compression
+|.lzo | lzo compression
+|===
+
+If the backup file name doesn't end with one of the above file extensions, then
+it was not compressed by vzdump.
+
+
 [[vzdump_restore]]
 Restore
 ---
@@ -203,7 +236,7 @@ per configured storage, this can be done with:
 # pvesm set STORAGEID --bwlimit restore=KIBs
 
 
-
+[[vzdump_configuration]]
 Configuration
 -
 
-- 
2.26.2




[pve-devel] [PATCH docs] add section about backup compression algorithms

2020-05-07 Thread Alwin Antreich
as a short description of the different compression algorithms used by vzdump.

Signed-off-by: Alwin Antreich 
---
 vzdump.adoc | 36 +++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/vzdump.adoc b/vzdump.adoc
index 404ad09..3fd9ca9 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -147,6 +147,40 @@ That way it is possible to store several backup in the same
 directory. The parameter `maxfiles` can be used to specify the
 maximum number of backups to keep.
 
+Backup File Compression
+---
+
+The backup file can be compressed with one of the following algorithms: lzo
+footnote:[Lempel–Ziv–Oberhumer a lossless data compression algorithm
+https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], gzip footnote:[gzip -
+based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or zstd
+footnote:[Zstandard a lossless data compression algorithm
+https://en.wikipedia.org/wiki/Zstandard].
+
+Currently, Zstandard (zstd) is the fastest of these three algorithms. It also
+has the advantage of being multi-threaded, while lzo and gzip are not. In
+contrast to zstd, lzo and gzip are more widely used and are often installed by
+default.
+
+You can install pigz footnote:[pigz - parallel implementation of gzip
+https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
+performance due to multi-threading. For pigz and zstd, the number of
+threads/cores can be adjusted. See the
+xref:vzdump_configuration[configuration options] below.
+
+The extension of the backup file name can usually be used to determine which
+compression algorithm was used to create the backup.
+
+|===
+|.zst | Zstandard (zstd) compression
+|.gz or .tgz | gzip compression
+|.lzo | lzo compression
+|===
+
+If the backup file doesn't end with one of the above file extensions, then the
+backup was not compressed by vzdump.
+
+
 [[vzdump_restore]]
 Restore
 ---
@@ -203,7 +237,7 @@ per configured storage, this can be done with:
 # pvesm set STORAGEID --bwlimit restore=KIBs
 
 
-
+[[vzdump_configuration]]
 Configuration
 -
 
-- 
2.26.2




[pve-devel] [PATCH manager v2] Fix #1210: ceph: extend pveceph purge

2020-05-05 Thread Alwin Antreich
to clean service directories as well as disable and stop Ceph services.
Additionally, provide the option to remove crash and log information.

This patch is also in addition to #2607, as the current cleanup doesn't
allow re-configuring Ceph without manual steps during purge.

Signed-off-by: Alwin Antreich 
---
v1 -> v2:
* incorporate Thomas' suggestions. Thanks.
- add warning for failed ceph connection
- use grep instead of map
- change $ceph variable name to $service in purge methods

 PVE/CLI/pveceph.pm | 48 ++-
 PVE/Ceph/Tools.pm  | 71 ++
 2 files changed, 100 insertions(+), 19 deletions(-)

diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 064ae545..448d3ec1 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -18,6 +18,7 @@ use PVE::Storage;
 use PVE::Tools qw(run_command);
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::Ceph::Tools;
+use PVE::Ceph::Services;
 use PVE::API2::Ceph;
 use PVE::API2::Ceph::FS;
 use PVE::API2::Ceph::MDS;
@@ -49,25 +50,58 @@ __PACKAGE__->register_method ({
 parameters => {
additionalProperties => 0,
properties => {
+   logs => {
+   description => 'Additionally purge Ceph logs, /var/log/ceph.',
+   type => 'boolean',
+   optional => 1,
+   },
+   crash => {
+   description => 'Additionally purge Ceph crash logs, 
/var/lib/ceph/crash.',
+   type => 'boolean',
+   optional => 1,
+   },
},
 },
 returns => { type => 'null' },
 code => sub {
my ($param) = @_;
 
-   my $monstat;
+   my $message;
+   my $pools = [];
+   my $monstat = {};
+   my $mdsstat = {};
+   my $osdstat = [];
 
eval {
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   $pools = PVE::Ceph::Tools::ls_pools(undef, $rados);
+   $monstat = PVE::Ceph::Services::get_services_info('mon', undef, 
$rados);
+   $mdsstat = PVE::Ceph::Services::get_services_info('mds', undef, 
$rados);
+   $osdstat = $rados->mon_command({ prefix => 'osd metadata' });
};
-   my $err = $@;
+   warn "Could not connect: $@" if $@;
+
+   my $osd = grep { $_->{hostname} eq $nodename } @$osdstat;
+   my $mds = grep { $mdsstat->{$_}->{host} eq $nodename } keys %$mdsstat;
+   my $mon = grep { $monstat->{$_}->{host} eq $nodename } keys %$monstat;
+
+   # no pools = no data
+   $message .= "- remove pools, this will !!DESTROY DATA!!\n" if @$pools;
+   $message .= "- remove active OSD on $nodename\n" if $osd;
+   $message .= "- remove active MDS on $nodename\n" if $mds;
+   $message .= "- remove other MONs, $nodename is not the last MON\n"
+   if scalar(keys %$monstat) > 1 && $mon;
+
+   # display all steps at once
+   die "Unable to purge Ceph!\n\nTo continue:\n$message" if $message;
 
-   die "detected running ceph services- unable to purge data\n"
-   if !$err;
+   my $services = PVE::Ceph::Services::get_local_services();
+   $services->{mon} = $monstat if $mon;
+   $services->{crash}->{$nodename} = { direxists => 1 } if $param->{crash};
+   $services->{logs}->{$nodename} = { direxists => 1 } if $param->{logs};
 
-   # fixme: this is dangerous - should we really support this function?
-   PVE::Ceph::Tools::purge_all_ceph_files();
+   PVE::Ceph::Tools::purge_all_ceph_services($services);
+   PVE::Ceph::Tools::purge_all_ceph_files($services);
 
return undef;
 }});
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index e6225b78..acec746a 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -11,6 +11,8 @@ use JSON;
 use PVE::Tools qw(run_command dir_glob_foreach);
 use PVE::Cluster qw(cfs_read_file);
 use PVE::RADOS;
+use PVE::Ceph::Services;
+use PVE::CephConfig;
 
 my $ccname = 'ceph'; # ceph cluster name
 my $ceph_cfgdir = "/etc/ceph";
@@ -42,6 +44,7 @@ my $config_hash = {
 ceph_bootstrap_mds_keyring => $ceph_bootstrap_mds_keyring,
 ceph_mds_data_dir => $ceph_mds_data_dir,
 long_rados_timeout => 60,
+ceph_cfgpath => $ceph_cfgpath,
 };
 
 sub get_local_version {
@@ -89,20 +92,64 @@ sub get_config {
 }
 
 sub purge_all_ceph_files {
-# fixme: this is very dangerous - should we really support this function?
-
-unlink $ceph_cfgpath;
-
-unlink $pve_ceph_cfgpath;
-unlink $pve_ckeyring_path;
-unlink $pve_mon_key_path;
-
-unlink $ceph_bootstrap_osd_keyring;
-unlink $ceph_bootstrap_mds_keyring;
+my ($services) = @_;
+my $is_local_mon;
+my $monlist = [ split(',', 
PVE:
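
The confirmation logic in this patch collects every blocking condition into a
single message before dying, so the admin sees all required steps at once
instead of hitting them one by one. A rough shell sketch of that pattern —
the counts are hypothetical inputs standing in for the real RADOS queries:

```shell
# Build the complete "To continue:" message from the detected cluster state,
# as the patched `pveceph purge` does, instead of failing on the first blocker.
# Arguments: <pool count> <local OSD count> <local MDS count> <total MON count>
purge_blockers() {
    msg=""
    if [ "$1" -gt 0 ]; then msg="${msg}- remove pools, this will !!DESTROY DATA!!\n"; fi
    if [ "$2" -gt 0 ]; then msg="${msg}- remove active OSD on this node\n"; fi
    if [ "$3" -gt 0 ]; then msg="${msg}- remove active MDS on this node\n"; fi
    if [ "$4" -gt 1 ]; then msg="${msg}- remove other MONs, this node is not the last MON\n"; fi
    printf '%b' "$msg"
}

purge_blockers 1 1 0 3
```

If the resulting message is empty, nothing blocks the purge and it can proceed.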

[pve-devel] [PATCH manager] ceph: extend pveceph purge

2020-05-03 Thread Alwin Antreich
to clean service directories as well as disable and stop Ceph services.
Additionally, provide the option to remove crash and log information.

This patch is in addition to #2607, as the current cleanup doesn't allow
re-configuring Ceph without manual steps during purge.

Signed-off-by: Alwin Antreich 
---
 PVE/CLI/pveceph.pm | 47 +-
 PVE/Ceph/Tools.pm  | 71 ++
 2 files changed, 99 insertions(+), 19 deletions(-)

diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 064ae545..e77cca2b 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -18,6 +18,7 @@ use PVE::Storage;
 use PVE::Tools qw(run_command);
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::Ceph::Tools;
+use PVE::Ceph::Services;
 use PVE::API2::Ceph;
 use PVE::API2::Ceph::FS;
 use PVE::API2::Ceph::MDS;
@@ -49,25 +50,57 @@ __PACKAGE__->register_method ({
 parameters => {
additionalProperties => 0,
properties => {
+   logs => {
+   description => 'Additionally purge Ceph logs, /var/log/ceph.',
+   type => 'boolean',
+   optional => 1,
+   },
+   crash => {
+   description => 'Additionally purge Ceph crash logs, 
/var/lib/ceph/crash.',
+   type => 'boolean',
+   optional => 1,
+   },
},
 },
 returns => { type => 'null' },
 code => sub {
my ($param) = @_;
 
-   my $monstat;
+   my $message;
+   my $pools = [];
+   my $monstat = {};
+   my $mdsstat = {};
+   my $osdstat = [];
 
eval {
my $rados = PVE::RADOS->new();
-   my $monstat = $rados->mon_command({ prefix => 'mon_status' });
+   $pools = PVE::Ceph::Tools::ls_pools(undef, $rados);
+   $monstat = PVE::Ceph::Services::get_services_info('mon', undef, 
$rados);
+   $mdsstat = PVE::Ceph::Services::get_services_info('mds', undef, 
$rados);
+   $osdstat = $rados->mon_command({ prefix => 'osd metadata' });
};
-   my $err = $@;
 
-   die "detected running ceph services- unable to purge data\n"
-   if !$err;
+   my $osd = map { $_->{hostname} eq $nodename ? 1 : () } @$osdstat;
+   my $mds = map { $mdsstat->{$_}->{host} eq $nodename ? 1 : () } keys 
%$mdsstat;
+   my $mon = map { $monstat->{$_}->{host} eq $nodename ? 1 : () } keys 
%$monstat;
+
+   # no pools = no data
+   $message .= "- remove pools, this will !!DESTROY DATA!!\n" if @$pools;
+   $message .= "- remove active OSD on $nodename\n" if $osd;
+   $message .= "- remove active MDS on $nodename\n" if $mds;
+   $message .= "- remove other MONs, $nodename is not the last MON\n"
+   if scalar(keys %$monstat) > 1 && $mon;
+
+   # display all steps at once
+   die "Unable to purge Ceph!\n\nTo continue:\n$message" if $message;
+
+   my $services = PVE::Ceph::Services::get_local_services();
+   $services->{mon} = $monstat if $mon;
+   $services->{crash}->{$nodename} = { direxists => 1 } if $param->{crash};
+   $services->{logs}->{$nodename} = { direxists => 1 } if $param->{logs};
 
-   # fixme: this is dangerous - should we really support this function?
-   PVE::Ceph::Tools::purge_all_ceph_files();
+   PVE::Ceph::Tools::purge_all_ceph_services($services);
+   PVE::Ceph::Tools::purge_all_ceph_files($services);
 
return undef;
 }});
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index e6225b78..09d22d36 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -11,6 +11,8 @@ use JSON;
 use PVE::Tools qw(run_command dir_glob_foreach);
 use PVE::Cluster qw(cfs_read_file);
 use PVE::RADOS;
+use PVE::Ceph::Services;
+use PVE::CephConfig;
 
 my $ccname = 'ceph'; # ceph cluster name
 my $ceph_cfgdir = "/etc/ceph";
@@ -42,6 +44,7 @@ my $config_hash = {
 ceph_bootstrap_mds_keyring => $ceph_bootstrap_mds_keyring,
 ceph_mds_data_dir => $ceph_mds_data_dir,
 long_rados_timeout => 60,
+ceph_cfgpath => $ceph_cfgpath,
 };
 
 sub get_local_version {
@@ -89,20 +92,64 @@ sub get_config {
 }
 
 sub purge_all_ceph_files {
-# fixme: this is very dangerous - should we really support this function?
-
-unlink $ceph_cfgpath;
-
-unlink $pve_ceph_cfgpath;
-unlink $pve_ckeyring_path;
-unlink $pve_mon_key_path;
-
-unlink $ceph_bootstrap_osd_keyring;
-unlink $ceph_bootstrap_mds_keyring;
+my ($services) = @_;
+my $is_local_mon;
+my $monlist = [ split(',', 
PVE::CephConfig::get_monaddr_list($pve_ceph_cfgpath)) ];
+
+foreach my $ceph (keys %$services) {
+   my $type = $services->{$ceph};
+   next if (!%$type);
+
+   foreach my $name (keys %$type) {
+   my $dir_exists = $type->

Re: [pve-devel] [PATCH docs] add documenation for ldap syncing

2020-04-30 Thread Alwin Antreich
My suggestions inline.

On Thu, Apr 30, 2020 at 01:14:27PM +0200, Dominik Csapak wrote:
> explaining the main Requirements and limitations, as well as the
> most important sync options
> 
> Signed-off-by: Dominik Csapak 
> ---
>  pveum.adoc | 47 +++
>  1 file changed, 47 insertions(+)
> 
> diff --git a/pveum.adoc b/pveum.adoc
> index c89d4b8..5881fa9 100644
> --- a/pveum.adoc
> +++ b/pveum.adoc
> @@ -170,6 +170,53 @@ A server and authentication domain need to be specified. 
> Like with
>  ldap an optional fallback server, optional port, and SSL
>  encryption can be configured.
>  
> +[[pveum_ldap_sync]]
> +Syncing LDAP-based realms
> +~
> +
> +It is possible to sync users and groups for ldap based realms using
s/ldap/LDAP

> +  pveum sync 
> +or in the `Authentication` panel of the GUI to the user.cfg.
> +
> +Requirements and limitations
> +
> +
> +The `bind_dn` will be used to query the users and groups, so this account
> +should be able to see all desired entries.
s/will be/is/

> +
> +The names of the users and groups (configurable via `user_attr` and
> +`group_name_attr` respectively) have to adhere to the limitations of usual
> +users and groups in the config.
For me, this is hard to read. It may be better in two sentences. And
what does it mean, adhere to the limitations?

eg:
The user and group names have to adhere to the limitation of the
configuration.  Configurable via `user_attr` and `group_name_attr`
respectively.

> +
> +Groups will be synced with `-$realm` attached to the name, to avoid naming
s/will be/are/

> +conflicts. Please make sure that a sync does not overwrite manually created
> +groups.
> +
> +Options
> +^^^
> +
> +The main options for syncing are:
> +
> +* `dry-run`: No data will actually be synced. This is useful if you want to
> +  see which users and groups would get synced to the user.cfg. This is set
> +  when you click `Preview` in the GUI.
s/will actually/is/

> +
> +* `enable-new`: If set, the newly synced users are enabled and can login.
> +  The default is `true`.
> +
> +* `full`: If set, the sync usses the LDAP Directory as source of truth,
s/usses/uses/
s/as source/as a source/

> +  overwriting information set manually in the user.cfg and deleting users
> +  and groups which were not returned. If not set, only new data
s/were not returned/are not returned/

> +  will be written to the config, and no stale users will be deleted.
s/will be/is/

> +
> +* `purge`: If set, sync removes all corresponding ACLs when removing users
> +  and groups. This is only useful with the option `full`.
> +
> +* `scope`: The scope of what to sync. Can be either `users`, `groups` or
s/Can be/It can be/

> +  `both`.
> +
> +These options either to be set either as parameters, or as defaults, via the
These options are either set as parameters or as defaults, via the

> +realm option `sync-defaults-options`.
>  
>  [[pveum_tfa_auth]]
>  Two-factor authentication
> -- 
> 2.20.1

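
The `-$realm` suffixing of synced groups discussed in the review above can be
illustrated with a trivial helper; the group and realm names here are made up:

```shell
# Synced groups get "-<realm>" appended to their name, to avoid clashes
# with manually created local groups of the same name.
synced_group_name() {
    printf '%s-%s\n' "$1" "$2"   # $1 = LDAP group name, $2 = realm name
}

synced_group_name "admins" "company-ldap"
```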


[pve-devel] [PATCH storage v5 04/12] test: list_volumes

2020-04-28 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Co-Authored-by: Dominic Jaeger 
Signed-off-by: Alwin Antreich 
---
 test/list_volumes_test.pm | 519 ++
 test/run_plugin_tests.pl  |   6 +-
 2 files changed, 524 insertions(+), 1 deletion(-)
 create mode 100644 test/list_volumes_test.pm

diff --git a/test/list_volumes_test.pm b/test/list_volumes_test.pm
new file mode 100644
index 000..a215617
--- /dev/null
+++ b/test/list_volumes_test.pm
@@ -0,0 +1,519 @@
+package PVE::Storage::TestListVolumes;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use PVE::Cluster;
+use PVE::Tools qw(run_command);
+
+use Test::More;
+use Test::MockModule;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+use File::stat qw();
+use File::Temp;
+use Storable qw(dclone);
+
+use constant DEFAULT_SIZE => 131072; # 128 kiB
+use constant DEFAULT_USED => 262144; # 256 kiB
+use constant DEFAULT_CTIME => 1234567890;
+
+# get_vmlist() return values
+my $mocked_vmlist = {
+'version' => 1,
+'ids' => {
+   '16110' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 4,
+   },
+   '16112' => {
+   'node'=> 'x42',
+   'type'=> 'lxc',
+   'version' => 7,
+   },
+   '16114' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 2,
+   },
+   '16113' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 5,
+   },
+   '16115' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 1,
+   },
+   '9004' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 6,
+   }
+}
+};
+
+my $storage_dir = File::Temp->newdir();
+my $scfg = {
+'type' => 'dir',
+'maxfiles' => 0,
+'path' => $storage_dir,
+'shared'   => 0,
+'content'  => {
+   'iso'  => 1,
+   'rootdir'  => 1,
+   'vztmpl'   => 1,
+   'images'   => 1,
+   'snippets' => 1,
+   'backup'   => 1,
+},
+};
+
+# The test cases are composed of an array of hashes with the following keys:
+# description => displayed on error by Test::More
+# vmid=> used for image matches by list_volume
+# files   => array of files for qemu-img to create
+# expected=> returned result hash
+#(content, ctime, format, parent, size, used, vmid, volid)
+my @tests = (
+{
+   description => 'VMID: 16110, VM, qcow2, backup, snippets',
+   vmid => '16110',
+   files => [
+   "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
+   "$storage_dir/images/16110/vm-16110-disk-1.raw",
+   "$storage_dir/images/16110/vm-16110-disk-2.vmdk",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   "$storage_dir/snippets/userconfig.yaml",
+   "$storage_dir/snippets/hookscript.pl",
+   ],
+   expected => [
+   {
+   'content' => 'images',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'qcow2',
+   'parent'  => undef,
+   'size'=> DEFAULT_SIZE,
+   'used'=> DEFAULT_USED,
+   'vmid'=> '16110',
+   'volid'   => 'local:16110/vm-16110-disk-0.qcow2',
+   },
+   {
+   'content' => 'images',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'raw',
+   'parent'  => undef,
+   'size'=> DEFAULT_SIZE,
+   'used'=> DEFAULT_USED,
+   'vmid'=> '16110',
+   'volid'   => 'local:16110/vm-16110-disk-1.raw',
+   },
+   {
+   'content' => 'images',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'vmdk',
+   'parent'  => undef,
+   'size'=> DEFAULT_SIZE,
+   'used'=> DEFAULT_USED,
+   'vmid'=> '16110',
+   'volid'   => 'local:16110/vm-16110-disk-2.vmdk',
+   },
+   {
+   'content' => 'backup',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'vma.gz',
+   'size'=> DEFAULT_SIZE,
+   'vmid'=> '16110',
+   'volid'   => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
+

[pve-devel] [PATCH storage v5 12/12] test: filesystem_path

2020-04-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 test/filesystem_path_test.pm | 91 
 test/run_plugin_tests.pl |  1 +
 2 files changed, 92 insertions(+)
 create mode 100644 test/filesystem_path_test.pm

diff --git a/test/filesystem_path_test.pm b/test/filesystem_path_test.pm
new file mode 100644
index 000..c1b6d90
--- /dev/null
+++ b/test/filesystem_path_test.pm
@@ -0,0 +1,91 @@
+package PVE::Storage::TestFilesystemPath;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my $path = '/some/path';
+
+# each array entry is a test that consists of the following keys:
+# volname  => image name that is passed to parse_volname
+# snapname => to test the die condition
+# expected => the array of return values; or the die message
+my $tests = [
+{
+   volname  => '1234/vm-1234-disk-0.raw',
+   snapname => undef,
+   expected => [
+   "$path/images/1234/vm-1234-disk-0.raw",
+   '1234',
+   'images'
+   ],
+},
+{
+   volname  => '1234/vm-1234-disk-0.raw',
+   snapname => 'my_snap',
+   expected => "can't snapshot this image format\n"
+},
+{
+   volname  => '1234/vm-1234-disk-0.qcow2',
+   snapname => undef,
+   expected => [
+   "$path/images/1234/vm-1234-disk-0.qcow2",
+   '1234',
+   'images'
+   ],
+},
+{
+   volname  => '1234/vm-1234-disk-0.qcow2',
+   snapname => 'my_snap',
+   expected => [
+   "$path/images/1234/vm-1234-disk-0.qcow2",
+   '1234',
+   'images'
+   ],
+},
+{
+   volname  => 'iso/my-awesome-proxmox.iso',
+   snapname => undef,
+   expected => [
+   "$path/template/iso/my-awesome-proxmox.iso",
+   undef,
+   'iso'
+   ],
+},
+{
+   volname  => "backup/vzdump-qemu-1234-2020_03_30-21_12_40.vma",
+   snapname => undef,
+   expected => [
+   "$path/dump/vzdump-qemu-1234-2020_03_30-21_12_40.vma",
+   1234,
+   'backup'
+   ],
+},
+];
+
+plan tests => scalar @$tests;
+
+foreach my $tt (@$tests) {
+my $volname = $tt->{volname};
+my $snapname = $tt->{snapname};
+my $expected = $tt->{expected};
+my $scfg = { path => $path };
+my $got;
+
+eval {
+   $got = [ PVE::Storage::Plugin->filesystem_path($scfg, $volname, 
$snapname) ];
+};
+$got = $@ if $@;
+
+is_deeply($got, $expected, "wantarray: filesystem_path for $volname")
+|| diag(explain($got));
+
+}
+
+done_testing();
+
+1;
diff --git a/test/run_plugin_tests.pl b/test/run_plugin_tests.pl
index 9e427eb..e29fc88 100755
--- a/test/run_plugin_tests.pl
+++ b/test/run_plugin_tests.pl
@@ -12,6 +12,7 @@ my $res = $harness->runtests(
 "list_volumes_test.pm",
 "path_to_volume_id_test.pm",
 "get_subdir_test.pm",
+"filesystem_path_test.pm",
 );
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
-- 
2.20.1




[pve-devel] [PATCH storage v5 06/12] test: path_to_volume_id

2020-04-28 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/path_to_volume_id_test.pm | 242 +
 test/run_plugin_tests.pl   |   2 +-
 2 files changed, 243 insertions(+), 1 deletion(-)
 create mode 100644 test/path_to_volume_id_test.pm

diff --git a/test/path_to_volume_id_test.pm b/test/path_to_volume_id_test.pm
new file mode 100644
index 000..744c3ee
--- /dev/null
+++ b/test/path_to_volume_id_test.pm
@@ -0,0 +1,242 @@
+package PVE::Storage::TestPathToVolumeId;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+
+use Test::More;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+use File::Temp;
+
+my $storage_dir = File::Temp->newdir();
+my $scfg = {
+'digest' => 'd29306346b8b25b90a4a96165f1e8f52d1af1eda',
+'ids'=> {
+   'local' => {
+   'shared'   => 0,
+   'path' => "$storage_dir",
+   'type' => 'dir',
+   'maxfiles' => 0,
+   'content'  => {
+   'snippets' => 1,
+   'rootdir'  => 1,
+   'images'   => 1,
+   'iso'  => 1,
+   'backup'   => 1,
+   'vztmpl'   => 1,
+   },
+   },
+},
+'order' => {
+   'local' => 1,
+},
+};
+
+# the tests array consists of hashes with the following keys:
+# description => to identify the test case
+# volname => to create the test file
+# expected=> the result that path_to_volume_id should return
+my @tests = (
+{
+   description => 'Image, qcow2',
+   volname => "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
+   expected=> [
+   'images',
+   'local:16110/vm-16110-disk-0.qcow2',
+   ],
+},
+{
+   description => 'Image, raw',
+   volname => "$storage_dir/images/16112/vm-16112-disk-0.raw",
+   expected=> [
+   'images',
+   'local:16112/vm-16112-disk-0.raw',
+   ],
+},
+{
+   description => 'Image template, qcow2',
+   volname => "$storage_dir/images/9004/base-9004-disk-0.qcow2",
+   expected=> [
+   'images',
+   'local:9004/base-9004-disk-0.qcow2',
+   ],
+},
+
+{
+   description => 'Backup, vma.gz',
+   volname => 
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
+   ],
+},
+{
+   description => 'Backup, vma.lzo',
+   volname => 
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo',
+   ],
+},
+{
+   description => 'Backup, vma',
+   volname => 
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
+   ],
+},
+{
+   description => 'Backup, tar.lzo',
+   volname => 
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo',
+   ],
+},
+
+{
+   description => 'ISO file',
+   volname => 
"$storage_dir/template/iso/yet-again-a-installation-disk.iso",
+   expected=> [
+   'iso',
+   'local:iso/yet-again-a-installation-disk.iso',
+   ],
+},
+{
+   description => 'CT template, tar.gz',
+   volname => 
"$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz",
+   expected=> [
+   'vztmpl',
+   'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
+   ],
+},
+
+{
+   description => 'Rootdir',
+   volname => "$storage_dir/private/1234/", # fileparse needs / at the 
end
+   expected=> [
+   'rootdir',
+   'local:rootdir/1234',
+   ],
+},
+{
+   description => 'Rootdir, folder subvol',
+   volname => "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", # 
fileparse needs / at the end
+   expected=> [
+   'images',
+   'local:1234/subvol-1234-disk-0.subvol'
+   ],
+},
+
+# no matches
+{
+   description => 'Snippets, yaml',
+   volname => "$storage_dir/snippets/userconfig.yaml",
+   expected => [''],
+},
+{
+   description => 'Snippets, hookscript',
+   volname => "

[pve-devel] [PATCH storage v5 10/12] Fix: #2124 storage: add zstd support

2020-04-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm |  4 +++-
 PVE/Storage/Plugin.pm  |  2 +-
 test/archive_info_test.pm  |  4 +++-
 test/list_volumes_test.pm  | 18 ++
 test/parse_volname_test.pm |  6 +++---
 test/path_to_volume_id_test.pm | 16 
 6 files changed, 44 insertions(+), 6 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0b2745e..87550b1 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1366,10 +1366,12 @@ sub decompressor_info {
tar => {
gz => ['tar', '-z'],
lzo => ['tar', '--lzop'],
+   zst => ['tar', '--zstd'],
},
vma => {
gz => ['zcat'],
lzo => ['lzop', '-d', '-c'],
+   zst => ['zstd', '-q', '-d', '-c'],
},
 };
 
@@ -1460,7 +1462,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: 
stdout: Broken pipe/) {
+   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: 
stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken 
pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: 
$output\n";
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 5f3e4c1..e9da403 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -18,7 +18,7 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
-use constant COMPRESSOR_RE => 'gz|lzo';
+use constant COMPRESSOR_RE => 'gz|lzo|zst';
 
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
diff --git a/test/archive_info_test.pm b/test/archive_info_test.pm
index c9bb1b7..283fe47 100644
--- a/test/archive_info_test.pm
+++ b/test/archive_info_test.pm
@@ -45,10 +45,12 @@ my $decompressor = {
 tar => {
gz  => ['tar', '-z'],
lzo => ['tar', '--lzop'],
+   zst => ['tar', '--zstd'],
 },
 vma => {
gz  => ['zcat'],
lzo => ['lzop', '-d', '-c'],
+   zst => ['zstd', '-q', '-d', '-c'],
 },
 };
 
@@ -85,7 +87,7 @@ foreach my $virt (keys %$bkp_suffix) {
 my $non_bkp_suffix = {
 'openvz' => [ 'zip', 'tgz.lzo', 'tar.bz2', 'zip.gz', '', ],
 'lxc'=> [ 'zip', 'tgz.lzo', 'tar.bz2', 'zip.gz', '', ],
-'qemu'   => [ 'vma.xz', 'vms.gz', '', ],
+'qemu'   => [ 'vma.xz', 'vms.gz', 'vmx.zst', '', ],
 'none'   => [ 'tar.gz', ],
 };
 
diff --git a/test/list_volumes_test.pm b/test/list_volumes_test.pm
index 941b903..efcb547 100644
--- a/test/list_volumes_test.pm
+++ b/test/list_volumes_test.pm
@@ -93,6 +93,7 @@ my @tests = (
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst",
"$storage_dir/snippets/userconfig.yaml",
"$storage_dir/snippets/hookscript.pl",
],
@@ -151,6 +152,14 @@ my @tests = (
'vmid'=> '16110',
'volid'   => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
},
+   {
+   'content' => 'backup',
+   'ctime'   => 1585595635,
+   'format'  => 'vma.zst',
+   'size'=> DEFAULT_SIZE,
+   'vmid'=> '16110',
+   'volid'   => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst',
+   },
{
'content' => 'snippets',
'ctime'   => DEFAULT_CTIME,
@@ -174,6 +183,7 @@ my @tests = (
"$storage_dir/images/16112/vm-16112-disk-0.raw",
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz",
+   "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_49_30.tar.zst",
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_59_30.tgz",
],
expected => [
@@ -203,6 +213,14 @@ my @tests = (
'vmid'=> '16112',
'volid'   => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz',
},
+   {
+   'content' => 'backup',
+   'ctime'   => 1585597770,
+   'format'  => 'tar.zst',
+   'size'=> DEFAULT_SIZE,
+   'vmid'=> '16112',
+   'volid'   => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.zst',
+   },
{
'content
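
The decompressor table extended by this patch maps an archive format plus
compression suffix to the command used for streaming decompression. A shell
sketch mirroring the resulting mapping (the command strings are taken directly
from the `decompressor_info` hunk above):

```shell
# Mirror of the decompressor_info() table after the zstd addition: given an
# archive format (tar|vma) and a compression suffix, print the decompressor.
decompressor_cmd() {
    case "$1.$2" in
        tar.gz)  echo "tar -z" ;;
        tar.lzo) echo "tar --lzop" ;;
        tar.zst) echo "tar --zstd" ;;
        vma.gz)  echo "zcat" ;;
        vma.lzo) echo "lzop -d -c" ;;
        vma.zst) echo "zstd -q -d -c" ;;
        *)       echo "unknown" ;;
    esac
}

decompressor_cmd vma zst
```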

[pve-devel] [PATCH storage v5 07/12] Fix: path_to_volume_id returned wrong content

2020-04-28 Thread Alwin Antreich
type for backup files. Patch includes changes of the test as well.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm | 2 +-
 test/path_to_volume_id_test.pm | 8 
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bdd6ebc..1ef5ed2 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -536,7 +536,7 @@ sub path_to_volume_id {
return ('rootdir', "$sid:rootdir/$vmid");
} elsif ($path =~ 
m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
my $name = $1;
-   return ('iso', "$sid:backup/$name");
+   return ('backup', "$sid:backup/$name");
}
 }
 
diff --git a/test/path_to_volume_id_test.pm b/test/path_to_volume_id_test.pm
index 744c3ee..7d69869 100644
--- a/test/path_to_volume_id_test.pm
+++ b/test/path_to_volume_id_test.pm
@@ -72,7 +72,7 @@ my @tests = (
description => 'Backup, vma.gz',
volname => 
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
],
 },
@@ -80,7 +80,7 @@ my @tests = (
description => 'Backup, vma.lzo',
volname => 
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo',
],
 },
@@ -88,7 +88,7 @@ my @tests = (
description => 'Backup, vma',
volname => 
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
],
 },
@@ -96,7 +96,7 @@ my @tests = (
description => 'Backup, tar.lzo',
volname => 
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo',
],
 },
-- 
2.20.1




[pve-devel] [PATCH storage v5 09/12] backup: compact regex for backup file filter

2020-04-28 Thread Alwin Antreich
the more compact form of the regex should allow easier addition of new
file extensions.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 4 ++--
 PVE/Storage/Plugin.pm | 6 --
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 5df074d..0b2745e 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -535,7 +535,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
+   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!)
 {
my $name = $1;
return ('backup', "$sid:backup/$name");
} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
@@ -1392,7 +1392,7 @@ sub archive_info {
 my $info;
 
 my $volid = basename($archive);
-if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/)
 {
+if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?$/)
 {
$info = decompressor_info($2, $3);
$info->{type} = $1;
 } else {
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index a6071eb..5f3e4c1 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -18,6 +18,8 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
+use constant COMPRESSOR_RE => 'gz|lzo';
+
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
 -p --sparse --numeric-owner --acls
@@ -435,7 +437,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ 
m!^backup/([^/]+(\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)))$!) {
+} elsif ($volname =~ 
m!^backup/([^/]+(?:\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\COMPRESSOR_RE}))?))))$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -939,7 +941,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ 
m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
+   next if $fn !~ 
m!/([^/]+\.(tgz|(?:(?:tar|vma)(?:\.(${\COMPRESSOR_RE}))?)))$!;
 
my $format = $2;
$fn = $1;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v5 1/2] restore: replace archive format/compression

2020-04-28 Thread Alwin Antreich
regex, to reduce code duplication, as archive_info and
decompressor_info provide the same information.
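
The command table the restore path now relies on can be sketched like this (a hypothetical Python transliteration; the real mapping lives in PVE::Storage::decompressor_info in Perl):

```python
# Format/compression -> decompressor command, mirroring decompressor_info.
DECOMPRESSOR = {
    ('tar', 'gz'):  ['tar', '-z'],
    ('tar', 'lzo'): ['tar', '--lzop'],
    ('vma', 'gz'):  ['zcat'],
    ('vma', 'lzo'): ['lzop', '-d', '-c'],
}

def decompress_cmd(fmt, comp, readfrom):
    """Build the pipe command, as restore_vma_archive does with
    'push @$cmd, $readfrom'."""
    cmd = list(DECOMPRESSOR[(fmt, comp)])
    cmd.append(readfrom)
    return cmd

print(decompress_cmd('vma', 'lzo', '/tmp/backup.vma.lzo'))
# → ['lzop', '-d', '-c', '/tmp/backup.vma.lzo']
```

With the table in one place, the per-caller if/elsif chains over compression methods become a single lookup.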

Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 36 ++--
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 37c7320..265d4f8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5627,28 +5627,9 @@ sub tar_restore_cleanup {
 sub restore_file_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
-my $format = $opts->{format};
-my $comp;
-
-if ($archive =~ m/\.tgz$/ || $archive =~ m/\.tar\.gz$/) {
-   $format = 'tar' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.tar$/) {
-   $format = 'tar' if !$format;
-} elsif ($archive =~ m/.tar.lzo$/) {
-   $format = 'tar' if !$format;
-   $comp = 'lzop';
-} elsif ($archive =~ m/\.vma$/) {
-   $format = 'vma' if !$format;
-} elsif ($archive =~ m/\.vma\.gz$/) {
-   $format = 'vma' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.vma\.lzo$/) {
-   $format = 'vma' if !$format;
-   $comp = 'lzop';
-} else {
-   $format = 'vma' if !$format; # default
-}
+my $info = PVE::Storage::archive_info($archive);
+my $format = $opts->{format} // $info->{format};
+my $comp = $info->{compression};
 
 # try to detect archive format
 if ($format eq 'tar') {
@@ -6235,14 +6216,9 @@ sub restore_vma_archive {
 }
 
 if ($comp) {
-   my $cmd;
-   if ($comp eq 'gzip') {
-   $cmd = ['zcat', $readfrom];
-   } elsif ($comp eq 'lzop') {
-   $cmd = ['lzop', '-d', '-c', $readfrom];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
+   my $info = PVE::Storage::decompressor_info('vma', $comp);
+   my $cmd = $info->{decompressor};
+   push @$cmd, $readfrom;
$add_pipe->($cmd);
 }
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v5 05/12] Fix: backup: ctime was from stat not file name

2020-04-28 Thread Alwin Antreich
The vzdump file was passed to the regex with its full path. That regex
captures the time from the file name to calculate the epoch.

As the regex didn't match the full path, the ctime from stat was taken
instead. This resulted in a ctime showing when the file was last
changed, not when the backup was made.
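
The failure mode can be sketched in Python (a simplified transliteration; the real code anchors on the full vzdump name pattern including the format suffix, and falls back to stat's ctime on a non-match):

```python
import os
import re
import time

# The timestamp must be parsed from the *file name*, so the full path
# has to be reduced to its basename first. That is what went wrong:
# the ^-anchored regex never matched the full path.
PATTERN = re.compile(
    r'^vzdump-(?:lxc|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.'
)

def backup_ctime(path):
    m = PATTERN.match(os.path.basename(path))
    if not m:
        return None   # caller would fall back to stat's ctime
    y, mo, d, h, mi, s = map(int, m.groups())
    # mirrors Perl's timelocal(): interpret the fields as local time
    return int(time.mktime((y, mo, d, h, mi, s, 0, 0, -1)))

full = '/mnt/store/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma.gz'
assert PATTERN.match(full) is None          # full path: no match (the bug)
assert backup_ctime(full) is not None       # basename: parsed fine
```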

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm |  3 ++-
 test/list_volumes_test.pm | 16 
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 0925910..a6071eb 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -942,7 +942,8 @@ my $get_subdir_files = sub {
next if $fn !~ 
m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
 
my $format = $2;
-   $info = { volid => "$sid:backup/$1", format => $format };
+   $fn = $1;
+   $info = { volid => "$sid:backup/$fn", format => $format };
 
if ($fn =~ 
m!^vzdump\-(?:lxc|qemu)\-(?:[1-9][0-9]{2,8})\-(\d{4})_(\d{2})_(\d{2})\-(\d{2})_(\d{2})_(\d{2})\.${format}$!)
 {
my $epoch = timelocal($6, $5, $4, $3, $2-1, $1 - 1900);
diff --git a/test/list_volumes_test.pm b/test/list_volumes_test.pm
index a215617..941b903 100644
--- a/test/list_volumes_test.pm
+++ b/test/list_volumes_test.pm
@@ -129,7 +129,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585595500,
'format'  => 'vma.gz',
'size'=> DEFAULT_SIZE,
'vmid'=> '16110',
@@ -137,7 +137,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585595565,
'format'  => 'vma.lzo',
'size'=> DEFAULT_SIZE,
'vmid'=> '16110',
@@ -145,7 +145,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585595635,
'format'  => 'vma',
'size'=> DEFAULT_SIZE,
'vmid'=> '16110',
@@ -189,7 +189,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585597170,
'format'  => 'tar.lzo',
'size'=> DEFAULT_SIZE,
'vmid'=> '16112',
@@ -197,7 +197,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585597770,
'format'  => 'tar.gz',
'size'=> DEFAULT_SIZE,
'vmid'=> '16112',
@@ -205,7 +205,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585598370,
'format'  => 'tgz',
'size'=> DEFAULT_SIZE,
'vmid'=> '16112',
@@ -347,7 +347,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1580756263,
'format'  => 'tar.gz',
'size'=> DEFAULT_SIZE,
'vmid'=> '19253',
@@ -355,7 +355,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1548095359,
'format'  => 'tar',
'size'=> DEFAULT_SIZE,
'vmid'=> '19254',
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v5 11/12] test: get_subdir

2020-04-28 Thread Alwin Antreich
Co-Authored-by: Dominic Jaeger 
Signed-off-by: Alwin Antreich 
---
 test/get_subdir_test.pm  | 44 
 test/run_plugin_tests.pl |  1 +
 2 files changed, 45 insertions(+)
 create mode 100644 test/get_subdir_test.pm

diff --git a/test/get_subdir_test.pm b/test/get_subdir_test.pm
new file mode 100644
index 000..576c475
--- /dev/null
+++ b/test/get_subdir_test.pm
@@ -0,0 +1,44 @@
+package PVE::Storage::TestGetSubdir;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage::Plugin;
+use Test::More;
+
+my $scfg_with_path = { path => '/some/path' };
+my $vtype_subdirs = PVE::Storage::Plugin::get_vtype_subdirs();
+
+# each test is comprised of the following array keys:
+# [0] => storage config; positive with path key
+# [1] => storage type;  see $vtype_subdirs
+# [2] => expected return from get_subdir
+my $tests = [
+# failed matches
+[ $scfg_with_path, 'none', "unknown vtype 'none'\n" ],
+[ {}, 'iso', "storage definintion has no path\n" ],
+];
+
+# creates additional positive tests
+foreach my $type (keys %$vtype_subdirs) {
+my $path = "$scfg_with_path->{path}/$vtype_subdirs->{$type}";
+push @$tests, [ $scfg_with_path, $type, $path ];
+}
+
+plan tests => scalar @$tests;
+
+foreach my $tt (@$tests) {
+my ($scfg, $type, $expected) = @$tt;
+
+my $got;
+eval { $got = PVE::Storage::Plugin->get_subdir($scfg, $type) };
+$got = $@ if $@;
+
+is ($got, $expected, "get_subdir for $type") || diag(explain($got));
+}
+
+done_testing();
+
+1;
diff --git a/test/run_plugin_tests.pl b/test/run_plugin_tests.pl
index 770b407..9e427eb 100755
--- a/test/run_plugin_tests.pl
+++ b/test/run_plugin_tests.pl
@@ -11,6 +11,7 @@ my $res = $harness->runtests(
 "parse_volname_test.pm",
 "list_volumes_test.pm",
 "path_to_volume_id_test.pm",
+"get_subdir_test.pm",
 );
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v5 02/12] storage: replace build-in stat occurrences

2020-04-28 Thread Alwin Antreich
with File::stat::stat, to minimize variable declarations and to allow
mocking this method in tests, instead of the Perl built-in stat.
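
The readability gain is the same one Python's os.stat result offers over positional indexing, which this sketch illustrates (an analogy only; the patch itself is Perl's File::stat):

```python
import os
import stat

# Named accessors instead of remembering positional stat slots
# (the old Perl code did (stat($fn))[2,10] for mode and ctime).
st = os.stat('.')

mode_by_index = st[stat.ST_MODE]   # positional, like the Perl built-in
mode_by_name = st.st_mode          # named, like File::stat::stat

assert mode_by_index == mode_by_name
assert stat.S_ISDIR(st.st_mode)    # '.' is a directory
print(st.st_size, st.st_ctime)
```

A named accessor object is also much easier to mock in tests than the list-returning built-in.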

Signed-off-by: Alwin Antreich 
---
 PVE/Diskmanage.pm |  9 +
 PVE/Storage/Plugin.pm | 34 ++
 2 files changed, 15 insertions(+), 28 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 13e7cd8..cac944d 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -6,6 +6,7 @@ use PVE::ProcFSTools;
 use Data::Dumper;
 use Cwd qw(abs_path);
 use Fcntl ':mode';
+use File::stat;
 use JSON;
 
 use PVE::Tools qw(extract_param run_command file_get_contents 
file_read_firstline dir_glob_regex dir_glob_foreach trim);
@@ -673,11 +674,11 @@ sub get_disks {
 sub get_partnum {
 my ($part_path) = @_;
 
-my ($mode, $rdev) = (stat($part_path))[2,6];
+my $st = stat($part_path);
 
-next if !$mode || !S_ISBLK($mode) || !$rdev;
-my $major = PVE::Tools::dev_t_major($rdev);
-my $minor = PVE::Tools::dev_t_minor($rdev);
+next if !$st->mode || !S_ISBLK($st->mode) || !$st->rdev;
+my $major = PVE::Tools::dev_t_major($st->rdev);
+my $minor = PVE::Tools::dev_t_minor($st->rdev);
 my $partnum_path = "/sys/dev/block/$major:$minor/";
 
 my $partnum;
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 4489a77..dba6eb9 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -7,6 +7,7 @@ use Fcntl ':mode';
 use File::chdir;
 use File::Path;
 use File::Basename;
+use File::stat qw();
 use Time::Local qw(timelocal);
 
 use PVE::Tools qw(run_command);
@@ -718,12 +719,10 @@ sub free_image {
 sub file_size_info {
 my ($filename, $timeout) = @_;
 
-my @fs = stat($filename);
-my $mode = $fs[2];
-my $ctime = $fs[10];
+my $st = File::stat::stat($filename);
 
-if (S_ISDIR($mode)) {
-   return wantarray ? (0, 'subvol', 0, undef, $ctime) : 1;
+if (S_ISDIR($st->mode)) {
+   return wantarray ? (0, 'subvol', 0, undef, $st->ctime) : 1;
 }
 
 my $json = '';
@@ -741,7 +740,7 @@ sub file_size_info {
 
 my ($size, $format, $used, $parent) = $info->@{qw(virtual-size format 
actual-size backing-filename)};
 
-return wantarray ? ($size, $format, $used, $parent, $ctime) : $size;
+return wantarray ? ($size, $format, $used, $parent, $st->ctime) : $size;
 }
 
 sub volume_size_info {
@@ -918,22 +917,9 @@ my $get_subdir_files = sub {
 
 foreach my $fn (<$path/*>) {
 
-   my ($dev,
-   $ino,
-   $mode,
-   $nlink,
-   $uid,
-   $gid,
-   $rdev,
-   $size,
-   $atime,
-   $mtime,
-   $ctime,
-   $blksize,
-   $blocks
-   ) = stat($fn);
-
-   next if S_ISDIR($mode);
+   my $st = File::stat::stat($fn);
+
+   next if S_ISDIR($st->mode);
 
my $info;
 
@@ -972,8 +958,8 @@ my $get_subdir_files = sub {
};
}
 
-   $info->{size} = $size;
-   $info->{ctime} //= $ctime;
+   $info->{size} = $st->size;
+   $info->{ctime} //= $st->ctime;
 
push @$res, $info;
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v5 2/2] Fix #2124: Add support for zstd

2020-04-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 265d4f8..fda1acb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7165,7 +7165,7 @@ sub complete_backup_archives {
 my $res = [];
 foreach my $id (keys %$data) {
foreach my $item (@{$data->{$id}}) {
-   next if $item->{format} !~ m/^vma\.(gz|lzo)$/;
+   next if $item->{format} !~ 
m/^vma\.(${\PVE::Storage::Plugin::COMPRESSOR_RE})$/;
push @$res, $item->{volid} if defined($item->{volid});
}
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v5 08/12] Fix: add missing snippets subdir

2020-04-28 Thread Alwin Antreich
since it is a valid content type, and adapt the path_to_volume_id test.
Also adds an extra check that all vtype_subdirs are returned.
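
The new branch can be sketched as follows (a Python transliteration of the added Perl elsif; 'local' and '/var/lib/vz' are illustrative values, not taken from a real storage.cfg):

```python
import re

# Anything directly under the storage's snippets subdir becomes a
# 'snippets' volume id, mirroring the new path_to_volume_id branch.
def snippets_volume_id(sid, storage_path, path):
    snippetsdir = f'{storage_path}/snippets'
    m = re.match(rf'^{re.escape(snippetsdir)}/([^/]+)$', path)
    if not m:
        return None
    return ('snippets', f'{sid}:snippets/{m.group(1)}')

print(snippets_volume_id('local', '/var/lib/vz',
                         '/var/lib/vz/snippets/hookscript.pl'))
# → ('snippets', 'local:snippets/hookscript.pl')
```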

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm |  4 
 test/path_to_volume_id_test.pm | 26 +-
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 1ef5ed2..5df074d 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -512,6 +512,7 @@ sub path_to_volume_id {
my $tmpldir = $plugin->get_subdir($scfg, 'vztmpl');
my $backupdir = $plugin->get_subdir($scfg, 'backup');
my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
+   my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
 
if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
my $vmid = $1;
@@ -537,6 +538,9 @@ sub path_to_volume_id {
} elsif ($path =~ 
m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
my $name = $1;
return ('backup', "$sid:backup/$name");
+   } elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
+   my $name = $1;
+   return ('snippets', "$sid:snippets/$name");
}
 }
 
diff --git a/test/path_to_volume_id_test.pm b/test/path_to_volume_id_test.pm
index 7d69869..e5e24c1 100644
--- a/test/path_to_volume_id_test.pm
+++ b/test/path_to_volume_id_test.pm
@@ -134,18 +134,24 @@ my @tests = (
'local:1234/subvol-1234-disk-0.subvol'
],
 },
-
-# no matches
 {
description => 'Snippets, yaml',
volname => "$storage_dir/snippets/userconfig.yaml",
-   expected => [''],
+   expected => [
+   'snippets',
+   'local:snippets/userconfig.yaml',
+   ],
 },
 {
description => 'Snippets, hookscript',
volname => "$storage_dir/snippets/hookscript.pl",
-   expected=> [''],
+   expected=> [
+   'snippets',
+   'local:snippets/hookscript.pl',
+   ],
 },
+
+# no matches
 {
description => 'CT template, tar.xz',
volname => 
"$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.xz",
@@ -210,7 +216,10 @@ my @tests = (
 },
 );
 
-plan tests => scalar @tests;
+plan tests => scalar @tests + 1;
+
+my $seen_vtype;
+my $vtype_subdirs = { map { $_ => 1 } keys %{ 
PVE::Storage::Plugin::get_vtype_subdirs() } };
 
 foreach my $tt (@tests) {
 my $file = $tt->{volname};
@@ -232,8 +241,15 @@ foreach my $tt (@tests) {
 $got = $@ if $@;
 
 is_deeply($got, $expected, $description) || diag(explain($got));
+
+$seen_vtype->{@$expected[0]} = 1
+   if ( @$expected[0] ne '' && scalar @$expected > 1);
 }
 
+# to check if all $vtype_subdirs are defined in path_to_volume_id
+# or have a test
+is_deeply($seen_vtype, $vtype_subdirs, "vtype_subdir check");
+
 #cleanup
 # File::Temp unlinks tempdir on exit
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v5 01/12] storage: test: split archive format/compressor

2020-04-28 Thread Alwin Antreich
detection into separate functions, so they are reusable and easier to
modify. This patch also adds a test for archive_info.
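
The behaviour being factored out can be sketched in Python (the real functions live in PVE/Storage.pm; this is a simplified transliteration of the name-based detection, including the tgz normalization):

```python
import os
import re

# Derive virtualization type, format and compression from a vzdump
# archive name; .tgz is normalized to tar+gz, as decompressor_info does.
ARCHIVE_RE = re.compile(
    r'vzdump-(lxc|openvz|qemu)-\d+-'
    r'\d{4}_\d{2}_\d{2}-\d{2}_\d{2}_\d{2}'
    r'\.(tgz$|tar|vma)(?:\.(gz|lzo))?$'
)

def archive_info(archive):
    m = ARCHIVE_RE.search(os.path.basename(archive))
    if not m:
        raise ValueError("couldn't determine format and compression type")
    vtype, fmt, comp = m.groups()
    if fmt == 'tgz' and comp is None:   # tgz is just tar+gz
        fmt, comp = 'tar', 'gz'
    return {'type': vtype, 'format': fmt, 'compression': comp}

info = archive_info('vzdump-qemu-16110-2020_03_30-21_13_55.vma.lzo')
assert info == {'type': 'qemu', 'format': 'vma', 'compression': 'lzo'}
```

Callers such as extract_vzdump_config then branch on the returned type/format instead of re-implementing the filename regex.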

Signed-off-by: Alwin Antreich 
---
 test/Makefile |   5 +-
 PVE/Storage.pm|  79 +---
 test/archive_info_test.pm | 125 ++
 test/run_plugin_tests.pl  |  12 
 4 files changed, 199 insertions(+), 22 deletions(-)
 create mode 100644 test/archive_info_test.pm
 create mode 100755 test/run_plugin_tests.pl

diff --git a/test/Makefile b/test/Makefile
index 833a597..c54b10f 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
 all: test
 
-test: test_zfspoolplugin test_disklist test_bwlimit
+test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
 
 test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
@@ -10,3 +10,6 @@ test_disklist: run_disk_tests.pl
 
 test_bwlimit: run_bwlimit_tests.pl
./run_bwlimit_tests.pl
+
+test_plugin: run_plugin_tests.pl
+   ./run_plugin_tests.pl
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0848176..bdd6ebc 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1351,6 +1351,53 @@ sub foreach_volid {
 }
 }
 
+sub decompressor_info {
+my ($format, $comp) = @_;
+
+if ($format eq 'tgz' && !defined($comp)) {
+   ($format, $comp) = ('tar', 'gz');
+}
+
+my $decompressor = {
+   tar => {
+   gz => ['tar', '-z'],
+   lzo => ['tar', '--lzop'],
+   },
+   vma => {
+   gz => ['zcat'],
+   lzo => ['lzop', '-d', '-c'],
+   },
+};
+
+die "ERROR: archive format not defined\n"
+   if !defined($decompressor->{$format});
+
+my $decomp = $decompressor->{$format}->{$comp} if $comp;
+
+my $info = {
+   format => $format,
+   compression => $comp,
+   decompressor => $decomp,
+};
+
+return $info;
+}
+
+sub archive_info {
+my ($archive) = shift;
+my $info;
+
+my $volid = basename($archive);
+if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/)
 {
+   $info = decompressor_info($2, $3);
+   $info->{type} = $1;
+} else {
+   die "ERROR: couldn't determine format and compression type\n";
+}
+
+return $info;
+}
+
 sub extract_vzdump_config_tar {
 my ($archive, $conf_re) = @_;
 
@@ -1396,16 +1443,12 @@ sub extract_vzdump_config_vma {
 };
 
 
+my $info = archive_info($archive);
+$comp //= $info->{compression};
+my $decompressor = $info->{decompressor};
+
 if ($comp) {
-   my $uncomp;
-   if ($comp eq 'gz') {
-   $uncomp = ["zcat", $archive];
-   } elsif ($comp eq 'lzo') {
-   $uncomp = ["lzop", "-d", "-c", $archive];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
-   $cmd = [$uncomp, ["vma", "config", "-"]];
+   $cmd = [ [@$decompressor, $archive], ["vma", "config", "-"] ];
 
# in some cases, lzop/zcat exits with 1 when its stdout pipe is
# closed early by vma, detect this and ignore the exit code later
@@ -1455,20 +1498,14 @@ sub extract_vzdump_config {
 }
 
 my $archive = abs_filesystem_path($cfg, $volid);
+my $info = archive_info($archive);
+my $format = $info->{format};
+my $comp = $info->{compression};
+my $type = $info->{type};
 
-if ($volid =~ 
/vzdump-(lxc|openvz)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|(tar(\.(gz|lzo))?))$/)
 {
+if ($type eq 'lxc' || $type eq 'openvz') {
return extract_vzdump_config_tar($archive, 
qr!^(\./etc/vzdump/(pct|vps)\.conf)$!);
-} elsif ($volid =~ 
/vzdump-qemu-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/)
 {
-   my $format;
-   my $comp;
-   if ($7 eq 'tgz') {
-   $format = 'tar';
-   $comp = 'gz';
-   } else {
-   $format = $9;
-   $comp = $11 if defined($11);
-   }
-
+} elsif ($type eq 'qemu') {
if ($format eq 'tar') {
return extract_vzdump_config_tar($archive, 
qr!\(\./qemu-server\.conf\)!);
} else {
diff --git a/test/archive_info_test.pm b/test/archive_info_test.pm
new file mode 100644
index 000..c9bb1b7
--- /dev/null
+++ b/test/archive_info_test.pm
@@ -0,0 +1,125 @@
+package PVE::Storage::TestArchiveInfo;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my $vmid = 16110;
+
+# an array of test cases, each test is comprised of the following keys:
+# description => to identify a single test
+# archive => the input filename for archive_info
+# expected=> the hash that archive_info returns
+#
+# most of them are created

[pve-devel] [PATCH storage v5 03/12] test: parse_volname

2020-04-28 Thread Alwin Antreich
Adds a test to reduce the potential for accidental breakage on regex
changes, and to make sure that all vtype_subdirs are parsed.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm  |   4 +
 test/parse_volname_test.pm | 253 +
 test/run_plugin_tests.pl   |   2 +-
 3 files changed, 258 insertions(+), 1 deletion(-)
 create mode 100644 test/parse_volname_test.pm

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index dba6eb9..0925910 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -457,6 +457,10 @@ my $vtype_subdirs = {
 snippets => 'snippets',
 };
 
+sub get_vtype_subdirs {
+return $vtype_subdirs;
+}
+
 sub get_subdir {
 my ($class, $scfg, $vtype) = @_;
 
diff --git a/test/parse_volname_test.pm b/test/parse_volname_test.pm
new file mode 100644
index 000..87c758c
--- /dev/null
+++ b/test/parse_volname_test.pm
@@ -0,0 +1,253 @@
+package PVE::Storage::TestParseVolname;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my $vmid = 1234;
+
+# an array of test cases, each test is comprised of the following keys:
+# description => to identify a single test
+# volname => the input for parse_volname
+# expected=> the array that parse_volname returns
+my $tests = [
+#
+# VM images
+#
+{
+   description => 'VM disk image, linked, qcow2, vm- as base-',
+   volname => 
"$vmid/vm-$vmid-disk-0.qcow2/$vmid/vm-$vmid-disk-0.qcow2",
+   expected=> [ 'images', "vm-$vmid-disk-0.qcow2", "$vmid", 
"vm-$vmid-disk-0.qcow2", "$vmid", undef, 'qcow2', ],
+},
+#
+# iso
+#
+{
+   description => 'ISO image, iso',
+   volname => 'iso/some-installation-disk.iso',
+   expected=> ['iso', 'some-installation-disk.iso'],
+},
+{
+   description => 'ISO image, img',
+   volname => 'iso/some-other-installation-disk.img',
+   expected=> ['iso', 'some-other-installation-disk.img'],
+},
+#
+# container templates
+#
+{
+   description => 'Container template tar.gz',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
+   expected=> ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.gz'],
+},
+{
+   description => 'Container template tar.xz',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
+   expected=> ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.xz'],
+},
+#
+# container rootdir
+#
+{
+   description => 'Container rootdir, sub directory',
+   volname => "rootdir/$vmid",
+   expected=> ['rootdir', "$vmid", "$vmid"],
+},
+{
+   description => 'Container rootdir, subvol',
+   volname => "$vmid/subvol-$vmid-disk-0.subvol",
+   expected=> [ 'images', "subvol-$vmid-disk-0.subvol", "$vmid", 
undef, undef, undef, 'subvol' ],
+},
+{
+   description => 'Backup archive, no virtualization type',
+   volname => "backup/vzdump-none-$vmid-2020_03_30-21_39_30.tar",
+   expected=> ['backup', "vzdump-none-$vmid-2020_03_30-21_39_30.tar"],
+},
+#
+# Snippets
+#
+{
+   description => 'Snippets, yaml',
+   volname => 'snippets/userconfig.yaml',
+   expected=> ['snippets', 'userconfig.yaml'],
+},
+{
+   description => 'Snippets, perl',
+   volname => 'snippets/hookscript.pl',
+   expected=> ['snippets', 'hookscript.pl'],
+},
+#
+# failed matches
+#
+{
+   description => "Failed match: VM disk image, base, raw",
+   volname => "/base-$vmid-disk-0.raw",
+   expected=> "unable to parse directory volume name 
'/base-$vmid-disk-0.raw'\n",
+},
+{
+   description => 'Failed match: ISO image, dvd',
+   volname => 'iso/yet-again-a-installation-disk.dvd',
+   expected=> "unable to parse directory volume name 
'iso/yet-again-a-installation-disk.dvd'\n",
+},
+{
+   description => 'Failed match: Container template, zip.gz',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz',
+   expected=> "unable to parse directory volume name 
'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz'\n",
+},
+{
+   description => 'Failed match: Container template, tar.bz2',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2',
+   expected=> "unable to parse directory volume name 
'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2'\n",
+},
+{
+   description => 'Failed match: Container rootdir, subvol',
+   volname 

[pve-devel] [PATCH container v5] Fix: #2124 add zstd

2020-04-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 52b0b48..39902a2 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -123,6 +123,7 @@ sub restore_tar_archive {
'.bz2' => '-j',
'.xz'  => '-J',
'.lzo'  => '--lzop',
+   '.zst'  => '--zstd',
);
if ($archive =~ /\.tar(\.[^.]+)?$/) {
if (defined($1)) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager v5] Fix #2124: Add support for zstd

2020-04-28 Thread Alwin Antreich
This patch adds zstd to the compression selection for backup on the
GUI and adds .zst to the backup file filter. It also adds zstd as a
package install dependency.
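
The thread-count logic in the new compressor_info branch can be sketched like this (a Python transliteration; the cpu count stands in for PVE::ProcFSTools::read_cpuinfo):

```python
# A 'zstd' option of 0 means "use half of the available cores",
# N>0 means N threads, unset defaults to 1 (Perl: $opts->{zstd} // 1).
def zstd_command(opts, cpus):
    threads = opts.get('zstd', 1)
    if threads == 0:
        threads = (cpus + 1) // 2   # int(($cpuinfo->{cpus} + 1)/2)
    return f'zstd --threads={threads}', 'zst'

assert zstd_command({}, 8) == ('zstd --threads=1', 'zst')
assert zstd_command({'zstd': 0}, 8) == ('zstd --threads=4', 'zst')
```

Rounding up via `(cpus + 1) // 2` means an odd core count still gets at least half the machine, e.g. 7 cores yield 4 threads.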

Signed-off-by: Alwin Antreich 
---
 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f3274196..80f4734c 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -609,6 +609,13 @@ sub compressor_info {
} else {
return ('gzip --rsyncable', 'gz');
}
+} elsif ($opt_compress eq 'zstd') {
+   my $zstd_threads = $opts->{zstd} // 1;
+   if ($zstd_threads == 0) {
+   my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
+   $zstd_threads = int(($cpuinfo->{cpus} + 1)/2);
+   }
+   return ("zstd --threads=${zstd_threads}", 'zst');
 } else {
die "internal error - unknown compression option '$opt_compress'";
 }
@@ -620,7 +627,7 @@ sub get_backup_file_list {
 my $bklist = [];
 foreach my $fn (<$dir/${bkname}-*>) {
next if $exclude_fn && $fn eq $exclude_fn;
-   if ($fn =~ 
m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!)
 {
+   if ($fn =~ 
m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!)
 {
$fn = "$dir/$1"; # untaint
my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
push @$bklist, [$fn, $t];
@@ -928,7 +935,7 @@ sub exec_backup_task {
debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
unlink $d->[0];
my $logfn = $d->[0];
-   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
+   $logfn =~ 
s/\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?))$/\.log/;
unlink $logfn;
}
}
diff --git a/debian/control b/debian/control
index edb2833d..318b4f0e 100644
--- a/debian/control
+++ b/debian/control
@@ -60,6 +60,7 @@ Depends: apt-transport-https | apt (>= 1.5~),
  logrotate,
  lsb-base,
  lzop,
+ zstd,
  novnc-pve,
  pciutils,
  perl (>= 5.10.0-19),
diff --git a/www/manager6/form/CompressionSelector.js 
b/www/manager6/form/CompressionSelector.js
index 8938fc0e..842b7710 100644
--- a/www/manager6/form/CompressionSelector.js
+++ b/www/manager6/form/CompressionSelector.js
@@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
 comboItems: [
 ['0', Proxmox.Utils.noneText],
 ['lzo', 'LZO (' + gettext('fast') + ')'],
-['gzip', 'GZIP (' + gettext('good') + ')']
+['gzip', 'GZIP (' + gettext('good') + ')'],
+['zstd', 'ZSTD (' + gettext('better') + ')'],
 ]
 });
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v5 00/17] Fix: #2124 zstd

2020-04-28 Thread Alwin Antreich
Zstandard (zstd) [0] is a data compression algorithm, added alongside
gzip and lzo for our backup/restore. It can utilize multiple CPU cores,
but by default it runs with one compression and one writer thread.


Here are some quick tests I made on my workstation. The files were
placed on a RAM disk and filled with dd from /dev/urandom and /dev/zero.

__Compression__
file size: 1073741824 bytes
             = urandom =           = zero =
  995ms   1073766414    328ms      98192   zstd -k
  732ms   1073766414    295ms      98192   zstd -k -T4
  906ms   1073791036    562ms    4894779   lzop -k
31992ms   1073915558   5594ms    1042087   gzip -k
30832ms   1074069541   5776ms    1171491   pigz -k -p 1
 7814ms   1074069541   1567ms    1171491   pigz -k -p 4

__Decompression__
file size: 1073741824 bytes
= urandom =   = zero =
    712ms        869ms   zstd -d
    685ms        872ms   zstd -k -d -T4
    841ms       2462ms   lzop -d
   5417ms       4754ms   gzip -k -d
   1248ms       3118ms   pigz -k -d -p 1
   1236ms       2379ms   pigz -k -d -p 4


And I used the same ramdisk to move a VM onto it and run a quick
backup/restore.

__vzdump backup__
INFO: transferred 34359 MB in 69 seconds (497 MB/s) zstd -T1
INFO: transferred 34359 MB in 37 seconds (928 MB/s) zstd -T4
INFO: transferred 34359 MB in 51 seconds (673 MB/s) lzo
INFO: transferred 34359 MB in 1083 seconds (31 MB/s) gzip
INFO: transferred 34359 MB in 241 seconds (142 MB/s) pigz -n 4

__qmrestore__
progress 100% (read 34359738368 bytes, duration 36 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) zstd -d -T4

progress 100% (read 34359738368 bytes, duration 38 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) lzo

progress 100% (read 34359738368 bytes, duration 175 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) pigz -n 4


v4 -> v5:
* fixup, use File::stat directly without overwriting CORE::stat,
  thanks Dietmar for pointing this out
  https://pve.proxmox.com/pipermail/pve-devel/2020-April/043134.html
* rebase to current master

v3 -> v4:
* fixed styling issues discovered by f.ebner (thanks)
* incorporated tests of d.jaeger into patches
* added fixes discovered by tests (f.ebner thanks)

v2 -> v3:
* split archive_info into decompressor_info and archive_info
* "compact" regex pattern is now a constant and used in
  multiple modules
* added tests for regex matching
* bug fix for ctime of backup files

v1 -> v2:
* factored out the decompressor info first, as Thomas suggested
* made the regex pattern of backup files more compact, easier to
  read (hopefully)
* less code changes for container restores

Thanks for any comment or suggestion in advance.

[0] https://facebook.github.io/zstd/

Alwin Antreich (17):
__pve-storage__
  storage: test: split archive format/compressor
  storage: replace build-in stat with File::stat
  test: parse_volname
  test: list_volumes
  Fix: backup: ctime was from stat not file name
  test: path_to_volume_id
  Fix: path_to_volume_id returned wrong content
  Fix: add missing snippets subdir
  backup: compact regex for backup file filter
  Fix: #2124 storage: add zstd support
  test: get_subdir
  test: filesystem_path

 test/Makefile  |   5 +-
 PVE/Diskmanage.pm  |   9 +-
 PVE/Storage.pm |  91 --
 PVE/Storage/Plugin.pm  |  47 ++-
 test/archive_info_test.pm  | 127 
 test/filesystem_path_test.pm   |  91 ++
 test/get_subdir_test.pm|  44 +++
 test/list_volumes_test.pm  | 537 +
 test/parse_volname_test.pm | 253 
 test/path_to_volume_id_test.pm | 274 +
 test/run_plugin_tests.pl   |  18 ++
 11 files changed, 1440 insertions(+), 56 deletions(-)
 create mode 100644 test/archive_info_test.pm
 create mode 100644 test/filesystem_path_test.pm
 create mode 100644 test/get_subdir_test.pm
 create mode 100644 test/list_volumes_test.pm
 create mode 100644 test/parse_volname_test.pm
 create mode 100644 test/path_to_volume_id_test.pm
 create mode 100755 test/run_plugin_tests.pl


__guest_common__
  Fix: #2124 add zstd support

 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)


__qemu-server__
  restore: replace archive format/compression
  Fix #2124: Add support for zstd

 PVE/QemuServer.pm | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)


__pve-container__
  Fix: #2124 add zstd

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)


__pve-manager__
  Fix #2124: Add support for zstd

 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH guest-common v5] Fix: #2124 add zstd support

2020-04-28 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..909e3af 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -88,7 +88,7 @@ my $confdesc = {
type => 'string',
description => "Compress dump file.",
optional => 1,
-   enum => ['0', '1', 'gzip', 'lzo'],
+   enum => ['0', '1', 'gzip', 'lzo', 'zstd'],
default => '0',
 },
 pigz=> {
@@ -98,6 +98,13 @@ my $confdesc = {
optional => 1,
default => 0,
 },
+zstd => {
+   type => "integer",
+   description => "Zstd threads. N=0 uses half of the available cores,".
+   " N>0 uses N as thread count.",
+   optional => 1,
+   default => 1,
+},
 quiet => {
type => 'boolean',
description => "Be quiet.",
-- 
2.20.1
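For illustration, a minimal sketch (hypothetical, not the actual vzdump code) of how the `zstd` thread option defined above could be turned into a `--threads` argument. The `get_cpu_count()` helper is a stand-in for a real CPU probe and is assumed here:

```perl
#!/usr/bin/perl
# Sketch only: maps the vzdump 'zstd' setting (N=0 => half the cores,
# N>0 => N threads) to a zstd command-line thread count.
use strict;
use warnings;

sub get_cpu_count { return 8; }  # hypothetical stand-in for a CPU probe

sub zstd_threads {
    my ($opt_zstd) = @_;
    $opt_zstd //= 1;                        # option default is 1
    return int((get_cpu_count() + 1) / 2)   # N=0: half of the available cores
        if $opt_zstd == 0;
    return $opt_zstd;                       # N>0: use N as thread count
}

printf "zstd --threads=%d\n", zstd_threads(0);  # half of 8 cores -> 4
printf "zstd --threads=%d\n", zstd_threads(3);  # explicit count -> 3
```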


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage] Fix #2705: cephfs: mount fails with bad option

2020-04-24 Thread Alwin Antreich
dmesg: libceph: bad option at 'conf=/etc/pve/ceph.conf'

After the upgrade to PVE 6 with Ceph Luminous, the mount.ceph helper
doesn't understand the conf= option yet, and mounting CephFS with the
kernel client fails. After upgrading to Ceph Nautilus the option exists
in the mount.ceph helper.

Signed-off-by: Alwin Antreich 
---
 PVE/CephConfig.pm   | 29 +
 PVE/Storage/CephFSPlugin.pm |  6 +-
 PVE/Storage/RBDPlugin.pm| 31 +--
 3 files changed, 35 insertions(+), 31 deletions(-)

diff --git a/PVE/CephConfig.pm b/PVE/CephConfig.pm
index 685bdae..1e95a90 100644
--- a/PVE/CephConfig.pm
+++ b/PVE/CephConfig.pm
@@ -255,4 +255,33 @@ sub ceph_remove_keyfile {
 }
 }
 
+my $ceph_version_parser = sub {
+my $ceph_version = shift;
+# FIXME this is the same as pve-manager PVE::Ceph::Tools get_local_version
+if ($ceph_version =~ /^ceph.*\s(\d+(?:\.\d+)+(?:-pve\d+)?)\s+(?:\(([a-zA-Z0-9]+)\))?/) {
+   my ($version, $buildcommit) = ($1, $2);
+   my $subversions = [ split(/\.|-/, $version) ];
+
+   return ($subversions, $version, $buildcommit);
+}
+warn "Could not parse Ceph version: '$ceph_version'\n";
+};
+
+sub ceph_version {
+my ($cache) = @_;
+
+my $version_string = $cache;
+if (!defined($version_string)) {
+   run_command('ceph --version', outfunc => sub {
+   $version_string = shift;
+   });
+}
+return undef if !defined($version_string);
+# subversion is an array ref. with the version parts from major to minor
+# version is the filtered version string
+my ($subversions, $version) = $ceph_version_parser->($version_string);
+
+return wantarray ? ($subversions, $version) : $version;
+}
+
 1;
diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
index 4aa9e96..54689ae 100644
--- a/PVE/Storage/CephFSPlugin.pm
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -80,6 +80,7 @@ EOF
 sub cephfs_mount {
 my ($scfg, $storeid) = @_;
 
+my ($subversions) = PVE::CephConfig::ceph_version();
 my $mountpoint = $scfg->{path};
 my $subdir = $scfg->{subdir} // '/';
 
@@ -98,7 +99,10 @@ sub cephfs_mount {
 } else {
push @opts, "name=$cmd_option->{userid}";
push @opts, "secretfile=$secretfile" if defined($secretfile);
-   push @opts, "conf=$configfile" if defined($configfile);
+   
+   # FIXME: remove subversion check in PVE 7.0, not needed for >= Nautilus
+   # Luminous doesn't know the conf option
+   push @opts, "conf=$configfile" if defined($configfile) && @$subversions[0] > 12;
 }
 
 push @opts, $scfg->{options} if $scfg->{options};
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 0a33ec0..7371721 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -77,7 +77,7 @@ my $librados_connect = sub {
 my $krbd_feature_update = sub {
 my ($scfg, $storeid, $name) = @_;
 
-my ($versionparts) = ceph_version();
+my ($versionparts) = PVE::CephConfig::ceph_version();
 return 1 if $versionparts->[0] < 10;
 
 my (@disable, @enable);
@@ -123,35 +123,6 @@ my $krbd_feature_update = sub {
 }
 };
 
-my $ceph_version_parser = sub {
-my $ceph_version = shift;
-# FIXME this is the same as pve-manager PVE::Ceph::Tools get_local_version
-if ($ceph_version =~ /^ceph.*\s(\d+(?:\.\d+)+(?:-pve\d+)?)\s+(?:\(([a-zA-Z0-9]+)\))?/) {
-   my ($version, $buildcommit) = ($1, $2);
-   my $subversions = [ split(/\.|-/, $version) ];
-
-   return ($subversions, $version, $buildcommit);
-}
-warn "Could not parse Ceph version: '$ceph_version'\n";
-};
-
-sub ceph_version {
-my ($cache) = @_;
-
-my $version_string = $cache;
-if (!defined($version_string)) {
-   run_command('ceph --version', outfunc => sub {
-   $version_string = shift;
-   });
-}
-return undef if !defined($version_string);
-# subversion is an array ref. with the version parts from major to minor
-# version is the filtered version string
-my ($subversions, $version) = $ceph_version_parser->($version_string);
-
-return wantarray ? ($subversions, $version) : $version;
-}
-
 sub run_rbd_command {
 my ($cmd, %args) = @_;
 
-- 
2.20.1
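As an aside, the version parser moved into PVE::CephConfig above can be exercised standalone. A small sketch (the version banner below is an example of typical `ceph --version` output, not taken from the patch) showing what the Nautilus check in cephfs_mount sees for a Luminous install:

```perl
#!/usr/bin/perl
# Sketch: run an example `ceph --version` line through the same regex
# as $ceph_version_parser and inspect the major version.
use strict;
use warnings;

sub parse_ceph_version {
    my ($line) = @_;
    if ($line =~ /^ceph.*\s(\d+(?:\.\d+)+(?:-pve\d+)?)\s+(?:\(([a-zA-Z0-9]+)\))?/) {
        my ($version, $buildcommit) = ($1, $2);
        my $subversions = [ split(/\.|-/, $version) ];
        return ($subversions, $version, $buildcommit);
    }
    return;  # no match: caller gets undef
}

# example Luminous banner (hypothetical hash)
my $banner = 'ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)';

my ($subversions, $version) = parse_ceph_version($banner);
print "major: $subversions->[0], version: $version\n";  # major: 12, version: 12.2.13
print "conf= supported\n" if $subversions->[0] > 12;    # not printed for Luminous
```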


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage v4 00/12] Fix: #2124 zstd

2020-04-23 Thread Alwin Antreich
On Thu, Apr 23, 2020 at 12:35:34PM +0200, Dominic Jäger wrote:
> Thank you for merging the test files!
No problem. I hope I didn't miss any. :)

> 
> Love the dropdown to set a compression:
>   GZIP (good)
>   ZSTD (better)
> 
> Tests work and creating and restoring backups in the GUI with the new option, 
> too.
Thanks for testing.

> 
> Tested-by: Dominic Jäger 
> 
> On Wed, Apr 22, 2020 at 04:57:51PM +0200, Alwin Antreich wrote:
> > Zstandard (zstd) [0] is a data compression algorithm, in addition to
> > gzip and lzo for our backup/restore. It can utilize multiple CPU cores. But
> > by default it has one compression and one writer thread.
> > 
> > 
> > [0] https://facebook.github.io/zstd/
> > 
> > Alwin Antreich (12):
> > __pve-storage__
> >   storage: test: split archive format/compressor
> >   storage: replace build-in stat with File::stat
> >   test: parse_volname
> >   test: list_volumes
> >   Fix: backup: ctime was from stat not file name
> >   test: path_to_volume_id
> >   Fix: path_to_volume_id returned wrong content
> >   Fix: add missing snippets subdir
> >   backup: compact regex for backup file filter
> >   Fix: #2124 storage: add zstd support
> >   test: get_subdir
> >   test: filesystem_path
> > 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage v4 02/12] storage: replace build-in stat with File::stat

2020-04-23 Thread Alwin Antreich
On Thu, Apr 23, 2020 at 05:52:29AM +0200, Dietmar Maurer wrote:
> > On April 22, 2020 6:00 PM Alwin Antreich  wrote:
> > 
> >  
> > On Wed, Apr 22, 2020 at 05:35:05PM +0200, Dietmar Maurer wrote:
> > > AFAIK this can have ugly side effects ...
> > Okay, I was not aware of any known side effects.
> > 
> > I took the File::stat, since we use it already in pve-cluster,
> > qemu-server, pve-common, ... . And an off-list discussion with Thomas and
> > Fabian G.
> > 
> > If there is a better solution, I am happy to work on it.
> 
> 
> # grep -r "use File::stat" /usr/share/perl5/PVE/
> /usr/share/perl5/PVE/QemuServer/Helpers.pm:use File::stat;
> /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm:use File::stat;
> /usr/share/perl5/PVE/APIServer/AnyEvent.pm:use File::stat qw();
> /usr/share/perl5/PVE/AccessControl.pm:use File::stat;
> /usr/share/perl5/PVE/Cluster.pm:use File::stat qw();
> /usr/share/perl5/PVE/LXC/Setup/Base.pm:use File::stat;
> /usr/share/perl5/PVE/QemuServer.pm:use File::stat;
> /usr/share/perl5/PVE/INotify.pm:use File::stat;
> /usr/share/perl5/PVE/API2/APT.pm:use File::stat ();
> 
> So I would use:
> 
> use File::stat qw();
> 
> to avoid override the core stat() and lstat() functions.
Thank you. I will do that and add it to a v5.
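For context, a minimal sketch of the difference being discussed: a bare `use File::stat;` replaces Perl's core `stat()`/`lstat()` with object-returning versions for the whole package, while `use File::stat qw();` imports nothing, so the core builtins stay intact and the object interface is called fully qualified:

```perl
#!/usr/bin/perl
# Sketch: File::stat with an empty import list leaves the core stat()
# untouched; the object interface is still available fully qualified.
use strict;
use warnings;

use File::stat qw();   # no exports: core stat() is NOT overridden

my $file = $0;         # stat this script itself

# core stat(): classic 13-element list, size is element 7
my $core_size = (stat($file))[7];

# object interface, explicitly qualified
my $st = File::stat::stat($file);
my $obj_size = $st->size;

print "match\n" if $core_size == $obj_size;
```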

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage v4 02/12] storage: replace build-in stat with File::stat

2020-04-22 Thread Alwin Antreich
On Wed, Apr 22, 2020 at 05:35:05PM +0200, Dietmar Maurer wrote:
> AFAIK this can have ugly side effects ...
Okay, I was not aware of any known side effects.

I took the File::stat, since we use it already in pve-cluster,
qemu-server, pve-common, ... . And an off-list discussion with Thomas and
Fabian G.

If there is a better solution, I am happy to work on it.

> 
> > On April 22, 2020 4:57 PM Alwin Antreich  wrote:
> > 
> >  
> > to minimize variable declarations. And allow to mock this method in
> > tests instead of the perl build-in stat.
> > 
> > Signed-off-by: Alwin Antreich 
> > ---
> >  PVE/Diskmanage.pm |  9 +
> >  PVE/Storage/Plugin.pm | 34 ++
> >  2 files changed, 15 insertions(+), 28 deletions(-)
> > 
> > diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
> > index 13e7cd8..cac944d 100644
> > --- a/PVE/Diskmanage.pm
> > +++ b/PVE/Diskmanage.pm
> > @@ -6,6 +6,7 @@ use PVE::ProcFSTools;
> >  use Data::Dumper;
> >  use Cwd qw(abs_path);
> >  use Fcntl ':mode';
> > +use File::stat;
> >  use JSON;
> >  
> >  use PVE::Tools qw(extract_param run_command file_get_contents 
> > file_read_firstline dir_glob_regex dir_glob_foreach trim);
> > @@ -673,11 +674,11 @@ sub get_disks {
> >  sub get_partnum {
> >  my ($part_path) = @_;
> >  
> > -my ($mode, $rdev) = (stat($part_path))[2,6];
> > +my $st = stat($part_path);
> >  
> > -next if !$mode || !S_ISBLK($mode) || !$rdev;
> > -my $major = PVE::Tools::dev_t_major($rdev);
> > -my $minor = PVE::Tools::dev_t_minor($rdev);
> > +next if !$st->mode || !S_ISBLK($st->mode) || !$st->rdev;
> > +my $major = PVE::Tools::dev_t_major($st->rdev);
> > +my $minor = PVE::Tools::dev_t_minor($st->rdev);
> >  my $partnum_path = "/sys/dev/block/$major:$minor/";
> >  
> >  my $partnum;
> > diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> > index 4489a77..d2dfad6 100644
> > --- a/PVE/Storage/Plugin.pm
> > +++ b/PVE/Storage/Plugin.pm
> > @@ -7,6 +7,7 @@ use Fcntl ':mode';
> >  use File::chdir;
> >  use File::Path;
> >  use File::Basename;
> > +use File::stat;
> >  use Time::Local qw(timelocal);
> >  
> >  use PVE::Tools qw(run_command);
> > @@ -718,12 +719,10 @@ sub free_image {
> >  sub file_size_info {
> >  my ($filename, $timeout) = @_;
> >  
> > -my @fs = stat($filename);
> > -my $mode = $fs[2];
> > -my $ctime = $fs[10];
> > +my $st = stat($filename);
> >  
> > -if (S_ISDIR($mode)) {
> > -   return wantarray ? (0, 'subvol', 0, undef, $ctime) : 1;
> > +if (S_ISDIR($st->mode)) {
> > +   return wantarray ? (0, 'subvol', 0, undef, $st->ctime) : 1;
> >  }
> >  
> >  my $json = '';
> > @@ -741,7 +740,7 @@ sub file_size_info {
> >  
> >  my ($size, $format, $used, $parent) = $info->@{qw(virtual-size format 
> > actual-size backing-filename)};
> >  
> > -return wantarray ? ($size, $format, $used, $parent, $ctime) : $size;
> > +return wantarray ? ($size, $format, $used, $parent, $st->ctime) : 
> > $size;
> >  }
> >  
> >  sub volume_size_info {
> > @@ -918,22 +917,9 @@ my $get_subdir_files = sub {
> >  
> >  foreach my $fn (<$path/*>) {
> >  
> > -   my ($dev,
> > -   $ino,
> > -   $mode,
> > -   $nlink,
> > -   $uid,
> > -   $gid,
> > -   $rdev,
> > -   $size,
> > -   $atime,
> > -   $mtime,
> > -   $ctime,
> > -   $blksize,
> > -   $blocks
> > -   ) = stat($fn);
> > -
> > -   next if S_ISDIR($mode);
> > +   my $st = stat($fn);
> > +
> > +   next if S_ISDIR($st->mode);
> >  
> > my $info;
> >  
> > @@ -972,8 +958,8 @@ my $get_subdir_files = sub {
> > };
> > }
> >  
> > -   $info->{size} = $size;
> > -   $info->{ctime} //= $ctime;
> > +   $info->{size} = $st->size;
> > +   $info->{ctime} //= $st->ctime;
> >  
> > push @$res, $info;
> >  }
> > -- 
> > 2.20.1
> > 
> > 
> > ___
> > pve-devel mailing list
> > pve-devel@pve.proxmox.com
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH container v4] Fix: #2124 add zstd

2020-04-22 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 52b0b48..39902a2 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -123,6 +123,7 @@ sub restore_tar_archive {
'.bz2' => '-j',
'.xz'  => '-J',
'.lzo'  => '--lzop',
+   '.zst'  => '--zstd',
);
if ($archive =~ /\.tar(\.[^.]+)?$/) {
if (defined($1)) {
-- 
2.20.1
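The one-line change above extends the suffix-to-flag dispatch in restore_tar_archive. A self-contained sketch of that dispatch (archive name below is a made-up example):

```perl
#!/usr/bin/perl
# Sketch: map a tar archive suffix to the matching decompression flag,
# now including '.zst' for zstd-compressed container backups.
use strict;
use warnings;

my %compression_map = (
    '.gz'  => '-z',
    '.bz2' => '-j',
    '.xz'  => '-J',
    '.lzo' => '--lzop',
    '.zst' => '--zstd',
);

my $archive = 'vzdump-lxc-100-2020_04_22-12_00_00.tar.zst';  # example name

my $flag = '';
if ($archive =~ /\.tar(\.[^.]+)?$/) {
    # $1 is the compression suffix, if any; plain .tar needs no flag
    $flag = defined($1) ? $compression_map{$1} : '';
}
print "tar $flag -xf $archive\n";  # tar --zstd -xf vzdump-lxc-100-...
```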


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v4 00/12] Fix: #2124 zstd

2020-04-22 Thread Alwin Antreich
Zstandard (zstd) [0] is a data compression algorithm, in addition to
gzip and lzo for our backup/restore. It can utilize multiple CPU cores. But
by default it has one compression and one writer thread.


Here are some quick tests I made on my workstation. The files were placed
on a RAM disk and filled with dd from /dev/urandom and /dev/zero.

__Compression__
file size: 1073741824 bytes
           = urandom =             = zero =
   995ms  1073766414      328ms       98192   zstd -k
   732ms  1073766414      295ms       98192   zstd -k -T4
   906ms  1073791036      562ms     4894779   lzop -k
 31992ms  1073915558     5594ms     1042087   gzip -k
 30832ms  1074069541     5776ms     1171491   pigz -k -p 1
  7814ms  1074069541     1567ms     1171491   pigz -k -p 4

__Decompression__
file size: 1073741824 bytes
= urandom =   = zero =
   712ms  869ms  zstd -d
   685ms  872ms  zstd -k -d -T4
   841ms 2462ms  lzop -d
  5417ms 4754ms  gzip -k -d
  1248ms 3118ms  pigz -k -d -p 1
  1236ms 2379ms  pigz -k -d -p 4


And I used the same ramdisk to move a VM onto it and run a quick
backup/restore.

__vzdump backup__
INFO: transferred 34359 MB in 69 seconds (497 MB/s) zstd -T1
INFO: transferred 34359 MB in 37 seconds (928 MB/s) zstd -T4
INFO: transferred 34359 MB in 51 seconds (673 MB/s) lzo
INFO: transferred 34359 MB in 1083 seconds (31 MB/s) gzip
INFO: transferred 34359 MB in 241 seconds (142 MB/s) pigz -n 4

__qmrestore__
progress 100% (read 34359738368 bytes, duration 36 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) zstd -d -T4

progress 100% (read 34359738368 bytes, duration 38 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) lzo

progress 100% (read 34359738368 bytes, duration 175 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) pigz -n 4


v3 -> v4:
* fixed styling issues discovered by f.ebner (thanks)
* incorporated tests of d.jaeger into patches
* added fixes discovered by tests (f.ebner thanks)

v2 -> v3:
* split archive_info into decompressor_info and archive_info
* "compact" regex pattern is now a constant and used in
  multiple modules
* added tests for regex matching
* bug fix for ctime of backup files

v1 -> v2:
* factored out the decompressor info first, as Thomas suggested
* made the regex pattern of backup files more compact, easier to
  read (hopefully)
* less code changes for container restores

Thanks for any comment or suggestion in advance.

[0] https://facebook.github.io/zstd/

Alwin Antreich (12):
__pve-storage__
  storage: test: split archive format/compressor
  storage: replace build-in stat with File::stat
  test: parse_volname
  test: list_volumes
  Fix: backup: ctime was from stat not file name
  test: path_to_volume_id
  Fix: path_to_volume_id returned wrong content
  Fix: add missing snippets subdir
  backup: compact regex for backup file filter
  Fix: #2124 storage: add zstd support
  test: get_subdir
  test: filesystem_path

 test/Makefile  |   5 +-
 PVE/Diskmanage.pm  |   9 +-
 PVE/Storage.pm |  91 --
 PVE/Storage/Plugin.pm  |  47 ++-
 test/archive_info_test.pm  | 127 
 test/filesystem_path_test.pm   |  91 ++
 test/get_subdir_test.pm|  44 +++
 test/list_volumes_test.pm  | 537 +
 test/parse_volname_test.pm | 253 
 test/path_to_volume_id_test.pm | 274 +
 test/run_plugin_tests.pl   |  18 ++
 11 files changed, 1440 insertions(+), 56 deletions(-)
 create mode 100644 test/archive_info_test.pm
 create mode 100644 test/filesystem_path_test.pm
 create mode 100644 test/get_subdir_test.pm
 create mode 100644 test/list_volumes_test.pm
 create mode 100644 test/parse_volname_test.pm
 create mode 100644 test/path_to_volume_id_test.pm
 create mode 100755 test/run_plugin_tests.pl


__qemu-server__
  restore: replace archive format/compression
  Fix #2124: Add support for zstd

 PVE/QemuServer.pm | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)


__pve-container__
  Fix: #2124 add zstd

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)


__guest_common__
  Fix: #2124 add zstd support

 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)


__pve-manager__
  Fix #2124: Add support for zstd

 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v4 05/12] Fix: backup: ctime was from stat not file name

2020-04-22 Thread Alwin Antreich
The vzdump file was passed with the full path to the regex. That regex
captures the time from the file name, to calculate the epoch.

As the regex didn't match, the ctime from stat was taken instead. This
resulted in the ctime shown when the file was changed, not when the
backup was made.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm |  3 ++-
 test/list_volumes_test.pm | 16 
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 71a83f7..9dde46e 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -942,7 +942,8 @@ my $get_subdir_files = sub {
next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
 
my $format = $2;
-   $info = { volid => "$sid:backup/$1", format => $format };
+   $fn = $1;
+   $info = { volid => "$sid:backup/$fn", format => $format };
 
if ($fn =~ m!^vzdump\-(?:lxc|qemu)\-(?:[1-9][0-9]{2,8})\-(\d{4})_(\d{2})_(\d{2})\-(\d{2})_(\d{2})_(\d{2})\.${format}$!) {
my $epoch = timelocal($6, $5, $4, $3, $2-1, $1 - 1900);
diff --git a/test/list_volumes_test.pm b/test/list_volumes_test.pm
index c1428f2..ac0503e 100644
--- a/test/list_volumes_test.pm
+++ b/test/list_volumes_test.pm
@@ -129,7 +129,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585595500,
'format'  => 'vma.gz',
'size'=> DEFAULT_SIZE,
'vmid'=> '16110',
@@ -137,7 +137,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585595565,
'format'  => 'vma.lzo',
'size'=> DEFAULT_SIZE,
'vmid'=> '16110',
@@ -145,7 +145,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585595635,
'format'  => 'vma',
'size'=> DEFAULT_SIZE,
'vmid'=> '16110',
@@ -189,7 +189,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585597170,
'format'  => 'tar.lzo',
'size'=> DEFAULT_SIZE,
'vmid'=> '16112',
@@ -197,7 +197,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585597770,
'format'  => 'tar.gz',
'size'=> DEFAULT_SIZE,
'vmid'=> '16112',
@@ -205,7 +205,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1585598370,
'format'  => 'tgz',
'size'=> DEFAULT_SIZE,
'vmid'=> '16112',
@@ -347,7 +347,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1580756263,
'format'  => 'tar.gz',
'size'=> DEFAULT_SIZE,
'vmid'=> '19253',
@@ -355,7 +355,7 @@ my @tests = (
},
{
'content' => 'backup',
-   'ctime'   => DEFAULT_CTIME,
+   'ctime'   => 1548095359,
'format'  => 'tar',
'size'=> DEFAULT_SIZE,
'vmid'=> '19254',
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v4 06/12] test: path_to_volume_id

2020-04-22 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/path_to_volume_id_test.pm | 242 +
 test/run_plugin_tests.pl   |   2 +-
 2 files changed, 243 insertions(+), 1 deletion(-)
 create mode 100644 test/path_to_volume_id_test.pm

diff --git a/test/path_to_volume_id_test.pm b/test/path_to_volume_id_test.pm
new file mode 100644
index 000..744c3ee
--- /dev/null
+++ b/test/path_to_volume_id_test.pm
@@ -0,0 +1,242 @@
+package PVE::Storage::TestPathToVolumeId;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+
+use Test::More;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+use File::Temp;
+
+my $storage_dir = File::Temp->newdir();
+my $scfg = {
+'digest' => 'd29306346b8b25b90a4a96165f1e8f52d1af1eda',
+'ids'=> {
+   'local' => {
+   'shared'   => 0,
+   'path' => "$storage_dir",
+   'type' => 'dir',
+   'maxfiles' => 0,
+   'content'  => {
+   'snippets' => 1,
+   'rootdir'  => 1,
+   'images'   => 1,
+   'iso'  => 1,
+   'backup'   => 1,
+   'vztmpl'   => 1,
+   },
+   },
+},
+'order' => {
+   'local' => 1,
+},
+};
+
+# the tests array consists of hashes with the following keys:
+# description => to identify the test case
+# volname => to create the test file
+# expected=> the result that path_to_volume_id should return
+my @tests = (
+{
+   description => 'Image, qcow2',
+   volname => "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
+   expected=> [
+   'images',
+   'local:16110/vm-16110-disk-0.qcow2',
+   ],
+},
+{
+   description => 'Image, raw',
+   volname => "$storage_dir/images/16112/vm-16112-disk-0.raw",
+   expected=> [
+   'images',
+   'local:16112/vm-16112-disk-0.raw',
+   ],
+},
+{
+   description => 'Image template, qcow2',
+   volname => "$storage_dir/images/9004/base-9004-disk-0.qcow2",
+   expected=> [
+   'images',
+   'local:9004/base-9004-disk-0.qcow2',
+   ],
+},
+
+{
+   description => 'Backup, vma.gz',
+   volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
+   ],
+},
+{
+   description => 'Backup, vma.lzo',
+   volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo',
+   ],
+},
+{
+   description => 'Backup, vma',
+   volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
+   ],
+},
+{
+   description => 'Backup, tar.lzo',
+   volname => "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
+   expected=> [
+   'iso',
+   'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo',
+   ],
+},
+
+{
+   description => 'ISO file',
+   volname => "$storage_dir/template/iso/yet-again-a-installation-disk.iso",
+   expected=> [
+   'iso',
+   'local:iso/yet-again-a-installation-disk.iso',
+   ],
+},
+{
+   description => 'CT template, tar.gz',
+   volname => "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz",
+   expected=> [
+   'vztmpl',
+   'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
+   ],
+},
+
+{
+   description => 'Rootdir',
+   volname => "$storage_dir/private/1234/", # fileparse needs / at the end
+   expected=> [
+   'rootdir',
+   'local:rootdir/1234',
+   ],
+},
+{
+   description => 'Rootdir, folder subvol',
+   volname => "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", # fileparse needs / at the end
+   expected=> [
+   'images',
+   'local:1234/subvol-1234-disk-0.subvol'
+   ],
+},
+
+# no matches
+{
+   description => 'Snippets, yaml',
+   volname => "$storage_dir/snippets/userconfig.yaml",
+   expected => [''],
+},
+{
+   description => 'Snippets, hookscript',
+   volname => "

[pve-devel] [PATCH storage v4 12/12] test: filesystem_path

2020-04-22 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 test/filesystem_path_test.pm | 91 
 test/run_plugin_tests.pl |  1 +
 2 files changed, 92 insertions(+)
 create mode 100644 test/filesystem_path_test.pm

diff --git a/test/filesystem_path_test.pm b/test/filesystem_path_test.pm
new file mode 100644
index 000..c1b6d90
--- /dev/null
+++ b/test/filesystem_path_test.pm
@@ -0,0 +1,91 @@
+package PVE::Storage::TestFilesystemPath;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my $path = '/some/path';
+
+# each array entry is a test that consists of the following keys:
+# volname  => image name that is passed to parse_volname
+# snapname => to test the die condition
+# expected => the array of return values; or the die message
+my $tests = [
+{
+   volname  => '1234/vm-1234-disk-0.raw',
+   snapname => undef,
+   expected => [
+   "$path/images/1234/vm-1234-disk-0.raw",
+   '1234',
+   'images'
+   ],
+},
+{
+   volname  => '1234/vm-1234-disk-0.raw',
+   snapname => 'my_snap',
+   expected => "can't snapshot this image format\n"
+},
+{
+   volname  => '1234/vm-1234-disk-0.qcow2',
+   snapname => undef,
+   expected => [
+   "$path/images/1234/vm-1234-disk-0.qcow2",
+   '1234',
+   'images'
+   ],
+},
+{
+   volname  => '1234/vm-1234-disk-0.qcow2',
+   snapname => 'my_snap',
+   expected => [
+   "$path/images/1234/vm-1234-disk-0.qcow2",
+   '1234',
+   'images'
+   ],
+},
+{
+   volname  => 'iso/my-awesome-proxmox.iso',
+   snapname => undef,
+   expected => [
+   "$path/template/iso/my-awesome-proxmox.iso",
+   undef,
+   'iso'
+   ],
+},
+{
+   volname  => "backup/vzdump-qemu-1234-2020_03_30-21_12_40.vma",
+   snapname => undef,
+   expected => [
+   "$path/dump/vzdump-qemu-1234-2020_03_30-21_12_40.vma",
+   1234,
+   'backup'
+   ],
+},
+];
+
+plan tests => scalar @$tests;
+
+foreach my $tt (@$tests) {
+my $volname = $tt->{volname};
+my $snapname = $tt->{snapname};
+my $expected = $tt->{expected};
+my $scfg = { path => $path };
+my $got;
+
+eval {
+   $got = [ PVE::Storage::Plugin->filesystem_path($scfg, $volname, $snapname) ];
+};
+$got = $@ if $@;
+
+is_deeply($got, $expected, "wantarray: filesystem_path for $volname")
+|| diag(explain($got));
+
+}
+
+done_testing();
+
+1;
diff --git a/test/run_plugin_tests.pl b/test/run_plugin_tests.pl
index 9e427eb..e29fc88 100755
--- a/test/run_plugin_tests.pl
+++ b/test/run_plugin_tests.pl
@@ -12,6 +12,7 @@ my $res = $harness->runtests(
 "list_volumes_test.pm",
 "path_to_volume_id_test.pm",
 "get_subdir_test.pm",
+"filesystem_path_test.pm",
 );
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v4 11/12] test: get_subdir

2020-04-22 Thread Alwin Antreich
Co-Authored-by: Dominic Jaeger 
Signed-off-by: Alwin Antreich 
---
 test/get_subdir_test.pm  | 44 
 test/run_plugin_tests.pl |  1 +
 2 files changed, 45 insertions(+)
 create mode 100644 test/get_subdir_test.pm

diff --git a/test/get_subdir_test.pm b/test/get_subdir_test.pm
new file mode 100644
index 000..576c475
--- /dev/null
+++ b/test/get_subdir_test.pm
@@ -0,0 +1,44 @@
+package PVE::Storage::TestGetSubdir;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage::Plugin;
+use Test::More;
+
+my $scfg_with_path = { path => '/some/path' };
+my $vtype_subdirs = PVE::Storage::Plugin::get_vtype_subdirs();
+
+# each test is comprised of the following array keys:
+# [0] => storage config; positive with path key
+# [1] => storage type;  see $vtype_subdirs
+# [2] => expected return from get_subdir
+my $tests = [
+# failed matches
+[ $scfg_with_path, 'none', "unknown vtype 'none'\n" ],
+[ {}, 'iso', "storage definintion has no path\n" ],
+];
+
+# creates additional positive tests
+foreach my $type (keys %$vtype_subdirs) {
+my $path = "$scfg_with_path->{path}/$vtype_subdirs->{$type}";
+push @$tests, [ $scfg_with_path, $type, $path ];
+}
+
+plan tests => scalar @$tests;
+
+foreach my $tt (@$tests) {
+my ($scfg, $type, $expected) = @$tt;
+
+my $got;
+eval { $got = PVE::Storage::Plugin->get_subdir($scfg, $type) };
+$got = $@ if $@;
+
+is ($got, $expected, "get_subdir for $type") || diag(explain($got));
+}
+
+done_testing();
+
+1;
diff --git a/test/run_plugin_tests.pl b/test/run_plugin_tests.pl
index 770b407..9e427eb 100755
--- a/test/run_plugin_tests.pl
+++ b/test/run_plugin_tests.pl
@@ -11,6 +11,7 @@ my $res = $harness->runtests(
 "parse_volname_test.pm",
 "list_volumes_test.pm",
 "path_to_volume_id_test.pm",
+"get_subdir_test.pm",
 );
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v4 04/12] test: list_volumes

2020-04-22 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Co-Authored-by: Dominic Jaeger 
Signed-off-by: Alwin Antreich 
---
 test/list_volumes_test.pm | 519 ++
 test/run_plugin_tests.pl  |   6 +-
 2 files changed, 524 insertions(+), 1 deletion(-)
 create mode 100644 test/list_volumes_test.pm

diff --git a/test/list_volumes_test.pm b/test/list_volumes_test.pm
new file mode 100644
index 000..c1428f2
--- /dev/null
+++ b/test/list_volumes_test.pm
@@ -0,0 +1,519 @@
+package PVE::Storage::TestListVolumes;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use PVE::Cluster;
+use PVE::Tools qw(run_command);
+
+use Test::More;
+use Test::MockModule;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+use File::stat;
+use File::Temp;
+use Storable qw(dclone);
+
+use constant DEFAULT_SIZE => 131072; # 128 kiB
+use constant DEFAULT_USED => 262144; # 256 kiB
+use constant DEFAULT_CTIME => 1234567890;
+
+# get_vmlist() return values
+my $mocked_vmlist = {
+'version' => 1,
+'ids' => {
+   '16110' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 4,
+   },
+   '16112' => {
+   'node'=> 'x42',
+   'type'=> 'lxc',
+   'version' => 7,
+   },
+   '16114' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 2,
+   },
+   '16113' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 5,
+   },
+   '16115' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 1,
+   },
+   '9004' => {
+   'node'=> 'x42',
+   'type'=> 'qemu',
+   'version' => 6,
+   }
+}
+};
+
+my $storage_dir = File::Temp->newdir();
+my $scfg = {
+'type' => 'dir',
+'maxfiles' => 0,
+'path' => $storage_dir,
+'shared'   => 0,
+'content'  => {
+   'iso'  => 1,
+   'rootdir'  => 1,
+   'vztmpl'   => 1,
+   'images'   => 1,
+   'snippets' => 1,
+   'backup'   => 1,
+},
+};
+
+# The test cases are comprised of an array of hashes with the following keys:
+# description => displayed on error by Test::More
+# vmid=> used for image matches by list_volume
+# files   => array of files for qemu-img to create
+# expected=> returned result hash
+#(content, ctime, format, parent, size, used, vmid, volid)
+my @tests = (
+{
+   description => 'VMID: 16110, VM, qcow2, backup, snippets',
+   vmid => '16110',
+   files => [
+   "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
+   "$storage_dir/images/16110/vm-16110-disk-1.raw",
+   "$storage_dir/images/16110/vm-16110-disk-2.vmdk",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   "$storage_dir/snippets/userconfig.yaml",
+   "$storage_dir/snippets/hookscript.pl",
+   ],
+   expected => [
+   {
+   'content' => 'images',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'qcow2',
+   'parent'  => undef,
+   'size'=> DEFAULT_SIZE,
+   'used'=> DEFAULT_USED,
+   'vmid'=> '16110',
+   'volid'   => 'local:16110/vm-16110-disk-0.qcow2',
+   },
+   {
+   'content' => 'images',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'raw',
+   'parent'  => undef,
+   'size'=> DEFAULT_SIZE,
+   'used'=> DEFAULT_USED,
+   'vmid'=> '16110',
+   'volid'   => 'local:16110/vm-16110-disk-1.raw',
+   },
+   {
+   'content' => 'images',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'vmdk',
+   'parent'  => undef,
+   'size'=> DEFAULT_SIZE,
+   'used'=> DEFAULT_USED,
+   'vmid'=> '16110',
+   'volid'   => 'local:16110/vm-16110-disk-2.vmdk',
+   },
+   {
+   'content' => 'backup',
+   'ctime'   => DEFAULT_CTIME,
+   'format'  => 'vma.gz',
+   'size'=> DEFAULT_SIZE,
+   'vmid'=> '16110',
+   'volid'   => 'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
+

[pve-devel] [PATCH storage v4 10/12] Fix: #2124 storage: add zstd support

2020-04-22 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm |  4 +++-
 PVE/Storage/Plugin.pm  |  2 +-
 test/archive_info_test.pm  |  4 +++-
 test/list_volumes_test.pm  | 18 ++
 test/parse_volname_test.pm |  6 +++---
 test/path_to_volume_id_test.pm | 16 
 6 files changed, 44 insertions(+), 6 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0b2745e..87550b1 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1366,10 +1366,12 @@ sub decompressor_info {
tar => {
gz => ['tar', '-z'],
lzo => ['tar', '--lzop'],
+   zst => ['tar', '--zstd'],
},
vma => {
gz => ['zcat'],
lzo => ['lzop', '-d', '-c'],
+   zst => ['zstd', '-q', '-d', '-c'],
},
 };
 
@@ -1460,7 +1462,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/) {
+   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: $output\n";
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 368d805..9623825 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -18,7 +18,7 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
-use constant COMPRESSOR_RE => 'gz|lzo';
+use constant COMPRESSOR_RE => 'gz|lzo|zst';
 
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
diff --git a/test/archive_info_test.pm b/test/archive_info_test.pm
index c9bb1b7..283fe47 100644
--- a/test/archive_info_test.pm
+++ b/test/archive_info_test.pm
@@ -45,10 +45,12 @@ my $decompressor = {
 tar => {
gz  => ['tar', '-z'],
lzo => ['tar', '--lzop'],
+   zst => ['tar', '--zstd'],
 },
 vma => {
gz  => ['zcat'],
lzo => ['lzop', '-d', '-c'],
+   zst => ['zstd', '-q', '-d', '-c'],
 },
 };
 
@@ -85,7 +87,7 @@ foreach my $virt (keys %$bkp_suffix) {
 my $non_bkp_suffix = {
 'openvz' => [ 'zip', 'tgz.lzo', 'tar.bz2', 'zip.gz', '', ],
 'lxc'=> [ 'zip', 'tgz.lzo', 'tar.bz2', 'zip.gz', '', ],
-'qemu'   => [ 'vma.xz', 'vms.gz', '', ],
+'qemu'   => [ 'vma.xz', 'vms.gz', 'vmx.zst', '', ],
 'none'   => [ 'tar.gz', ],
 };
 
diff --git a/test/list_volumes_test.pm b/test/list_volumes_test.pm
index ac0503e..84b6c08 100644
--- a/test/list_volumes_test.pm
+++ b/test/list_volumes_test.pm
@@ -93,6 +93,7 @@ my @tests = (
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
"$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst",
"$storage_dir/snippets/userconfig.yaml",
"$storage_dir/snippets/hookscript.pl",
],
@@ -151,6 +152,14 @@ my @tests = (
'vmid'=> '16110',
'volid'   => 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
},
+   {
+   'content' => 'backup',
+   'ctime'   => 1585595635,
+   'format'  => 'vma.zst',
+   'size'=> DEFAULT_SIZE,
+   'vmid'=> '16110',
+   'volid'   => 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma.zst',
+   },
{
'content' => 'snippets',
'ctime'   => DEFAULT_CTIME,
@@ -174,6 +183,7 @@ my @tests = (
"$storage_dir/images/16112/vm-16112-disk-0.raw",
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz",
+   "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_49_30.tar.zst",
"$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_59_30.tgz",
],
expected => [
@@ -203,6 +213,14 @@ my @tests = (
'vmid'=> '16112',
'volid'   => 'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz',
},
+   {
+   'content' => 'backup',
+   'ctime'   => 1585597770,
+   'format'  => 'tar.zst',
+   'size'=> DEFAULT_SIZE,
+   'vmid'=> '16112',
+   'volid'   => 'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.zst',
+   },
{
'content

[pve-devel] [PATCH storage v4 08/12] Fix: add missing snippets subdir

2020-04-22 Thread Alwin Antreich
since it is a valid content type; the path_to_volume_id_test is adapted
accordingly. Also adds an extra check that all vtype_subdirs are returned.
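The completeness check described above can be sketched standalone as follows (the vtype names are assumed from PVE::Storage::Plugin's $vtype_subdirs; the real test fills the seen-set from the expected result of each test case instead of a fixed list):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;

# Full set of content types a dir storage knows about (assumed names).
my $vtype_subdirs = { map { $_ => 1 } qw(images rootdir vztmpl iso backup snippets) };

# Record every vtype a test case produced; in the real test this happens
# inside the loop over @tests.
my $seen_vtype = {};
$seen_vtype->{$_} = 1 for qw(images rootdir vztmpl iso backup snippets);

# If a vtype is never produced by any test (or not handled by the code
# under test), the sets differ and this final check fails.
is_deeply($seen_vtype, $vtype_subdirs, 'vtype_subdir check');
```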

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm |  4 
 test/path_to_volume_id_test.pm | 26 +-
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 1ef5ed2..5df074d 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -512,6 +512,7 @@ sub path_to_volume_id {
my $tmpldir = $plugin->get_subdir($scfg, 'vztmpl');
my $backupdir = $plugin->get_subdir($scfg, 'backup');
my $privatedir = $plugin->get_subdir($scfg, 'rootdir');
+   my $snippetsdir = $plugin->get_subdir($scfg, 'snippets');
 
if ($path =~ m!^$imagedir/(\d+)/([^/\s]+)$!) {
my $vmid = $1;
@@ -537,6 +538,9 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
my $name = $1;
return ('backup', "$sid:backup/$name");
+   } elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
+   my $name = $1;
+   return ('snippets', "$sid:snippets/$name");
}
 }
 
diff --git a/test/path_to_volume_id_test.pm b/test/path_to_volume_id_test.pm
index 7d69869..e5e24c1 100644
--- a/test/path_to_volume_id_test.pm
+++ b/test/path_to_volume_id_test.pm
@@ -134,18 +134,24 @@ my @tests = (
'local:1234/subvol-1234-disk-0.subvol'
],
 },
-
-# no matches
 {
description => 'Snippets, yaml',
volname => "$storage_dir/snippets/userconfig.yaml",
-   expected => [''],
+   expected => [
+   'snippets',
+   'local:snippets/userconfig.yaml',
+   ],
 },
 {
description => 'Snippets, hookscript',
volname => "$storage_dir/snippets/hookscript.pl",
-   expected=> [''],
+   expected=> [
+   'snippets',
+   'local:snippets/hookscript.pl',
+   ],
 },
+
+# no matches
 {
description => 'CT template, tar.xz',
volname => "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.xz",
@@ -210,7 +216,10 @@ my @tests = (
 },
 );
 
-plan tests => scalar @tests;
+plan tests => scalar @tests + 1;
+
+my $seen_vtype;
+my $vtype_subdirs = { map { $_ => 1 } keys %{ PVE::Storage::Plugin::get_vtype_subdirs() } };
 
 foreach my $tt (@tests) {
 my $file = $tt->{volname};
@@ -232,8 +241,15 @@ foreach my $tt (@tests) {
 $got = $@ if $@;
 
 is_deeply($got, $expected, $description) || diag(explain($got));
+
+$seen_vtype->{@$expected[0]} = 1
+   if ( @$expected[0] ne '' && scalar @$expected > 1);
 }
 
+# to check if all $vtype_subdirs are defined in path_to_volume_id
+# or have a test
+is_deeply($seen_vtype, $vtype_subdirs, "vtype_subdir check");
+
 #cleanup
 # File::Temp unlinks tempdir on exit
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v4 2/2] Fix #2124: Add support for zstd

2020-04-22 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 265d4f8..fda1acb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7165,7 +7165,7 @@ sub complete_backup_archives {
 my $res = [];
 foreach my $id (keys %$data) {
foreach my $item (@{$data->{$id}}) {
-   next if $item->{format} !~ m/^vma\.(gz|lzo)$/;
+   next if $item->{format} !~ 
m/^vma\.(${\PVE::Storage::Plugin::COMPRESSOR_RE})$/;
push @$res, $item->{volid} if defined($item->{volid});
}
 }
-- 
2.20.1




[pve-devel] [PATCH storage v4 03/12] test: parse_volname

2020-04-22 Thread Alwin Antreich
Tests to reduce the potential for accidental breakage on regex changes,
and to make sure that all vtype_subdirs are parsed.
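For orientation, the kind of dispatch the tests below exercise can be sketched standalone (only two of the branches re-implemented; the helper name is invented, and the real parse_volname in PVE::Storage::Plugin covers many more cases and return fields):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified sketch of parse_volname's regex dispatch: the volname prefix
# selects the content type, the rest is the file name.
sub parse_volname_sketch {
    my ($volname) = @_;

    if ($volname =~ m!^iso/([^/]+\.(?:iso|img))$!) {
	return ('iso', $1);
    } elsif ($volname =~ m!^snippets/([^/]+)$!) {
	return ('snippets', $1);
    }
    die "unable to parse directory volume name '$volname'\n";
}

my @r = parse_volname_sketch('snippets/hookscript.pl');
print "@r\n"; # snippets hookscript.pl
```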

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm  |   4 +
 test/parse_volname_test.pm | 253 +
 test/run_plugin_tests.pl   |   2 +-
 3 files changed, 258 insertions(+), 1 deletion(-)
 create mode 100644 test/parse_volname_test.pm

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index d2dfad6..71a83f7 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -457,6 +457,10 @@ my $vtype_subdirs = {
 snippets => 'snippets',
 };
 
+sub get_vtype_subdirs {
+return $vtype_subdirs;
+}
+
 sub get_subdir {
 my ($class, $scfg, $vtype) = @_;
 
diff --git a/test/parse_volname_test.pm b/test/parse_volname_test.pm
new file mode 100644
index 000..87c758c
--- /dev/null
+++ b/test/parse_volname_test.pm
@@ -0,0 +1,253 @@
+package PVE::Storage::TestParseVolname;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my $vmid = 1234;
+
+# an array of test cases, each test is comprised of the following keys:
+# description => to identify a single test
+# volname => the input for parse_volname
+# expected=> the array that parse_volname returns
+my $tests = [
+#
+# VM images
+#
+{
+   description => 'VM disk image, linked, qcow2, vm- as base-',
+   volname => "$vmid/vm-$vmid-disk-0.qcow2/$vmid/vm-$vmid-disk-0.qcow2",
+   expected=> [ 'images', "vm-$vmid-disk-0.qcow2", "$vmid", "vm-$vmid-disk-0.qcow2", "$vmid", undef, 'qcow2', ],
+},
+#
+# iso
+#
+{
+   description => 'ISO image, iso',
+   volname => 'iso/some-installation-disk.iso',
+   expected=> ['iso', 'some-installation-disk.iso'],
+},
+{
+   description => 'ISO image, img',
+   volname => 'iso/some-other-installation-disk.img',
+   expected=> ['iso', 'some-other-installation-disk.img'],
+},
+#
+# container templates
+#
+{
+   description => 'Container template tar.gz',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz',
+   expected=> ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.gz'],
+},
+{
+   description => 'Container template tar.xz',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz',
+   expected=> ['vztmpl', 'debian-10.0-standard_10.0-1_amd64.tar.xz'],
+},
+#
+# container rootdir
+#
+{
+   description => 'Container rootdir, sub directory',
+   volname => "rootdir/$vmid",
+   expected=> ['rootdir', "$vmid", "$vmid"],
+},
+{
+   description => 'Container rootdir, subvol',
+   volname => "$vmid/subvol-$vmid-disk-0.subvol",
+   expected=> [ 'images', "subvol-$vmid-disk-0.subvol", "$vmid", undef, undef, undef, 'subvol' ],
+},
+{
+   description => 'Backup archive, no virtualization type',
+   volname => "backup/vzdump-none-$vmid-2020_03_30-21_39_30.tar",
+   expected=> ['backup', "vzdump-none-$vmid-2020_03_30-21_39_30.tar"],
+},
+#
+# Snippets
+#
+{
+   description => 'Snippets, yaml',
+   volname => 'snippets/userconfig.yaml',
+   expected=> ['snippets', 'userconfig.yaml'],
+},
+{
+   description => 'Snippets, perl',
+   volname => 'snippets/hookscript.pl',
+   expected=> ['snippets', 'hookscript.pl'],
+},
+#
+# failed matches
+#
+{
+   description => "Failed match: VM disk image, base, raw",
+   volname => "/base-$vmid-disk-0.raw",
+   expected=> "unable to parse directory volume name '/base-$vmid-disk-0.raw'\n",
+},
+{
+   description => 'Failed match: ISO image, dvd',
+   volname => 'iso/yet-again-a-installation-disk.dvd',
+   expected=> "unable to parse directory volume name 'iso/yet-again-a-installation-disk.dvd'\n",
+},
+{
+   description => 'Failed match: Container template, zip.gz',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz',
+   expected=> "unable to parse directory volume name 'vztmpl/debian-10.0-standard_10.0-1_amd64.zip.gz'\n",
+},
+{
+   description => 'Failed match: Container template, tar.bz2',
+   volname => 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2',
+   expected=> "unable to parse directory volume name 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.bz2'\n",
+},
+{
+   description => 'Failed match: Container rootdir, subvol',
+   volname 

[pve-devel] [PATCH storage v4 01/12] storage: test: split archive format/compressor

2020-04-22 Thread Alwin Antreich
detection into separate functions so they are reusable and easier to
modify. This patch also adds the test for archive_info.
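The new call pattern can be sketched with a standalone re-implementation of the filename parsing. The regex and the tgz special case are taken from the hunk below; the helper name `archive_info_sketch` and the example filename are made up for illustration, and the real archive_info() additionally attaches the decompressor command via decompressor_info():

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Standalone sketch of archive_info()'s filename parsing.
sub archive_info_sketch {
    my ($archive) = @_;

    if ($archive =~ /vzdump-(lxc|openvz|qemu)-\d+-\d{4}_\d{2}_\d{2}-\d{2}_\d{2}_\d{2}\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/) {
	my ($type, $format, $comp) = ($1, $2, $3);
	($format, $comp) = ('tar', 'gz') if $format eq 'tgz'; # tgz == tar + gz
	return { type => $type, format => $format, compression => $comp };
    }
    die "ERROR: couldn't determine format and compression type\n";
}

my $info = archive_info_sketch('vzdump-qemu-100-2020_03_30-21_11_40.vma.lzo');
print "$info->{type} $info->{format} $info->{compression}\n"; # qemu vma lzo
```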

Signed-off-by: Alwin Antreich 
---
 test/Makefile |   5 +-
 PVE/Storage.pm|  79 +---
 test/archive_info_test.pm | 125 ++
 test/run_plugin_tests.pl  |  12 
 4 files changed, 199 insertions(+), 22 deletions(-)
 create mode 100644 test/archive_info_test.pm
 create mode 100755 test/run_plugin_tests.pl

diff --git a/test/Makefile b/test/Makefile
index 833a597..c54b10f 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
 all: test
 
-test: test_zfspoolplugin test_disklist test_bwlimit
+test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
 
 test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
@@ -10,3 +10,6 @@ test_disklist: run_disk_tests.pl
 
 test_bwlimit: run_bwlimit_tests.pl
./run_bwlimit_tests.pl
+
+test_plugin: run_plugin_tests.pl
+   ./run_plugin_tests.pl
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0848176..bdd6ebc 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1351,6 +1351,53 @@ sub foreach_volid {
 }
 }
 
+sub decompressor_info {
+my ($format, $comp) = @_;
+
+if ($format eq 'tgz' && !defined($comp)) {
+   ($format, $comp) = ('tar', 'gz');
+}
+
+my $decompressor = {
+   tar => {
+   gz => ['tar', '-z'],
+   lzo => ['tar', '--lzop'],
+   },
+   vma => {
+   gz => ['zcat'],
+   lzo => ['lzop', '-d', '-c'],
+   },
+};
+
+die "ERROR: archive format not defined\n"
+   if !defined($decompressor->{$format});
+
+my $decomp = $decompressor->{$format}->{$comp} if $comp;
+
+my $info = {
+   format => $format,
+   compression => $comp,
+   decompressor => $decomp,
+};
+
+return $info;
+}
+
+sub archive_info {
+my ($archive) = shift;
+my $info;
+
+my $volid = basename($archive);
+if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/) {
+   $info = decompressor_info($2, $3);
+   $info->{type} = $1;
+} else {
+   die "ERROR: couldn't determine format and compression type\n";
+}
+
+return $info;
+}
+
 sub extract_vzdump_config_tar {
 my ($archive, $conf_re) = @_;
 
@@ -1396,16 +1443,12 @@ sub extract_vzdump_config_vma {
 };
 
 
+my $info = archive_info($archive);
+$comp //= $info->{compression};
+my $decompressor = $info->{decompressor};
+
 if ($comp) {
-   my $uncomp;
-   if ($comp eq 'gz') {
-   $uncomp = ["zcat", $archive];
-   } elsif ($comp eq 'lzo') {
-   $uncomp = ["lzop", "-d", "-c", $archive];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
-   $cmd = [$uncomp, ["vma", "config", "-"]];
+   $cmd = [ [@$decompressor, $archive], ["vma", "config", "-"] ];
 
# in some cases, lzop/zcat exits with 1 when its stdout pipe is
# closed early by vma, detect this and ignore the exit code later
@@ -1455,20 +1498,14 @@ sub extract_vzdump_config {
 }
 
 my $archive = abs_filesystem_path($cfg, $volid);
+my $info = archive_info($archive);
+my $format = $info->{format};
+my $comp = $info->{compression};
+my $type = $info->{type};
 
-if ($volid =~ /vzdump-(lxc|openvz)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|(tar(\.(gz|lzo))?))$/) {
+if ($type eq 'lxc' || $type eq 'openvz') {
	return extract_vzdump_config_tar($archive, qr!^(\./etc/vzdump/(pct|vps)\.conf)$!);
-} elsif ($volid =~ /vzdump-qemu-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/) {
-   my $format;
-   my $comp;
-   if ($7 eq 'tgz') {
-   $format = 'tar';
-   $comp = 'gz';
-   } else {
-   $format = $9;
-   $comp = $11 if defined($11);
-   }
-
+} elsif ($type eq 'qemu') {
	if ($format eq 'tar') {
	    return extract_vzdump_config_tar($archive, qr!\(\./qemu-server\.conf\)!);
} else {
diff --git a/test/archive_info_test.pm b/test/archive_info_test.pm
new file mode 100644
index 000..c9bb1b7
--- /dev/null
+++ b/test/archive_info_test.pm
@@ -0,0 +1,125 @@
+package PVE::Storage::TestArchiveInfo;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my $vmid = 16110;
+
+# an array of test cases, each test is comprised of the following keys:
+# description => to identify a single test
+# archive => the input filename for archive_info
+# expected=> the hash that archive_info returns
+#
+# most of them are created

[pve-devel] [PATCH storage v4 07/12] Fix: path_to_volume_id returned wrong content

2020-04-22 Thread Alwin Antreich
type for backup files. The patch includes the corresponding changes to the
test as well.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm | 2 +-
 test/path_to_volume_id_test.pm | 8 
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bdd6ebc..1ef5ed2 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -536,7 +536,7 @@ sub path_to_volume_id {
return ('rootdir', "$sid:rootdir/$vmid");
} elsif ($path =~ m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
my $name = $1;
-   return ('iso', "$sid:backup/$name");
+   return ('backup', "$sid:backup/$name");
}
 }
 
diff --git a/test/path_to_volume_id_test.pm b/test/path_to_volume_id_test.pm
index 744c3ee..7d69869 100644
--- a/test/path_to_volume_id_test.pm
+++ b/test/path_to_volume_id_test.pm
@@ -72,7 +72,7 @@ my @tests = (
description => 'Backup, vma.gz',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz',
],
 },
@@ -80,7 +80,7 @@ my @tests = (
description => 'Backup, vma.lzo',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo',
],
 },
@@ -88,7 +88,7 @@ my @tests = (
description => 'Backup, vma',
volname => "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma',
],
 },
@@ -96,7 +96,7 @@ my @tests = (
description => 'Backup, tar.lzo',
volname => "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",
expected=> [
-   'iso',
+   'backup',
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo',
],
 },
-- 
2.20.1




[pve-devel] [PATCH storage v4 09/12] backup: compact regex for backup file filter

2020-04-22 Thread Alwin Antreich
the more compact form of the regex should allow easier addition of new
file extensions.
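The equivalence of the old alternation and the new compact form can be checked with a small standalone script (filenames are invented; `$compressor_re` mirrors the COMPRESSOR_RE constant as it stands before zstd is added):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Old explicit alternation vs. the new compact form from this patch; both
# should accept exactly the same suffixes, but only COMPRESSOR_RE needs to
# change when a new compression extension is added.
my $compressor_re = 'gz|lzo';
my $old = qr/^[^\/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)$/;
my $new = qr/^[^\/]+\.(?:tgz|(?:(?:tar|vma)(?:\.(?:$compressor_re))?))$/;

for my $fn (qw(
    vzdump-qemu-100-x.vma vzdump-qemu-100-x.vma.gz vzdump-qemu-100-x.vma.lzo
    vzdump-lxc-101-x.tar vzdump-lxc-101-x.tar.gz vzdump-lxc-101-x.tgz
    vzdump-lxc-101-x.tar.bz2
)) {
    my ($o, $n) = (($fn =~ $old) ? 1 : 0, ($fn =~ $new) ? 1 : 0);
    print "$fn old=$o new=$n\n";
    die "mismatch for $fn\n" if $o != $n;
}
```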

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 4 ++--
 PVE/Storage/Plugin.pm | 6 --
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 5df074d..0b2745e 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -535,7 +535,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
+   } elsif ($path =~ m!^$backupdir/([^/]+\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!) {
my $name = $1;
return ('backup', "$sid:backup/$name");
} elsif ($path =~ m!^$snippetsdir/([^/]+)$!) {
@@ -1392,7 +1392,7 @@ sub archive_info {
 my $info;
 
 my $volid = basename($archive);
-if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/) {
+if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?$/) {
$info = decompressor_info($2, $3);
$info->{type} = $1;
 } else {
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 9dde46e..368d805 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -18,6 +18,8 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
+use constant COMPRESSOR_RE => 'gz|lzo';
+
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
 -p --sparse --numeric-owner --acls
@@ -435,7 +437,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ m!^backup/([^/]+(\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)))$!) {
+} elsif ($volname =~ m!^backup/([^/]+\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\COMPRESSOR_RE}))?)))$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -939,7 +941,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
+   next if $fn !~ m!/([^/]+\.(tgz|(?:(?:tar|vma)(?:\.(${\COMPRESSOR_RE}))?)))$!;
 
my $format = $2;
$fn = $1;
-- 
2.20.1




[pve-devel] [PATCH manager v4] Fix #2124: Add support for zstd

2020-04-22 Thread Alwin Antreich
This patch adds zstd to the compression selection for backups in the GUI
and adds .zst to the backup file filter. It also adds zstd as a package
install dependency.
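The thread-count handling added to compressor_info() can be sketched standalone (the helper name is invented; the real code reads the core count from /proc/cpuinfo via PVE::ProcFSTools instead of taking it as a parameter):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: the 'zstd' option selects the thread count; 0 means "half of
# the available cores", default is a single thread.
sub zstd_command_sketch {
    my ($zstd_opt, $cpus) = @_;
    my $threads = $zstd_opt // 1;
    $threads = int(($cpus + 1) / 2) if $threads == 0;
    return "zstd --threads=${threads}";
}

print zstd_command_sketch(0, 8), "\n";     # zstd --threads=4
print zstd_command_sketch(undef, 8), "\n"; # zstd --threads=1
```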

Signed-off-by: Alwin Antreich 
---
 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f3274196..80f4734c 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -609,6 +609,13 @@ sub compressor_info {
} else {
return ('gzip --rsyncable', 'gz');
}
+} elsif ($opt_compress eq 'zstd') {
+   my $zstd_threads = $opts->{zstd} // 1;
+   if ($zstd_threads == 0) {
+   my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
+   $zstd_threads = int(($cpuinfo->{cpus} + 1)/2);
+   }
+   return ("zstd --threads=${zstd_threads}", 'zst');
 } else {
die "internal error - unknown compression option '$opt_compress'";
 }
@@ -620,7 +627,7 @@ sub get_backup_file_list {
 my $bklist = [];
 foreach my $fn (<$dir/${bkname}-*>) {
next if $exclude_fn && $fn eq $exclude_fn;
-   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
+   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!) {
$fn = "$dir/$1"; # untaint
my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
push @$bklist, [$fn, $t];
@@ -928,7 +935,7 @@ sub exec_backup_task {
debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
unlink $d->[0];
my $logfn = $d->[0];
-   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
+   $logfn =~ s/\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?))$/\.log/;
unlink $logfn;
}
}
diff --git a/debian/control b/debian/control
index edb2833d..318b4f0e 100644
--- a/debian/control
+++ b/debian/control
@@ -60,6 +60,7 @@ Depends: apt-transport-https | apt (>= 1.5~),
  logrotate,
  lsb-base,
  lzop,
+ zstd,
  novnc-pve,
  pciutils,
  perl (>= 5.10.0-19),
diff --git a/www/manager6/form/CompressionSelector.js b/www/manager6/form/CompressionSelector.js
index 8938fc0e..842b7710 100644
--- a/www/manager6/form/CompressionSelector.js
+++ b/www/manager6/form/CompressionSelector.js
@@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
 comboItems: [
 ['0', Proxmox.Utils.noneText],
 ['lzo', 'LZO (' + gettext('fast') + ')'],
-['gzip', 'GZIP (' + gettext('good') + ')']
+['gzip', 'GZIP (' + gettext('good') + ')'],
+['zstd', 'ZSTD (' + gettext('better') + ')'],
 ]
 });
-- 
2.20.1




[pve-devel] [PATCH storage v4 02/12] storage: replace built-in stat with File::stat

2020-04-22 Thread Alwin Antreich
to minimize variable declarations, and to allow mocking this method in
tests instead of the Perl built-in stat.
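A minimal standalone example of the File::stat interface this patch switches to (core Perl module; named accessors replace the positional 13-element list returned by the built-in stat):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl ':mode';
use File::stat; # overrides the built-in stat in this scope

# stat() now returns an object, so call sites only name the fields they
# actually use instead of unpacking the whole list.
my $st = stat($0) or die "stat failed: $!\n";
printf "size=%d ctime=%d is_dir=%d\n",
    $st->size, $st->ctime, (S_ISDIR($st->mode) ? 1 : 0);
```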

Signed-off-by: Alwin Antreich 
---
 PVE/Diskmanage.pm |  9 +
 PVE/Storage/Plugin.pm | 34 ++
 2 files changed, 15 insertions(+), 28 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 13e7cd8..cac944d 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -6,6 +6,7 @@ use PVE::ProcFSTools;
 use Data::Dumper;
 use Cwd qw(abs_path);
 use Fcntl ':mode';
+use File::stat;
 use JSON;
 
use PVE::Tools qw(extract_param run_command file_get_contents file_read_firstline dir_glob_regex dir_glob_foreach trim);
@@ -673,11 +674,11 @@ sub get_disks {
 sub get_partnum {
 my ($part_path) = @_;
 
-my ($mode, $rdev) = (stat($part_path))[2,6];
+my $st = stat($part_path);
 
-next if !$mode || !S_ISBLK($mode) || !$rdev;
-my $major = PVE::Tools::dev_t_major($rdev);
-my $minor = PVE::Tools::dev_t_minor($rdev);
+next if !$st->mode || !S_ISBLK($st->mode) || !$st->rdev;
+my $major = PVE::Tools::dev_t_major($st->rdev);
+my $minor = PVE::Tools::dev_t_minor($st->rdev);
 my $partnum_path = "/sys/dev/block/$major:$minor/";
 
 my $partnum;
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 4489a77..d2dfad6 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -7,6 +7,7 @@ use Fcntl ':mode';
 use File::chdir;
 use File::Path;
 use File::Basename;
+use File::stat;
 use Time::Local qw(timelocal);
 
 use PVE::Tools qw(run_command);
@@ -718,12 +719,10 @@ sub free_image {
 sub file_size_info {
 my ($filename, $timeout) = @_;
 
-my @fs = stat($filename);
-my $mode = $fs[2];
-my $ctime = $fs[10];
+my $st = stat($filename);
 
-if (S_ISDIR($mode)) {
-   return wantarray ? (0, 'subvol', 0, undef, $ctime) : 1;
+if (S_ISDIR($st->mode)) {
+   return wantarray ? (0, 'subvol', 0, undef, $st->ctime) : 1;
 }
 
 my $json = '';
@@ -741,7 +740,7 @@ sub file_size_info {
 
 my ($size, $format, $used, $parent) = $info->@{qw(virtual-size format 
actual-size backing-filename)};
 
-return wantarray ? ($size, $format, $used, $parent, $ctime) : $size;
+return wantarray ? ($size, $format, $used, $parent, $st->ctime) : $size;
 }
 
 sub volume_size_info {
@@ -918,22 +917,9 @@ my $get_subdir_files = sub {
 
 foreach my $fn (<$path/*>) {
 
-   my ($dev,
-   $ino,
-   $mode,
-   $nlink,
-   $uid,
-   $gid,
-   $rdev,
-   $size,
-   $atime,
-   $mtime,
-   $ctime,
-   $blksize,
-   $blocks
-   ) = stat($fn);
-
-   next if S_ISDIR($mode);
+   my $st = stat($fn);
+
+   next if S_ISDIR($st->mode);
 
my $info;
 
@@ -972,8 +958,8 @@ my $get_subdir_files = sub {
};
}
 
-   $info->{size} = $size;
-   $info->{ctime} //= $ctime;
+   $info->{size} = $st->size;
+   $info->{ctime} //= $st->ctime;
 
push @$res, $info;
 }
-- 
2.20.1




[pve-devel] [PATCH qemu-server v4 1/2] restore: replace archive format/compression

2020-04-22 Thread Alwin Antreich
regex to reduce the code duplication, as archive_info and
decompressor_info provide the same information as well.
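How the restore path now composes the decompression command can be sketched standalone (the command table is copied from decompressor_info() in the storage series; the archive path is invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Command table as used by decompressor_info(): format -> compression ->
# argv prefix of the decompressing command.
my $decompressor = {
    vma => {
	gz  => ['zcat'],
	lzo => ['lzop', '-d', '-c'],
    },
};

my ($comp, $readfrom) = ('lzo', '/tmp/vzdump-qemu-100.vma.lzo');
my $cmd = [ @{ $decompressor->{vma}->{$comp} } ]; # copy, don't mutate the table
push @$cmd, $readfrom;

print join(' ', @$cmd), "\n"; # lzop -d -c /tmp/vzdump-qemu-100.vma.lzo
```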

Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 36 ++--
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 37c7320..265d4f8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5627,28 +5627,9 @@ sub tar_restore_cleanup {
 sub restore_file_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
-my $format = $opts->{format};
-my $comp;
-
-if ($archive =~ m/\.tgz$/ || $archive =~ m/\.tar\.gz$/) {
-   $format = 'tar' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.tar$/) {
-   $format = 'tar' if !$format;
-} elsif ($archive =~ m/.tar.lzo$/) {
-   $format = 'tar' if !$format;
-   $comp = 'lzop';
-} elsif ($archive =~ m/\.vma$/) {
-   $format = 'vma' if !$format;
-} elsif ($archive =~ m/\.vma\.gz$/) {
-   $format = 'vma' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.vma\.lzo$/) {
-   $format = 'vma' if !$format;
-   $comp = 'lzop';
-} else {
-   $format = 'vma' if !$format; # default
-}
+my $info = PVE::Storage::archive_info($archive);
+my $format = $opts->{format} // $info->{format};
+my $comp = $info->{compression};
 
 # try to detect archive format
 if ($format eq 'tar') {
@@ -6235,14 +6216,9 @@ sub restore_vma_archive {
 }
 
 if ($comp) {
-   my $cmd;
-   if ($comp eq 'gzip') {
-   $cmd = ['zcat', $readfrom];
-   } elsif ($comp eq 'lzop') {
-   $cmd = ['lzop', '-d', '-c', $readfrom];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
+   my $info = PVE::Storage::decompressor_info('vma', $comp);
+   my $cmd = $info->{decompressor};
+   push @$cmd, $readfrom;
$add_pipe->($cmd);
 }
 
-- 
2.20.1




[pve-devel] [PATCH guest-common v4] Fix: #2124 add zstd support

2020-04-22 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..909e3af 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -88,7 +88,7 @@ my $confdesc = {
type => 'string',
description => "Compress dump file.",
optional => 1,
-   enum => ['0', '1', 'gzip', 'lzo'],
+   enum => ['0', '1', 'gzip', 'lzo', 'zstd'],
default => '0',
 },
 pigz=> {
@@ -98,6 +98,13 @@ my $confdesc = {
optional => 1,
default => 0,
 },
+zstd => {
+   type => "integer",
+   description => "Zstd threads. N=0 uses half of the available cores,".
+   " N>0 uses N as thread count.",
+   optional => 1,
+   default => 1,
+},
 quiet => {
type => 'boolean',
description => "Be quiet.",
-- 
2.20.1




Re: [pve-devel] [PATCH v3 0/13] Fix: #2124 zstd

2020-04-09 Thread Alwin Antreich
On Thu, Apr 09, 2020 at 02:29:46PM +0200, Fabian Ebner wrote:
> Hi,
> series looks mostly good to me. Some comments on individual patches.
> Backup/restore seems to work, also still with the other compression formats.
> The tests for path_to_volume_id actually uncover a bug and a missing feature
> in the implementation in Storage.pm, which should be fixed.
I'll look into it.

> 
> For the patches that won't change (much):
> Reviewed-By: Fabian Ebner 
> Tested-By: Fabian Ebner 
Thanks for review and testing.

> 
> For a potential follow-up, I feel like the parsing of the backup filename
> might need its own method, rather than have all those pattern matchings in
> different places.
That was my intent with archive_info, it could be extended later on. :)



Re: [pve-devel] [PATCH storage v3 5/7] test: path_to_volume_id

2020-04-09 Thread Alwin Antreich
On Thu, Apr 09, 2020 at 02:20:36PM +0200, Fabian Ebner wrote:
> Two comments inline.
> 
> On 08.04.20 12:26, Alwin Antreich wrote:
> > Test to reduce the potential for accidental breakage on regex changes.
> > 
> > Signed-off-by: Alwin Antreich 
> > ---
> >   test/run_parser_tests.pl   |   2 +-
> >   test/test_path_to_volume_id.pm | 102 +
> >   2 files changed, 103 insertions(+), 1 deletion(-)
> >   create mode 100644 test/test_path_to_volume_id.pm
> > 
> > diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
> > index 4b1c003..635b59d 100755
> > --- a/test/run_parser_tests.pl
> > +++ b/test/run_parser_tests.pl
> > @@ -10,7 +10,7 @@ my $res = $harness->runtests(
> >   "test_archive_info.pm",
> >   "test_parse_volname.pm",
> >   "test_list_volumes.pm",
> > +"test_path_to_volume_id.pm",
> >   );
> >   exit -1 if !$res || $res->{failed} || $res->{parse_errors};
> > -
> > diff --git a/test/test_path_to_volume_id.pm b/test/test_path_to_volume_id.pm
> > new file mode 100644
> > index 000..e693974
> > --- /dev/null
> > +++ b/test/test_path_to_volume_id.pm
> > @@ -0,0 +1,102 @@
> > +package PVE::Storage::TestPathToVolumeId;
> > +
> > +use strict;
> > +use warnings;
> > +
> > +use lib qw(..);
> > +
> > +use PVE::Storage;
> > +
> > +use Test::More;
> > +
> > +use Cwd;
> > +use File::Basename;
> > +use File::Path qw(make_path remove_tree);
> > +
> > +my $storage_dir = getcwd() . '/test_path_to_volume_id';
> > +my $scfg = {
> > +'digest' => 'd29306346b8b25b90a4a96165f1e8f52d1af1eda',
> > +'ids' => {
> > +   'local' => {
> > +   'shared' => 0,
> > +   'path' => "$storage_dir",
> > +   'type' => 'dir',
> > +   'content' => {
> > +   'snippets' => 1,
> > +   'rootdir' => 1,
> > +   'images' => 1,
> > +   'iso' => 1,
> > +   'backup' => 1,
> > +   'vztmpl' => 1
> > +   },
> > +   'maxfiles' => 0
> > +   }
> > +},
> > +'order' => {
> > +   'local' => 1
> > +}
> > +};
> > +
> > +my @tests = (
> > +   [ "$storage_dir/images/16110/vm-16110-disk-0.qcow2", ['images', 'local:16110/vm-16110-disk-0.qcow2'], 'Image, qcow2' ],
> > +   [ "$storage_dir/images/16112/vm-16112-disk-0.raw",   ['images', 'local:16112/vm-16112-disk-0.raw'],   'Image, raw' ],
> > +   [ "$storage_dir/images/9004/base-9004-disk-0.qcow2", ['images', 'local:9004/base-9004-disk-0.qcow2'], 'Image template, qcow2' ],
> > +
> > +   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",  ['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz'],  'Backup, vma.gz' ],
> > +   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo", ['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo'], 'Backup, vma.lzo' ],
> > +   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",     ['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma'],     'Backup, vma' ],
> > +   [ "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",  ['iso', 'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo'],  'Backup, tar.lzo' ],
> > +
> 
> Here it shouldn't be 'iso', but 'backup' (compare with parse_volname in
> Storage/Plugin.pm). This is actually a bug in the implementation in
> Storage.pm.
Thanks for confirming my suspicion. I'll look into it.

> 
> > +   [ "$storage_dir/template/iso/yet-again-a-installation-disk.iso",          ['iso', 'local:iso/yet-again-a-installation-disk.iso'],              'ISO file' ],
> > +   [ "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz", ['vztmpl', 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz'], 'CT template, tar.gz' ],
> > +
> > +   [ "$storage_dir/private/1234/",                          ['rootdir', 'local:rootdir/1234'],                  'Rootdir' ], # fileparse needs / at the end
> > +   [ "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", ['images', 'local:1234/subvol-1234-disk-0.subvol'], 'Rootdir, folder subvol' ], # fileparse needs / at the end

Re: [pve-devel] [PATCH storage 1/4] base plugin: Increase test coverage

2020-04-09 Thread Alwin Antreich
On Thu, Apr 02, 2020 at 01:34:11PM +0200, Dominic Jäger wrote:
> Signed-off-by: Dominic Jäger 
> ---
> This did not exist separately in RFC
> 
>  test/Makefile|   5 +-
>  test/run_plugin_tests.pl | 184 +++
>  2 files changed, 188 insertions(+), 1 deletion(-)
>  create mode 100755 test/run_plugin_tests.pl
> 
> diff --git a/test/Makefile b/test/Makefile
> index 833a597..c54b10f 100644
> --- a/test/Makefile
> +++ b/test/Makefile
> @@ -1,6 +1,6 @@
>  all: test
>  
> -test: test_zfspoolplugin test_disklist test_bwlimit
> +test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
>  
>  test_zfspoolplugin: run_test_zfspoolplugin.pl
>   ./run_test_zfspoolplugin.pl
> @@ -10,3 +10,6 @@ test_disklist: run_disk_tests.pl
>  
>  test_bwlimit: run_bwlimit_tests.pl
>   ./run_bwlimit_tests.pl
> +
> +test_plugin: run_plugin_tests.pl
> + ./run_plugin_tests.pl
> diff --git a/test/run_plugin_tests.pl b/test/run_plugin_tests.pl
> new file mode 100755
> index 000..cd93430
> --- /dev/null
> +++ b/test/run_plugin_tests.pl
> @@ -0,0 +1,184 @@
> +#!/usr/bin/perl
> +use strict;
> +use warnings;
> +
> +use lib ('.', '..');
> +use Test::More tests => 32;
> +use Test::MockModule qw(new);
> +use File::Temp qw(tempdir);
> +use File::Path qw(make_path);
> +use Data::Dumper qw(Dumper);
> +use Storable qw(dclone);
> +use PVE::Storage;
> +
> +my $plugin = 'PVE::Storage::Plugin';
> +my $basename = 'test';
> +
> +my $iso_type = 'iso';
> +my $iso_suffix = '.iso';
> +my $iso_notdir = "$basename$iso_suffix";
> +my $iso_volname = "$iso_type/$iso_notdir";
> +
> +my $vztmpl_type = 'vztmpl';
> +my $vztmpl_suffix = '.tar.gz';
> +my $vztmpl_notdir = "$basename$vztmpl_suffix";
> +my $vztmpl_volname = "$vztmpl_type/$vztmpl_notdir";
> +
> +my $iso_with_dots = "$iso_type/../$iso_notdir";
> +my $vztmpl_with_dots = "$vztmpl_type/../$vztmpl_notdir";
> +
> +my $image_type = 'images';
> +my $vmid = '100';
> +my $image_basename = 'vm-100-disk-0';
> +my $raw_image_format = 'raw'; # Tests for parse_volname don't need the dot
> +my $raw_image_notdir = "$image_basename.$raw_image_format";
> +my $raw_image_volname = "$vmid/$raw_image_notdir";
> +my $qcow_image_format = 'qcow2';
> +my $qcow_image_notdir = "$image_basename.$qcow_image_format";
> +my $qcow_image_volname = "$vmid/$qcow_image_notdir";
> +my $vmdk_image_format = 'vmdk';
> +my $vmdk_image_notdir = "$image_basename.$vmdk_image_format";
> +my $vmdk_image_volname = "$vmid/$vmdk_image_notdir";
> +
> +my $type_index = 0;
> +my $notdir_index = 1;
> +my $format_index = 6;
> +
> +is (($plugin->parse_volname($iso_volname))[$type_index],
> +$iso_type, 'parse_volname: type for iso');
> +is (($plugin->parse_volname($iso_volname))[$notdir_index],
> +$iso_notdir, 'parse_volname: notdir for iso');
> +
> +is (($plugin->parse_volname($vztmpl_volname))[$type_index],
> +$vztmpl_type, 'parse_volname: type for vztmpl');
> +is (($plugin->parse_volname($vztmpl_volname))[$notdir_index],
> +$vztmpl_notdir, 'parse_volname: notdir for vztmpl');
> +
> +is (($plugin->parse_volname($raw_image_volname))[$type_index],
> +$image_type, 'parse_volname: type for raw image');
> +is (($plugin->parse_volname($raw_image_volname))[$notdir_index],
> +$raw_image_notdir, 'parse_volname: notdir for raw image');
> +is (($plugin->parse_volname($raw_image_volname))[$format_index],
> +$raw_image_format, 'parse_volname: format for raw image');
> +
> +is (($plugin->parse_volname($qcow_image_volname))[$type_index],
> +$image_type, 'parse_volname: type for qcow image');
> +is (($plugin->parse_volname($qcow_image_volname))[$notdir_index],
> +$qcow_image_notdir, 'parse_volname: notdir for qcow image');
> +is (($plugin->parse_volname($qcow_image_volname))[$format_index],
> +$qcow_image_format, 'parse_volname: format for qcow image');
> +
> +is (($plugin->parse_volname($vmdk_image_volname))[$type_index],
> +$image_type, 'parse_volname: type for vmdk image');
> +is (($plugin->parse_volname($vmdk_image_volname))[$notdir_index],
> +$vmdk_image_notdir, 'parse_volname: notdir for vmdk image');
> +is (($plugin->parse_volname($vmdk_image_volname))[$format_index],
> +$vmdk_image_format, 'parse_volname: format for vmdk image');
> +
> +
> +my $scfg_with_path = { path => '/some/path' };
> +is ($plugin->get_subdir($scfg_with_path, 'iso'),
> +"$scfg_with_path->{path}/template/iso", 'get_subdir for iso' );
> +is ($plugin->get_subdir($scfg_with_path, 'vztmpl'),
> +"$scfg_with_path->{path}/template/cache", 'get_subdir for vztmpl');
> +is ($plugin->get_subdir($scfg_with_path, 'backup'),
> +"$scfg_with_path->{path}/dump", 'get_subdir for backup');
> +is ($plugin->get_subdir($scfg_with_path, 'images'),
> +"$scfg_with_path->{path}/images", 'get_subdir for images');
> +is ($plugin->get_subdir($scfg_with_path, 'rootdir'),
> +"$scfg_with_path->{path}/private", 'get_subdir for rootdir');
> +
> +is ($plugin->filesystem_path($scfg_with_path, 

[pve-devel] [PATCH storage v3 7/7] Fix: #2124 storage: add zstd support

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm | 4 +++-
 PVE/Storage/Plugin.pm  | 2 +-
 test/test_archive_info.pm  | 9 ++---
 test/test_list_volumes.pm  | 4 
 test/test_parse_volname.pm | 3 +++
 test/test_path_to_volume_id.pm | 2 ++
 6 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 46384ff..df477a7 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1295,10 +1295,12 @@ sub decompressor_info {
tar => {
gz => ['tar', '-z'],
lzo => ['tar', '--lzop'],
+   zst => ['tar', '--zstd'],
},
vma => {
gz => ['zcat'],
lzo => ['lzop', '-d', '-c'],
+   zst => ['zstd', '-q', '-d', '-c'],
},
 };
 
@@ -1389,7 +1391,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-	if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/) {
+	if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: 
$output\n";
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index ef6e6de..ec434a4 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -17,7 +17,7 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
-use constant COMPRESSOR_RE => 'gz|lzo';
+use constant COMPRESSOR_RE => 'gz|lzo|zst';
 
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
diff --git a/test/test_archive_info.pm b/test/test_archive_info.pm
index 464cc89..cd126f8 100644
--- a/test/test_archive_info.pm
+++ b/test/test_archive_info.pm
@@ -10,19 +10,22 @@ use Test::More;
 
 my @tests = (
 # backup archives
-    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma',     { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => undef, 'compression' => undef },                 'Backup archive, vma' ],
-    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => ['zcat'], 'compression' => 'gz' },               'Backup archive, vma, gz' ],
-    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => ['lzop', '-d', '-c'], 'compression' => 'lzo' },  'Backup archive, vma, lzo' ],
+    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma',     { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => undef, 'compression' => undef },                 'Backup archive, vma' ],
+    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => ['zcat'], 'compression' => 'gz' },               'Backup archive, vma, gz' ],
+    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => ['lzop', '-d', '-c'], 'compression' => 'lzo' },  'Backup archive, vma, lzo' ],
+    [ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.zst', { 'type' => 'qemu', 'format' => 'vma', 'decompressor' => ['zstd', '-q', '-d', '-c'], 'compression' => 'zst' }, 'Backup archive, vma, zst' ],
 
     [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar',     { 'type' => 'lxc', 'format' => 'tar', 'decompressor' => undef, 'compression' => undef },             'Backup archive, lxc' ],
     [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.gz',  { 'type' => 'lxc', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 'gz' },      'Backup archive, lxc, gz' ],
     [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tgz',     { 'type' => 'lxc', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 'gz' },      'Backup archive, lxc, tgz' ],
     [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo', { 'type' => 'lxc', 'format' => 'tar', 'decompressor' => ['tar', '--lzop'], 'compression' => 'lzo' }, 'Backup archive, lxc, lzo' ],
+    [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.zst', { 'type' => 'lxc', 'format' => 'tar', 'decompressor' => ['tar', '--zstd'], 'compression' => 'zst' }, 'Backup archive, lxc, zst' ],
 
     [ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar',    { 'type' => 'openvz', 'format' => 'tar', 'decompressor' => undef, 'compression' => undef },        'Backup archive, openvz' ],
     [ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar.gz', { 'type' => 'openvz', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 'gz' }, 'Backup archive, openvz, gz' ],
     [ 'backup/vzdump-openvz-1611

[pve-devel] [PATCH qemu-server v3 1/2] restore: replace archive

2020-04-08 Thread Alwin Antreich
format/compression regex to reduce code duplication, as archive_info
and decompressor_info provide the same information.

Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 36 ++--
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 510a995..e5bf41b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5501,28 +5501,9 @@ sub tar_restore_cleanup {
 sub restore_file_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
-my $format = $opts->{format};
-my $comp;
-
-if ($archive =~ m/\.tgz$/ || $archive =~ m/\.tar\.gz$/) {
-   $format = 'tar' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.tar$/) {
-   $format = 'tar' if !$format;
-} elsif ($archive =~ m/.tar.lzo$/) {
-   $format = 'tar' if !$format;
-   $comp = 'lzop';
-} elsif ($archive =~ m/\.vma$/) {
-   $format = 'vma' if !$format;
-} elsif ($archive =~ m/\.vma\.gz$/) {
-   $format = 'vma' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.vma\.lzo$/) {
-   $format = 'vma' if !$format;
-   $comp = 'lzop';
-} else {
-   $format = 'vma' if !$format; # default
-}
+my $info = PVE::Storage::archive_info($archive);
+my $format = $opts->{format} // $info->{format};
+my $comp = $info->{compression};
 
 # try to detect archive format
 if ($format eq 'tar') {
@@ -6109,14 +6090,9 @@ sub restore_vma_archive {
 }
 
 if ($comp) {
-   my $cmd;
-   if ($comp eq 'gzip') {
-   $cmd = ['zcat', $readfrom];
-   } elsif ($comp eq 'lzop') {
-   $cmd = ['lzop', '-d', '-c', $readfrom];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
+   my $info = PVE::Storage::decompressor_info('vma', $comp);
+   my $cmd = $info->{decompressor};
+   push @$cmd, $readfrom;
$add_pipe->($cmd);
 }
 
-- 
2.20.1




[pve-devel] [PATCH storage v3 6/7] backup: more compact regex for backup filter

2020-04-08 Thread Alwin Antreich
The more compact form of the regex should allow easier addition of new
file extensions.
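As a quick sanity check, the compact alternation can be exercised outside of Perl. This Python sketch transcribes the pattern shape; COMPRESSOR_RE here stands in for the Perl constant and all names are mine:

```python
import re

# Adding a new compressor only means extending this alternation,
# e.g. 'gz|lzo' -> 'gz|lzo|zst'.
COMPRESSOR_RE = 'gz|lzo'

# Compact backup-filename pattern: .tgz, or .tar/.vma with an
# optional known compression suffix.
backup_re = re.compile(
    r'^[^/]+\.(?:tgz|(?:tar|vma)(?:\.(?:%s))?)$' % COMPRESSOR_RE
)

for name in ('vzdump-qemu-100.vma.gz', 'vzdump-lxc-101.tar.lzo',
             'vzdump-lxc-101.tgz', 'vzdump-qemu-100.vma'):
    assert backup_re.match(name)

# unknown compressor suffix must not match
assert not backup_re.match('vzdump-lxc-101.tar.bz2')
```

The point of the rewrite is that the list of valid compressor endings lives in one place instead of being spelled out per format.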

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 4 ++--
 PVE/Storage/Plugin.pm | 6 --
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0bbd168..46384ff 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -519,7 +519,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-	} elsif ($path =~ m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
+	} elsif ($path =~ m!^$backupdir/([^/]+\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!) {
my $name = $1;
return ('iso', "$sid:backup/$name");
}
@@ -1321,7 +1321,7 @@ sub archive_info {
 my $info;
 
 my $volid = basename($archive);
-    if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/) {
+    if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?$/) {
$info = decompressor_info($2, $3);
$info->{type} = $1;
 } else {
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 0ab44ce..ef6e6de 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -17,6 +17,8 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
+use constant COMPRESSOR_RE => 'gz|lzo';
+
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
 -p --sparse --numeric-owner --acls
@@ -434,7 +436,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-    } elsif ($volname =~ m!^backup/([^/]+(\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)))$!) {
+    } elsif ($volname =~ m!^backup/([^/]+(?:\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\COMPRESSOR_RE}))?))))$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -949,7 +951,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-	next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
+	next if $fn !~ m!/([^/]+\.(tgz|(?:(?:tar|vma)(?:\.(${\COMPRESSOR_RE}))?)))$!;
 
my $format = $2;
$fn = $1;
-- 
2.20.1




[pve-devel] [PATCH storage v3 3/7] test: list_volumes

2020-04-08 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/run_parser_tests.pl  |   6 +-
 test/test_list_volumes.pm | 309 ++
 2 files changed, 314 insertions(+), 1 deletion(-)
 create mode 100644 test/test_list_volumes.pm

diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
index 79093aa..4b1c003 100755
--- a/test/run_parser_tests.pl
+++ b/test/run_parser_tests.pl
@@ -6,7 +6,11 @@ use warnings;
 use TAP::Harness;
 
 my $harness = TAP::Harness->new( { verbosity => -1 });
-my $res = $harness->runtests("test_archive_info.pm", "test_parse_volname.pm");
+my $res = $harness->runtests(
+"test_archive_info.pm",
+"test_parse_volname.pm",
+"test_list_volumes.pm",
+);
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
 
diff --git a/test/test_list_volumes.pm b/test/test_list_volumes.pm
new file mode 100644
index 000..4b3fdb7
--- /dev/null
+++ b/test/test_list_volumes.pm
@@ -0,0 +1,309 @@
+package PVE::Storage::TestListVolumes;
+
+BEGIN {
+use constant DEFAULT_SIZE => 131072; # 128 kiB
+use constant DEFAULT_USED => 262144; # 256 kiB
+use constant DEFAULT_CTIME => 1234567890;
+
+# override the built-in stat routine to fix output values
+# used in $get_subdir_files
+*CORE::GLOBAL::stat = sub {
+   my ($fn) = shift;
+   my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime,
+   $mtime, $ctime, $blksize, $blocks) = CORE::stat($fn);
+
+   # fixed: file creation time
+   $ctime = DEFAULT_CTIME;
+   $size = DEFAULT_SIZE;
+
+   return ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime,
+   $mtime, $ctime, $blksize, $blocks);
+};
+}
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use PVE::Cluster;
+use PVE::Tools qw(run_command);
+
+use Test::More;
+use Test::MockModule;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+
+# get_vmlist() return values
+my $vmlist = {
+'version' => 1,
+'ids' => {
+   '16110' => {
+   'version' => 4,
+   'node' => 'x42',
+   'type' => 'qemu'
+   },
+   '16112' => {
+   'type' => 'lxc',
+   'version' => 7,
+   'node' => 'x42'
+   },
+   '16114' => {
+   'type' => 'qemu',
+   'node' => 'x42',
+   'version' => 2
+   },
+   '16113' => {
+   'version' => 5,
+   'node' => 'x42',
+   'type' => 'qemu'
+   },
+   '16115' => {
+   'node' => 'x42',
+   'version' => 1,
+   'type' => 'qemu'
+   },
+   '9004' => {
+   'type' => 'qemu',
+   'version' => 6,
+   'node' => 'x42'
+   }
+}
+};
+
+my $storage_dir = getcwd() . '/test_list_volumes';
+my $scfg = {
+'type' => 'dir',
+'maxfiles' => 0,
+'path' => $storage_dir,
+'shared' => 0,
+'content' => {
+   'iso' => 1,
+   'rootdir' => 1,
+   'vztmpl' => 1,
+   'images' => 1,
+   'snippets' => 1,
+   'backup' => 1,
+},
+};
+my @tests = (
+{
+   vmid => '16110',
+   files => [
+   "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
+   "$storage_dir/images/16110/vm-16110-disk-1.qcow2",
+   "$storage_dir/images/16110/vm-16110-disk-2.qcow2",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   "$storage_dir/snippets/userconfig.yaml",
+   "$storage_dir/snippets/hookscript.pl",
+   ],
+   expected => [
+	{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 'qcow2',  'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-0.qcow2' },
+	{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 'qcow2',  'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-1.qcow2' },
+	{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 'qcow2',  'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-2.qcow2' },
+	{ 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 'vma.gz', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz' }

[pve-devel] [PATCH v3 0/13] Fix: #2124 zstd

2020-04-08 Thread Alwin Antreich
Zstandard (zstd) [0] is a data compression algorithm, added alongside
gzip and lzo for our backup/restore. It can utilize multiple CPU cores,
but by default it runs with one compression and one writer thread.
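The series maps (format, compression) pairs to decompressor command lines; below is a minimal Python sketch of that lookup, with the command arrays copied from the patch and the surrounding structure purely illustrative:

```python
# Decompressor command table, as added to PVE/Storage.pm by this series.
DECOMPRESSOR = {
    'tar': {'gz': ['tar', '-z'], 'lzo': ['tar', '--lzop'],
            'zst': ['tar', '--zstd']},
    'vma': {'gz': ['zcat'], 'lzo': ['lzop', '-d', '-c'],
            'zst': ['zstd', '-q', '-d', '-c']},
}

def decompressor_info(fmt, comp=None):
    if fmt == 'tgz' and comp is None:  # .tgz is shorthand for .tar.gz
        fmt, comp = 'tar', 'gz'
    if fmt not in DECOMPRESSOR:
        raise ValueError('archive format not defined')
    return {
        'format': fmt,
        'compression': comp,
        'decompressor': DECOMPRESSOR[fmt].get(comp) if comp else None,
    }
```

This is why adding zstd touches only the table and the filename regex: every caller gets the right command line from the same lookup.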


Here are some quick tests I made on my workstation. The files were placed
on a ram disk and filled with dd from /dev/urandom and /dev/zero.

__Compression__
file size: 1073741824 bytes
             = urandom =            = zero =
    time  size (bytes)       time  size (bytes)
   995ms  1073766414        328ms  98192        zstd -k
   732ms  1073766414        295ms  98192        zstd -k -T4
   906ms  1073791036        562ms  4894779      lzop -k
 31992ms  1073915558       5594ms  1042087      gzip -k
 30832ms  1074069541       5776ms  1171491      pigz -k -p 1
  7814ms  1074069541       1567ms  1171491      pigz -k -p 4

__Decompression__
file size: 1073741824 bytes
= urandom =   = zero =
   712ms  869ms  zstd -d
   685ms  872ms  zstd -k -d -T4
   841ms 2462ms  lzop -d
  5417ms 4754ms  gzip -k -d
  1248ms 3118ms  pigz -k -d -p 1
  1236ms 2379ms  pigz -k -d -p 4


And I used the same ramdisk to move a VM onto it and run a quick
backup/restore.

__vzdump backup__
INFO: transferred 34359 MB in 69 seconds (497 MB/s) zstd -T1
INFO: transferred 34359 MB in 37 seconds (928 MB/s) zstd -T4
INFO: transferred 34359 MB in 51 seconds (673 MB/s) lzo
INFO: transferred 34359 MB in 1083 seconds (31 MB/s) gzip
INFO: transferred 34359 MB in 241 seconds (142 MB/s) pigz -n 4

__qmrestore__
progress 100% (read 34359738368 bytes, duration 36 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) zstd -d -T4

progress 100% (read 34359738368 bytes, duration 38 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) lzo

progress 100% (read 34359738368 bytes, duration 175 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) pigz -n 4


v2 -> v3:
* split archive_info into decompressor_info and archive_info
* "compact" regex pattern is now a constant and used in
  multiple modules
* added tests for regex matching
* bug fix for ctime of backup files

v1 -> v2:
* factored out the decompressor info first, as Thomas suggested
* made the regex pattern of backup files more compact, easier to
  read (hopefully)
* less code changes for container restores

Thanks for any comment or suggestion in advance.

[0] https://facebook.github.io/zstd/

Alwin Antreich (13):
__pve-storage__
  storage: test: split archive format/compressor
  test: parse_volname
  test: list_volumes
  Fix: backup: ctime taken from stat not file name
  test: path_to_volume_id
  backup: more compact regex for backup file filter
  Fix: #2124 storage: add zstd support

 test/Makefile  |   5 +-
 PVE/Storage.pm |  85 ++---
 PVE/Storage/Plugin.pm  |   9 +-
 test/run_parser_tests.pl   |  17 ++
 test/test_archive_info.pm  |  57 ++
 test/test_list_volumes.pm  | 313 +
 test/test_parse_volname.pm |  98 +++
 test/test_path_to_volume_id.pm | 104 +++
 8 files changed, 661 insertions(+), 27 deletions(-)
 create mode 100755 test/run_parser_tests.pl
 create mode 100644 test/test_archive_info.pm
 create mode 100644 test/test_list_volumes.pm
 create mode 100644 test/test_parse_volname.pm
 create mode 100644 test/test_path_to_volume_id.pm

__pve-container__
  Fix: #2124 add zstd

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

__qemu-server__
  restore: replace archive format/compression regex
  Fix #2124: Add support for zstd

 PVE/QemuServer.pm | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)

__pve-manager__
  Fix #2124: Add support for zstd

 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)

__pve-guest-common__
  Fix: #2124 add zstd support

 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
-- 
2.20.1



[pve-devel] [PATCH guest-common v3] Fix: #2124 add zstd support

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..9a7c4f6 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -88,7 +88,7 @@ my $confdesc = {
type => 'string',
description => "Compress dump file.",
optional => 1,
-   enum => ['0', '1', 'gzip', 'lzo'],
+   enum => ['0', '1', 'gzip', 'lzo', 'zstd'],
default => '0',
 },
 pigz=> {
@@ -98,6 +98,13 @@ my $confdesc = {
optional => 1,
default => 0,
 },
+zstd => {
+   type => "integer",
+   description => "Use zstd with N>0.".
+   " N=0 uses half of cores, N>1 uses N as thread count.",
+   optional => 1,
+   default => 1,
+},
 quiet => {
type => 'boolean',
description => "Be quiet.",
-- 
2.20.1




[pve-devel] [PATCH storage v3 5/7] test: path_to_volume_id

2020-04-08 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/run_parser_tests.pl   |   2 +-
 test/test_path_to_volume_id.pm | 102 +
 2 files changed, 103 insertions(+), 1 deletion(-)
 create mode 100644 test/test_path_to_volume_id.pm

diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
index 4b1c003..635b59d 100755
--- a/test/run_parser_tests.pl
+++ b/test/run_parser_tests.pl
@@ -10,7 +10,7 @@ my $res = $harness->runtests(
 "test_archive_info.pm",
 "test_parse_volname.pm",
 "test_list_volumes.pm",
+"test_path_to_volume_id.pm",
 );
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
-
diff --git a/test/test_path_to_volume_id.pm b/test/test_path_to_volume_id.pm
new file mode 100644
index 000..e693974
--- /dev/null
+++ b/test/test_path_to_volume_id.pm
@@ -0,0 +1,102 @@
+package PVE::Storage::TestPathToVolumeId;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+
+use Test::More;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+
+my $storage_dir = getcwd() . '/test_path_to_volume_id';
+my $scfg = {
+'digest' => 'd29306346b8b25b90a4a96165f1e8f52d1af1eda',
+'ids' => {
+   'local' => {
+   'shared' => 0,
+   'path' => "$storage_dir",
+   'type' => 'dir',
+   'content' => {
+   'snippets' => 1,
+   'rootdir' => 1,
+   'images' => 1,
+   'iso' => 1,
+   'backup' => 1,
+   'vztmpl' => 1
+   },
+   'maxfiles' => 0
+   }
+},
+'order' => {
+   'local' => 1
+}
+};
+
+my @tests = (
+   [ "$storage_dir/images/16110/vm-16110-disk-0.qcow2", ['images', 'local:16110/vm-16110-disk-0.qcow2'], 'Image, qcow2' ],
+   [ "$storage_dir/images/16112/vm-16112-disk-0.raw",   ['images', 'local:16112/vm-16112-disk-0.raw'],   'Image, raw' ],
+   [ "$storage_dir/images/9004/base-9004-disk-0.qcow2", ['images', 'local:9004/base-9004-disk-0.qcow2'], 'Image template, qcow2' ],
+
+   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",  ['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz'],  'Backup, vma.gz' ],
+   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo", ['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo'], 'Backup, vma.lzo' ],
+   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",     ['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma'],     'Backup, vma' ],
+   [ "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",  ['iso', 'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo'],  'Backup, tar.lzo' ],
+
+   [ "$storage_dir/template/iso/yet-again-a-installation-disk.iso",          ['iso', 'local:iso/yet-again-a-installation-disk.iso'],              'ISO file' ],
+   [ "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz", ['vztmpl', 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz'], 'CT template, tar.gz' ],
+
+   [ "$storage_dir/private/1234/",                          ['rootdir', 'local:rootdir/1234'],                  'Rootdir' ], # fileparse needs / at the end
+   [ "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", ['images', 'local:1234/subvol-1234-disk-0.subvol'], 'Rootdir, folder subvol' ], # fileparse needs / at the end
+
+   # no matches
+   [ "$storage_dir/snippets/userconfig.yaml",                                 [''], 'Snippets, yaml' ],
+   [ "$storage_dir/snippets/hookscript.pl",                                   [''], 'Snippets, hookscript' ],
+   [ "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.xz",  [''], 'CT template, tar.xz' ],
+
+   # no matches, path or files with failures
+   [ "$storage_dir/images//base-4321-disk-0.raw",                             [''], 'Base template, string as vmid in folder name' ],
+   [ "$storage_dir/template/iso/yet-again-a-installation-disk.dvd",           [''], 'ISO file, wrong ending' ],
+   [ "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.zip.gz",  [''], 'CT template, wrong ending, zip.gz' ],
+   [ "$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.bz2", [''], 'CT template, wrong ending, tar bz2' ],
+   [ "$storage_dir/private/subvol-19254-disk-0/",                             [''], 'Rootdir as subvol, wrong path' ],
+   [ "$storage_dir/dump/vzdump-openvz-16112-2020_03_30-21_39_30.tar.bz2",     [''], 'Backup, wrong ending, openvz, tar.bz2' ],
+   

[pve-devel] [PATCH storage v3 1/7] storage: test: split archive format/compressor

2020-04-08 Thread Alwin Antreich
detection into separate functions so they are reusable and easier
to modify.

Signed-off-by: Alwin Antreich 
---
 test/Makefile |  5 ++-
 PVE/Storage.pm| 79 ---
 test/run_parser_tests.pl  | 12 ++
 test/test_archive_info.pm | 54 ++
 4 files changed, 128 insertions(+), 22 deletions(-)
 create mode 100755 test/run_parser_tests.pl
 create mode 100644 test/test_archive_info.pm

diff --git a/test/Makefile b/test/Makefile
index 833a597..838449f 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
 all: test
 
-test: test_zfspoolplugin test_disklist test_bwlimit
+test: test_zfspoolplugin test_disklist test_bwlimit test_parsers
 
 test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
@@ -10,3 +10,6 @@ test_disklist: run_disk_tests.pl
 
 test_bwlimit: run_bwlimit_tests.pl
./run_bwlimit_tests.pl
+
+test_parsers: run_parser_tests.pl
+   ./run_parser_tests.pl
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 60b8310..0bbd168 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1284,6 +1284,53 @@ sub foreach_volid {
 }
 }
 
+sub decompressor_info {
+my ($format, $comp) = @_;
+
+if ($format eq 'tgz' && !defined($comp)) {
+   ($format, $comp) = ('tar', 'gz');
+}
+
+my $decompressor = {
+   tar => {
+   gz => ['tar', '-z'],
+   lzo => ['tar', '--lzop'],
+   },
+   vma => {
+   gz => ['zcat'],
+   lzo => ['lzop', '-d', '-c'],
+   },
+};
+
+die "ERROR: archive format not defined\n"
+   if !defined($decompressor->{$format});
+
+my $decomp = $decompressor->{$format}->{$comp} if $comp;
+
+my $info = {
+   format => $format,
+   compression => $comp,
+   decompressor => $decomp,
+};
+
+return $info;
+}
+
+sub archive_info {
+my ($archive) = shift;
+my $info;
+
+my $volid = basename($archive);
+if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/) {
+   $info = decompressor_info($2, $3);
+   $info->{type} = $1;
+} else {
+   die "ERROR: couldn't determine format and compression type\n";
+}
+
+return $info;
+}
+
 sub extract_vzdump_config_tar {
 my ($archive, $conf_re) = @_;
 
@@ -1329,16 +1376,12 @@ sub extract_vzdump_config_vma {
 };
 
 
+my $info = archive_info($archive);
+$comp //= $info->{compression};
+my $decompressor = $info->{decompressor};
+
 if ($comp) {
-   my $uncomp;
-   if ($comp eq 'gz') {
-   $uncomp = ["zcat", $archive];
-   } elsif ($comp eq 'lzo') {
-   $uncomp = ["lzop", "-d", "-c", $archive];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
-   $cmd = [$uncomp, ["vma", "config", "-"]];
+   $cmd = [ [@$decompressor, $archive], ["vma", "config", "-"] ];
 
# in some cases, lzop/zcat exits with 1 when its stdout pipe is
# closed early by vma, detect this and ignore the exit code later
@@ -1388,20 +1431,14 @@ sub extract_vzdump_config {
 }
 
 my $archive = abs_filesystem_path($cfg, $volid);
+my $info = archive_info($archive);
+my $format = $info->{format};
+my $comp = $info->{compression};
+my $type = $info->{type};
 
-if ($volid =~ /vzdump-(lxc|openvz)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|(tar(\.(gz|lzo))?))$/) {
+if ($type eq 'lxc' || $type eq 'openvz') {
	return extract_vzdump_config_tar($archive, qr!^(\./etc/vzdump/(pct|vps)\.conf)$!);
-} elsif ($volid =~ /vzdump-qemu-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/) {
-   my $format;
-   my $comp;
-   if ($7 eq 'tgz') {
-   $format = 'tar';
-   $comp = 'gz';
-   } else {
-   $format = $9;
-   $comp = $11 if defined($11);
-   }
-
+} elsif ($type eq 'qemu') {
if ($format eq 'tar') {
	    return extract_vzdump_config_tar($archive, qr!\(\./qemu-server\.conf\)!);
} else {
diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
new file mode 100755
index 000..042112c
--- /dev/null
+++ b/test/run_parser_tests.pl
@@ -0,0 +1,12 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use TAP::Harness;
+
+my $harness = TAP::Harness->new( { verbosity => -1 });
+my $res = $harness->runtests("test_archive_info.pm");
+
+exit -1 if !$res || $res->{failed} || $res->{parse_errors};
+
diff --git a/test/test_archive_info.pm b/test/test_archive_info.pm
new file mode 100644
index 000..464cc89
--- /dev/null
+++ b/test/test_archive_info.pm
@@ -0,0 +1,54 @@
+package PVE::Storage::TestArch

[pve-devel] [PATCH manager v3] Fix #2124: Add support for zstd

2020-04-08 Thread Alwin Antreich
This patch adds zstd to the compression selection for backup on the GUI
and adds .zst to the backup file filter. It also includes zstd as a
package install dependency.
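The thread-count handling added to compressor_info below can be sketched like this (a Python sketch; the actual code is Perl and reads the core count via PVE::ProcFSTools::read_cpuinfo):

```python
def zstd_thread_count(opt_threads, cpus):
    """Return the zstd --threads value: the configured count, defaulting to 1,
    where 0 means 'use half of the available cores, rounded up'."""
    threads = opt_threads if opt_threads is not None else 1
    if threads == 0:
        threads = (cpus + 1) // 2
    return threads

def compressor_info_zstd(opt_threads, cpus):
    """Mirror of the zstd branch: (command string, file extension)."""
    return ('zstd --threads={}'.format(zstd_thread_count(opt_threads, cpus)), 'zst')
```

So on an 8-core machine with the option set to 0, the backup pipeline would run `zstd --threads=4`.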

Signed-off-by: Alwin Antreich 
---
 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f3274196..e97bd817 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -609,6 +609,13 @@ sub compressor_info {
} else {
return ('gzip --rsyncable', 'gz');
}
+} elsif ($opt_compress eq 'zstd') {
+   my $zstd_threads = $opts->{zstd} // 1;
+   if ($zstd_threads == 0) {
+   my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
+   $zstd_threads = int(($cpuinfo->{cpus} + 1)/2);
+   }
+   return ("zstd --threads=${zstd_threads}", 'zst');
 } else {
die "internal error - unknown compression option '$opt_compress'";
 }
@@ -620,7 +627,7 @@ sub get_backup_file_list {
 my $bklist = [];
 foreach my $fn (<$dir/${bkname}-*>) {
next if $exclude_fn && $fn eq $exclude_fn;
-   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
+   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!) {
$fn = "$dir/$1"; # untaint
my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
push @$bklist, [$fn, $t];
@@ -928,7 +935,7 @@ sub exec_backup_task {
debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
unlink $d->[0];
my $logfn = $d->[0];
-   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
+   $logfn =~ s/\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?))$/\.log/;
unlink $logfn;
}
}
diff --git a/debian/control b/debian/control
index ec5267a4..4ba05c6f 100644
--- a/debian/control
+++ b/debian/control
@@ -60,6 +60,7 @@ Depends: apt-transport-https | apt (>= 1.5~),
  logrotate,
  lsb-base,
  lzop,
+ zstd,
  novnc-pve,
  pciutils,
  perl (>= 5.10.0-19),
diff --git a/www/manager6/form/CompressionSelector.js 
b/www/manager6/form/CompressionSelector.js
index 8938fc0e..e8775e71 100644
--- a/www/manager6/form/CompressionSelector.js
+++ b/www/manager6/form/CompressionSelector.js
@@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
 comboItems: [
 ['0', Proxmox.Utils.noneText],
 ['lzo', 'LZO (' + gettext('fast') + ')'],
-['gzip', 'GZIP (' + gettext('good') + ')']
+['gzip', 'GZIP (' + gettext('good') + ')'],
+['zstd', 'ZSTD (' + gettext('better') + ')']
 ]
 });
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 4/7] Fix: backup: ctime taken from stat

2020-04-08 Thread Alwin Antreich
not the file name. The vzdump file was passed to the regex with its full
path, but that regex captures the time from the file name alone to
calculate the epoch.

As the regex didn't match, the ctime from stat was taken instead. This
resulted in the ctime showing when the file was last changed, not when
the backup was made.
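The fixed behaviour can be illustrated with a short sketch: the timestamp must come from the file name, never from stat (Python sketch with hypothetical names; the Perl code uses timelocal with the local timezone, here an explicit UTC offset is passed so the example is stable):

```python
import re
from datetime import datetime, timezone, timedelta

# Capture the timestamp fields from a vzdump backup file name.
BACKUP_RE = re.compile(
    r'vzdump-(?:lxc|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{4})?(\d{2})?'.replace(
        r'(\d{4})?(\d{2})?', r'(\d{2})_(\d{2})_(\d{2})')
)

def backup_ctime(filename, tz):
    """Epoch of the backup, taken from the file name; None if it doesn't match
    (the buggy fallback was to use stat's ctime instead)."""
    m = BACKUP_RE.search(filename)
    if not m:
        return None
    y, mo, d, h, mi, s = (int(g) for g in m.groups())
    return int(datetime(y, mo, d, h, mi, s, tzinfo=tz).timestamp())

# 2020-03-30 21:11:40 CEST (UTC+2) -> 1585595500, matching the adjusted tests.
cest = timezone(timedelta(hours=2))
```

With the full path passed in, the regex still matches via search on the trailing file name, which is exactly what the one-line fix in Plugin.pm restores.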

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm |  3 ++-
 test/test_list_volumes.pm | 16 
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 8c0dae1..0ab44ce 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -952,7 +952,8 @@ my $get_subdir_files = sub {
	next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
 
my $format = $2;
-   $info = { volid => "$sid:backup/$1", format => $format };
+   $fn = $1;
+   $info = { volid => "$sid:backup/$fn", format => $format };
 
	if ($fn =~ m!^vzdump\-(?:lxc|qemu)\-(?:[1-9][0-9]{2,8})\-(\d{4})_(\d{2})_(\d{2})\-(\d{2})_(\d{2})_(\d{2})\.${format}$!) {
my $epoch = timelocal($6, $5, $4, $3, $2-1, $1 - 1900);
diff --git a/test/test_list_volumes.pm b/test/test_list_volumes.pm
index 4b3fdb7..169c8be 100644
--- a/test/test_list_volumes.pm
+++ b/test/test_list_volumes.pm
@@ -106,9 +106,9 @@ my @tests = (
{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-0.qcow2' },
{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-1.qcow2' },
{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-2.qcow2' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma' },
+   { 'content' => 'backup',   'ctime' => 1585595500,'format' => 
'vma.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz' },
+   { 'content' => 'backup',   'ctime' => 1585595565,'format' => 
'vma.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo' },
+   { 'content' => 'backup',   'ctime' => 1585595635,'format' => 
'vma', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma' },
{ 'content' => 'snippets', 'ctime' => DEFAULT_CTIME, 'format' => 
'snippet', 'size' => DEFAULT_SIZE, 'volid' => 'local:snippets/hookscript.pl' },
{ 'content' => 'snippets', 'ctime' => DEFAULT_CTIME, 'format' => 
'snippet', 'size' => DEFAULT_SIZE, 'volid' => 'local:snippets/userconfig.yaml' 
},
],
@@ -124,9 +124,9 @@ my @tests = (
],
expected => [
{ 'content' => 'rootdir',  'ctime' => DEFAULT_CTIME, 'format' => 
'raw', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16112', 'volid' => 'local:16112/vm-16112-disk-0.raw' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'tar.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'tar.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'tgz', 'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_59_30.tgz' },
+   { 'content' => 'backup',   'ctime' => 1585597170,'format' => 
'tar.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 

[pve-devel] [PATCH qemu-server v3 2/2] Fix #2124: Add support for zstd

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e5bf41b..c72ddf4 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7039,7 +7039,7 @@ sub complete_backup_archives {
 my $res = [];
 foreach my $id (keys %$data) {
foreach my $item (@{$data->{$id}}) {
-   next if $item->{format} !~ m/^vma\.(gz|lzo)$/;
-   next if $item->{format} !~ m/^vma\.(${\PVE::Storage::Plugin::COMPRESSOR_RE})$/;
push @$res, $item->{volid} if defined($item->{volid});
}
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 2/7] test: parse_volname

2020-04-08 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/run_parser_tests.pl   |  2 +-
 test/test_parse_volname.pm | 95 ++
 2 files changed, 96 insertions(+), 1 deletion(-)
 create mode 100644 test/test_parse_volname.pm

diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
index 042112c..79093aa 100755
--- a/test/run_parser_tests.pl
+++ b/test/run_parser_tests.pl
@@ -6,7 +6,7 @@ use warnings;
 use TAP::Harness;
 
 my $harness = TAP::Harness->new( { verbosity => -1 });
-my $res = $harness->runtests("test_archive_info.pm");
+my $res = $harness->runtests("test_archive_info.pm", "test_parse_volname.pm");
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
 
diff --git a/test/test_parse_volname.pm b/test/test_parse_volname.pm
new file mode 100644
index 000..84665d3
--- /dev/null
+++ b/test/test_parse_volname.pm
@@ -0,0 +1,95 @@
+package PVE::Storage::TestParseVolname;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my @tests = (
+# VM images
+[ '4321/base-4321-disk-0.raw/1234/vm-1234-disk-0.raw', ['images', 
'vm-1234-disk-0.raw',   '1234', 'base-4321-disk-0.raw',   '4321', undef, 
'raw'],'VM disk image, linked, raw' ],
+[ '4321/base-4321-disk-0.qcow2/1234/vm-1234-disk-0.qcow2', ['images', 
'vm-1234-disk-0.qcow2', '1234', 'base-4321-disk-0.qcow2', '4321', undef, 
'qcow2'],  'VM disk image, linked, qcow2' ],
+[ '4321/base-4321-disk-0.vmdk/1234/vm-1234-disk-0.vmdk',   ['images', 
'vm-1234-disk-0.vmdk',  '1234', 'base-4321-disk-0.vmdk',  '4321', undef, 
'vmdk'],   'VM disk image, linked, vmdk' ],
+
+[ '4321/vm-4321-disk-0.qcow2/1234/vm-1234-disk-0.qcow2',['images', 
'vm-1234-disk-0.qcow2', '1234', 'vm-4321-disk-0.qcow2', '4321', undef, 
'qcow2'], 'VM disk image, linked, qcow2, vm- as base-' ],
+
+[ '1234/vm-1234-disk-1.raw',   ['images', 'vm-1234-disk-1.raw',   '1234', 
undef, undef, undef, 'raw'],   'VM disk image, raw' ],
+[ '1234/vm-1234-disk-1.qcow2', ['images', 'vm-1234-disk-1.qcow2', '1234', 
undef, undef, undef, 'qcow2'], 'VM disk image, qcow2' ],
+[ '1234/vm-1234-disk-1.vmdk',  ['images', 'vm-1234-disk-1.vmdk',  '1234', 
undef, undef, undef, 'vmdk'],  'VM disk image, vmdk' ],
+
+[ '4321/base-4321-disk-0.raw',   ['images', 'base-4321-disk-0.raw',   
'4321', undef, undef, 'base-', 'raw'],   'VM disk image, base, raw' ],
+[ '4321/base-4321-disk-0.qcow2', ['images', 'base-4321-disk-0.qcow2', 
'4321', undef, undef, 'base-', 'qcow2'], 'VM disk image, base, qcow2' ],
+[ '4321/base-4321-disk-0.vmdk',  ['images', 'base-4321-disk-0.vmdk',  
'4321', undef, undef, 'base-', 'vmdk'],  'VM disk image, base, vmdk' ],
+
+# iso
+[ 'iso/some-installation-disk.iso', ['iso', 'some-installation-disk.iso'], 
'ISO image, iso' ],
+[ 'iso/some-other-installation-disk.img', ['iso', 
'some-other-installation-disk.img'], 'ISO image, img' ],
+
+# container templates
+[ 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz', ['vztmpl', 
'debian-10.0-standard_10.0-1_amd64.tar.gz'], 'Container template tar.gz' ],
+[ 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz', ['vztmpl', 
'debian-10.0-standard_10.0-1_amd64.tar.xz'], 'Container template tar.xz' ],
+
+# container rootdir
+[ 'rootdir/1234',   ['rootdir', '1234',
  '1234'], 'Container rootdir, sub directory' ],
+[ '1234/subvol-1234-disk-0.subvol', ['images',  
'subvol-1234-disk-0.subvol', '1234', undef, undef, undef, 'subvol'],  
'Container rootdir, subvol' ],
+
+# backup archives
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma', ['backup', 
'vzdump-qemu-16110-2020_03_30-21_12_40.vma', '16110'], 'Backup archive, 
vma' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  ['backup', 
'vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  '16110'], 'Backup archive, 
vma, gz' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', ['backup', 
'vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', '16110'], 'Backup archive, 
vma, lzo' ],
+
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar', ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tar', '16112'], 'Backup archive, lxc' 
],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.gz',  ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tar.gz',  '16112'], 'Backup archive, lxc, 
gz' ],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tgz', ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tgz', '16112'], 'Backup archive, lxc, 
tgz' ],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo', ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo', '16112'], 'Backup archive, lxc, 
lzo' ],
+
+[ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar', ['backup', 
'vzdump-openvz-16112-2020_03_30-21_39

[pve-devel] [PATCH container v3] Fix: #2124 add zstd

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 9faec63..91904b6 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -123,6 +123,7 @@ sub restore_tar_archive {
'.bz2' => '-j',
'.xz'  => '-J',
'.lzo'  => '--lzop',
+   '.zst'  => '--zstd',
);
if ($archive =~ /\.tar(\.[^.]+)?$/) {
if (defined($1)) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server 2/2] Die on misaligned memory for hotplugging

2020-03-19 Thread Alwin Antreich
On Wed, Mar 18, 2020 at 04:18:45PM +0100, Stefan Reiter wrote:
> ...instead of booting with an invalid config once and then silently
> changing the memory size for consequent VM starts.
> 
> Signed-off-by: Stefan Reiter 
> ---
Tested-by: Alwin Antreich 

> 
> This confused me for a bit, I don't think that's very nice behaviour as it
> stands.
> 
>  PVE/QemuServer/Memory.pm | 7 ++-
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index ae9598b..b7cf5d5 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -321,11 +321,8 @@ sub config {
>   push @$cmd, "-object" , $mem_object;
>   push @$cmd, "-device", 
> "pc-dimm,id=$name,memdev=mem-$name,node=$numanode";
>  
> - #if dimm_memory is not aligned to dimm map
> - if($current_size > $memory) {
> -  $conf->{memory} = $current_size;
> -  PVE::QemuConfig->write_config($vmid, $conf);
> - }
> + die "memory size ($memory) must be aligned to $dimm_size for hotplugging\n"
same nit as in my mail to path 1/2

> + if $current_size > $memory;
>   });
>  }
>  }
> -- 
> 2.25.1
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server 1/2] Disable memory hotplugging for custom NUMA topologies

2020-03-19 Thread Alwin Antreich
On Wed, Mar 18, 2020 at 04:18:44PM +0100, Stefan Reiter wrote:
> This cannot work, since we adjust the 'memory' property of the VM config
> on hotplugging, but then the user-defined NUMA topology won't match for
> the next start attempt.
> 
> Check needs to happen here, since it otherwise fails early with "total
> memory for NUMA nodes must be equal to vm static memory".
> 
> With this change the error message reflects what is actually happening
> and doesn't allow VMs with exactly 1GB of RAM either.
> 
> Signed-off-by: Stefan Reiter 
> ---
Tested-by: Alwin Antreich 

> 
> Came up after investigating:
> https://forum.proxmox.com/threads/task-error-total-memory-for-numa-nodes-must-be-equal-to-vm-static-memory.67251/
> 
> Spent way too much time 'fixing' it before realizing that it can never work
> like this anyway...
> 
>  PVE/QemuServer/Memory.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index d500b3b..ae9598b 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -225,6 +225,12 @@ sub config {
>  if ($hotplug_features->{memory}) {
>   die "NUMA needs to be enabled for memory hotplug\n" if !$conf->{numa};
>   die "Total memory is bigger than ${MAX_MEM}MB\n" if $memory > $MAX_MEM;
> +
> + for (my $i = 0; $i < $MAX_NUMA; $i++) {
> + die "cannot enable memory hotplugging with custom NUMA topology\n"
s/hotplugging/hotplug/ or s/hotplugging/hot plugging/
The word hotplugging doesn't seem to exist in dictionaries.

> + if $conf->{"numa$i"};
> + }
> +
>   my $sockets = 1;
>   $sockets = $conf->{sockets} if $conf->{sockets};
>  
> -- 
> 2.25.1
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 1/3] ceph: remove unused variable assignment

2020-03-11 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Ceph/Services.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index c17008cf..7015cafe 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -63,7 +63,6 @@ sub get_cluster_service {
 sub ceph_service_cmd {
 my ($action, $service) = @_;
 
-my $pve_ceph_cfgpath = PVE::Ceph::Tools::get_config('pve_ceph_cfgpath');
    if ($service && $service =~ m/^(mon|osd|mds|mgr|radosgw)(\.(${\SERVICE_REGEX}))?$/) {
$service = defined($3) ? "ceph-$1\@$3" : "ceph-$1.target";
 } else {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 3/3] Fix #2422: allow multiple Ceph public networks

2020-03-11 Thread Alwin Antreich
Multiple public networks can be defined in the ceph.conf. The networks
need to be routed to each other.

On first service start the Ceph MON will register itself with one of the
IPs configured locally, matching one of the public networks defined in
the ceph.conf.
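The selection logic of the patch can be sketched as follows (a Python sketch with hypothetical names; the real code uses PVE::Tools::split_list and PVE::Network::get_local_ip_from_cidr):

```python
import ipaddress
import re

def split_list(value):
    """Split a comma/semicolon/whitespace separated list, as PVE's helper does."""
    return [p for p in re.split(r'[,;\s]+', value or '') if p]

def allowed_ips(public_nets, local_ips):
    """All local IPs that fall into any of the configured public networks."""
    nets = [ipaddress.ip_network(n) for n in public_nets]
    return [ip for ip in local_ips
            if any(ipaddress.ip_address(ip) in net for net in nets)]
```

With `public_network = 10.10.10.0/24, 10.10.20.0/24` and local addresses `10.10.10.5` and `192.168.0.2`, only `10.10.10.5` remains a candidate for the new MON.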

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph/MON.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 3baeac52..5128fea2 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -33,11 +33,19 @@ my $find_mon_ip = sub {
 }
 $pubnet //= $cfg->{global}->{public_network};
 
+my $public_nets = [ PVE::Tools::split_list($pubnet) ];
+warn "Multiple ceph public networks detected on $node: $pubnet\n".
+"Networks must be capable of routing to each other.\n" if 
scalar(@$public_nets) > 1;
+
 if (!$pubnet) {
return $overwrite_ip // PVE::Cluster::remote_node_ip($node);
 }
 
-my $allowed_ips = PVE::Network::get_local_ip_from_cidr($pubnet);
+my $allowed_ips;
+foreach my $net (@$public_nets) {
+push @$allowed_ips, @{ PVE::Network::get_local_ip_from_cidr($net) };
+}
+
 die "No active IP found for the requested ceph public network '$pubnet' on 
node '$node'\n"
if scalar(@$allowed_ips) < 1;
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager 2/3] Fix: ceph: mon_address not considered by new MON

2020-03-11 Thread Alwin Antreich
The public_addr option for creating a new MON is only valid for manual
startup (since Ceph Jewel) and is just ignored by ceph-mon during setup.
Since the MON is started after creation through systemd without an IP
specified, it tries to auto-select an IP.

Before this patch the public_addr was only explicitly written to the
ceph.conf if no public_network was set. The mon_address is only needed
in the config on the first start of the MON.

The ceph-mon itself tries to select an IP under the following conditions:
- no public_network or public_addr is in the ceph.conf
* startup fails

- public_network is in the ceph.conf
* with a single network, take the first available IP
* on multiple networks, walk through the list orderly and start on
  the first network where an IP is found
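The described walk can be sketched like this (a Python sketch with hypothetical names; it mirrors the behaviour listed above, not the actual ceph-mon source):

```python
import ipaddress

def monitor_ip(public_networks, local_ips):
    """Walk the public networks in order and return the first local IP that
    belongs to one of them; None means the MON startup would fail."""
    if not public_networks:
        return None  # no public_network and no public_addr: startup fails
    for net in public_networks:
        network = ipaddress.ip_network(net)
        for ip in local_ips:
            if ipaddress.ip_address(ip) in network:
                return ip
    return None
```

This is why writing the mon_address (public_addr) into ceph.conf for the first boot is needed: without it, the auto-selection may pick a different IP than the one chosen at creation time.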

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph/MON.pm | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index 18b563c9..3baeac52 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -255,7 +255,7 @@ __PACKAGE__->register_method ({
run_command("monmaptool --create --clobber --addv 
$monid '[v2:$monaddr:3300,v1:$monaddr:6789]' --print $monmap");
}
 
-   run_command("ceph-mon --mkfs -i $monid --monmap $monmap 
--keyring $mon_keyring --public-addr $ip");
+   run_command("ceph-mon --mkfs -i $monid --monmap $monmap 
--keyring $mon_keyring");
run_command("chown ceph:ceph -R $mondir");
};
my $err = $@;
@@ -275,11 +275,8 @@ __PACKAGE__->register_method ({
}
$monhost .= " $ip";
$cfg->{global}->{mon_host} = $monhost;
-   if (!defined($cfg->{global}->{public_network})) {
-   # if there is no info about the public_network
-   # we have to set it explicitly for the monitor
-   $cfg->{$monsection}->{public_addr} = $ip;
-   }
+   # The IP is needed in the ceph.conf for the first boot
+   $cfg->{$monsection}->{public_addr} = $ip;
 
cfs_write_file('ceph.conf', $cfg);
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH docs] pveceph: reword memory precondition

2020-02-17 Thread Alwin Antreich
and add the memory target for OSDs, available since Luminous, as well as
distinguish the memory usage between the OSD backends.

Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index b3bbadf..8dc8568 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -86,9 +86,16 @@ provide enough resources for stable and durable Ceph 
performance.
 .Memory
 Especially in a hyper-converged setup, the memory consumption needs to be
 carefully monitored. In addition to the intended workload from virtual machines
-and container, Ceph needs enough memory available to provide good and stable
-performance. As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory
-will be used by an OSD. OSD caching will use additional memory.
+and containers, Ceph needs enough memory available to provide excellent and
+stable performance.
+
+As a rule of thumb, for roughly **1 TiB of data, 1 GiB of memory** will be used
+by an OSD. Especially during recovery, rebalancing or backfilling.
+
+The daemon itself will use additional memory. The Bluestore backend of the
+daemon requires by default **3-5 GiB of memory** (adjustable). In contrast, the
+legacy Filestore backend uses the OS page cache and the memory consumption is
+generally related to PGs of an OSD daemon.
 
 .Network
 We recommend a network bandwidth of at least 10 GbE or more, which is used
-- 
2.20.1
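The rule of thumb from the reworded section can be turned into a quick sizing estimate (a Python sketch; the 1 GiB-per-TiB figure and the 3-5 GiB BlueStore memory target are the values quoted in the patch, the helper name is made up):

```python
def ceph_osd_memory_gib(osd_count, data_tib_per_osd, bluestore_target_gib=4):
    """Rough per-node memory need for OSDs: ~1 GiB per TiB of stored data,
    plus the BlueStore memory target (default 3-5 GiB) per daemon."""
    per_osd = data_tib_per_osd * 1 + bluestore_target_gib
    return osd_count * per_osd
```

For a hyper-converged node with four 4 TiB OSDs this estimates roughly 32 GiB reserved for Ceph alone, before any VM or container workload.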


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2020-02-05 Thread Alwin Antreich
On Thu, Nov 14, 2019 at 03:01:37PM +0100, Thomas Lamprecht wrote:
> On 11/14/19 6:30 AM, Dietmar Maurer wrote:
> >> The main reason for this is to identify backups residing on an old backup 
> >> store like an archive.
> >>  
> >> But I am open. Would you prefer having a manifest included in the archive 
> >> or as a separate file on the same storage?
> > 
> > The backup archive already contains the full VM config. I thought the 
> > manifest should be
> > an extra file on the same storage.
> > 
> 
> An idea for the backup note/description feature request is to have
> a simple per backup file where that info is saved, having the same
> base name as the backup archive and the log, so those can easily get
> moved/copied around all at once by using an extension glob for the
> file ending.
> 
> Simple manifest works too, needs to always have the cluster storage
> lock though, whereas a per backup file could do with a vmid based one
> (finer granularity avoids lock contention). Also it makes it less easier
> to copy a whole archive to another storage/folder.
If I didn't miss an email, then this feature request (#438 [0]) seems to
be still open (I'm the assignee).

In which direction should this feature go? Per backup manifest?

Or maybe extending the vzdump CLI with an info command that displays
some information parsed from the backup logfile itself? Since the VM/CT
name is already in the log, would that be a possibility too?

Example from backup logfiles:
```
2020-02-04 15:58:55 INFO: VM Name: testvm
2020-01-13 15:39:35 INFO: CT Name: test
```

[0] https://bugzilla.proxmox.com/show_bug.cgi?id=438
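Parsing the name out of such log lines could look like this (a Python sketch, hypothetical helper name; the real implementation would live in the Perl vzdump code):

```python
import re

# Matches the 'VM Name: ...' / 'CT Name: ...' lines shown above.
NAME_RE = re.compile(r'INFO: (VM|CT) Name: (.+)$')

def guest_name_from_log(lines):
    """Return (guest_type, name) from vzdump task log lines, or None."""
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            return m.group(1), m.group(2)
    return None
```

That would let a `vzdump info` subcommand show the guest name for old archives without needing a separate manifest file.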

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager v2 1/2] Fix #2124: Add support for zstd

2020-02-04 Thread Alwin Antreich
On Mon, Feb 03, 2020 at 05:51:38PM +0100, Stefan Reiter wrote:
> On 1/31/20 5:00 PM, Alwin Antreich wrote:
> > Adds the zstd to the compression selection for backup on the GUI and the
> > .zst extension to the backup file filter.
> > 
> > Signed-off-by: Alwin Antreich 
> > ---
> > 
> >   PVE/VZDump.pm| 6 --
> >   www/manager6/form/CompressionSelector.js | 3 ++-
> >   2 files changed, 6 insertions(+), 3 deletions(-)
> > 
> > diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
> > index 3caa7ab8..21032fd6 100644
> > --- a/PVE/VZDump.pm
> > +++ b/PVE/VZDump.pm
> > @@ -592,6 +592,8 @@ sub compressor_info {
> > } else {
> > return ('gzip --rsyncable', 'gz');
> > }
> > +} elsif ($opt_compress eq 'zstd') {
> > +   return ('zstd', 'zst');
> 
> Did some testing, two things I noticed, first one regarding this patch
> especially:
> 
> 1) By default zstd uses only one core. I feel like this should be increased
> (or made configurable as with pigz?). Also, zstd has an '--rsyncable flag'
> like gzip, might be good to include that too (according to the man page it
> only has a 'negligible impact on compression ratio').
Thanks for spotting, I put this into my v3.

> 
> 2) The task log is somewhat messed up... It seems zstd prints a status as
> well, additionally to our own progress meter:
True, I will silence the output. This also makes it similar to the
lzo/gzip compression output.

> 
> 
> _03-17_05_09.vma.zst : 13625 MB... progress 94% (read 32298172416 bytes,
> duration 34 sec)
> 
> _03-17_05_09.vma.zst : 13668 MB...
> _03-17_05_09.vma.zst : 13721 MB...
> _03-17_05_09.vma.zst : 13766 MB...
> _03-17_05_09.vma.zst : 13821 MB...
> _03-17_05_09.vma.zst : 13869 MB...
> _03-17_05_09.vma.zst : 13933 MB... progress 95% (read 32641777664 bytes,
> duration 35 sec)
> 
> _03-17_05_09.vma.zst : 14014 MB...
> _03-17_05_09.vma.zst : 14091 MB...
> 
> 
> Looks a bit unsightly IMO.
> 
> But functionality wise it works fine, tried with a VM and a container, so
> 
> Tested-by: Stefan Reiter 
Thanks for testing.

> 
> for the series.
> 
> >   } else {
> > die "internal error - unknown compression option '$opt_compress'";
> >   }
> > @@ -603,7 +605,7 @@ sub get_backup_file_list {
> >   my $bklist = [];
> >   foreach my $fn (<$dir/${bkname}-*>) {
> > next if $exclude_fn && $fn eq $exclude_fn;
> > -   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
> > +   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!) {
> > $fn = "$dir/$1"; # untaint
> > my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
> > push @$bklist, [$fn, $t];
> > @@ -863,7 +865,7 @@ sub exec_backup_task {
> > debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
> > unlink $d->[0];
> > my $logfn = $d->[0];
> > -   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
> > +   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))$/\.log/;
> > unlink $logfn;
> > }
> > }
> > diff --git a/www/manager6/form/CompressionSelector.js 
> > b/www/manager6/form/CompressionSelector.js
> > index 8938fc0e..e8775e71 100644
> > --- a/www/manager6/form/CompressionSelector.js
> > +++ b/www/manager6/form/CompressionSelector.js
> > @@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
> >   comboItems: [
> >   ['0', Proxmox.Utils.noneText],
> >   ['lzo', 'LZO (' + gettext('fast') + ')'],
> > -['gzip', 'GZIP (' + gettext('good') + ')']
> > +['gzip', 'GZIP (' + gettext('good') + ')'],
> > +['zstd', 'ZSTD (' + gettext('better') + ')']
> >   ]
> >   });
> > 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager v2 2/2] Fix #2124: Add zstd pkg as install dependency

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index bcc6bb6e..497395da 100644
--- a/debian/control
+++ b/debian/control
@@ -60,6 +60,7 @@ Depends: apt-transport-https | apt (>= 1.5~),
  logrotate,
  lsb-base,
  lzop,
+ zstd,
  novnc-pve,
  pciutils,
  perl (>= 5.10.0-19),
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server v2 1/2] restore: replace archive regex

2020-01-31 Thread Alwin Antreich
to reduce code duplication, as archive_info provides the same
information.

Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 36 ++--
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7374bf1..ff7dcab 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5879,28 +5879,9 @@ sub tar_restore_cleanup {
 sub restore_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
-my $format = $opts->{format};
-my $comp;
-
-if ($archive =~ m/\.tgz$/ || $archive =~ m/\.tar\.gz$/) {
-   $format = 'tar' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.tar$/) {
-   $format = 'tar' if !$format;
-} elsif ($archive =~ m/.tar.lzo$/) {
-   $format = 'tar' if !$format;
-   $comp = 'lzop';
-} elsif ($archive =~ m/\.vma$/) {
-   $format = 'vma' if !$format;
-} elsif ($archive =~ m/\.vma\.gz$/) {
-   $format = 'vma' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.vma\.lzo$/) {
-   $format = 'vma' if !$format;
-   $comp = 'lzop';
-} else {
-   $format = 'vma' if !$format; # default
-}
+my $info = PVE::Storage::archive_info($archive);
+my $format = $opts->{format} // $info->{format};
+my $comp = $info->{compression};
 
 # try to detect archive format
 if ($format eq 'tar') {
@@ -6212,14 +6193,9 @@ sub restore_vma_archive {
 }
 
 if ($comp) {
-   my $cmd;
-   if ($comp eq 'gzip') {
-   $cmd = ['zcat', $readfrom];
-   } elsif ($comp eq 'lzop') {
-   $cmd = ['lzop', '-d', '-c', $readfrom];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
+   my $info = PVE::Storage::archive_info(undef, $comp, 'vma');
+   my $cmd = $info->{decompressor};
+   push @$cmd, $readfrom;
$add_pipe->($cmd);
 }
 
-- 
2.20.1




[pve-devel] [PATCH qemu-server v2 2/2] Fix #2124: Add support for zstd

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ff7dcab..8af1cb6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7217,7 +7217,7 @@ sub complete_backup_archives {
 my $res = [];
 foreach my $id (keys %$data) {
foreach my $item (@{$data->{$id}}) {
-   next if $item->{format} !~ m/^vma\.(gz|lzo)$/;
+   next if $item->{format} !~ m/^vma\.(gz|lzo|zst)$/;
push @$res, $item->{volid} if defined($item->{volid});
}
 }
-- 
2.20.1




[pve-devel] [PATCH manager v2 1/2] Fix #2124: Add support for zstd

2020-01-31 Thread Alwin Antreich
Adds the zstd to the compression selection for backup on the GUI and the
.zst extension to the backup file filter.

Signed-off-by: Alwin Antreich 
---

 PVE/VZDump.pm| 6 --
 www/manager6/form/CompressionSelector.js | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 3caa7ab8..21032fd6 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -592,6 +592,8 @@ sub compressor_info {
} else {
return ('gzip --rsyncable', 'gz');
}
+} elsif ($opt_compress eq 'zstd') {
+   return ('zstd', 'zst');
 } else {
die "internal error - unknown compression option '$opt_compress'";
 }
@@ -603,7 +605,7 @@ sub get_backup_file_list {
 my $bklist = [];
 foreach my $fn (<$dir/${bkname}-*>) {
next if $exclude_fn && $fn eq $exclude_fn;
-   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
+   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!) {
$fn = "$dir/$1"; # untaint
my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
push @$bklist, [$fn, $t];
@@ -863,7 +865,7 @@ sub exec_backup_task {
debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
unlink $d->[0];
my $logfn = $d->[0];
-   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
+   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))$/\.log/;
unlink $logfn;
}
}
diff --git a/www/manager6/form/CompressionSelector.js b/www/manager6/form/CompressionSelector.js
index 8938fc0e..e8775e71 100644
--- a/www/manager6/form/CompressionSelector.js
+++ b/www/manager6/form/CompressionSelector.js
@@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
 comboItems: [
 ['0', Proxmox.Utils.noneText],
 ['lzo', 'LZO (' + gettext('fast') + ')'],
-['gzip', 'GZIP (' + gettext('good') + ')']
+['gzip', 'GZIP (' + gettext('good') + ')'],
+['zstd', 'ZSTD (' + gettext('better') + ')']
 ]
 });
-- 
2.20.1
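
For illustration, the dispatch in compressor_info() maps the configured
'compress' option to a shell command and a file extension. A minimal Python
sketch of that logic (the original is Perl; the pigz branch is omitted here,
and the lzo/none cases are assumptions based on the surrounding code):

```python
def compressor_info(opt_compress):
    # Map the vzdump 'compress' option to (command, file extension).
    # Sketch only: the real Perl also picks pigz when configured.
    if opt_compress is None or opt_compress == '0':
        return None, None
    if opt_compress in ('1', 'gzip'):
        return 'gzip --rsyncable', 'gz'
    if opt_compress == 'lzo':
        return 'lzop', 'lzo'
    if opt_compress == 'zstd':
        return 'zstd', 'zst'   # new with this patch
    raise ValueError(f"unknown compression option '{opt_compress}'")
```

A backup compressed this way then ends up with the matching suffix, e.g.
vzdump-qemu-100-....vma.zst.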




[pve-devel] [PATCH storage v2 1/3] compact regex for backup file filter

2020-01-31 Thread Alwin Antreich
This more compact form of the regex should allow easier addition of new
file extensions.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 2 +-
 PVE/Storage/Plugin.pm | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0bd103e..1688077 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -514,7 +514,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
+   } elsif ($path =~ m!^$backupdir/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
my $name = $1;
return ('iso', "$sid:backup/$name");
}
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 0c39cbd..58a801a 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -423,7 +423,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ m!^backup/([^/]+(\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)))$!) {
+} elsif ($volname =~ m!^backup/([^/]+(\.(tgz|((tar|vma)(\.(gz|lzo))?))))$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -910,7 +910,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
+   next if $fn !~ m!/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!;
 
$info = { volid => "$sid:backup/$1", format => $2 };
 
-- 
2.20.1
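
To illustrate the equivalence, here is a small Python sketch (the originals
are Perl) checking that the compact pattern accepts exactly the same set of
suffixes as the verbose alternation it replaces; the helper names are made
up for the example:

```python
import re

# Verbose alternation (old) vs. the compact form (new) from this patch.
OLD = re.compile(r'\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)$')
NEW = re.compile(r'\.(tgz|((tar|vma)(\.(gz|lzo))?))$')

SUFFIXES = ['.tar', '.tar.gz', '.tar.lzo', '.tgz', '.vma', '.vma.gz', '.vma.lzo']

def matches(pattern, names):
    # Return the subset of names the pattern accepts.
    return [n for n in names if pattern.search(n)]
```

Adding a new extension (like .zst later in this series) then only touches
the innermost (gz|lzo) group.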




[pve-devel] [PATCH guest-common v2] Fix: #2124 add zstd support

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/VZDump/Common.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..a661552 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -88,7 +88,7 @@ my $confdesc = {
type => 'string',
description => "Compress dump file.",
optional => 1,
-   enum => ['0', '1', 'gzip', 'lzo'],
+   enum => ['0', '1', 'gzip', 'lzo', 'zstd'],
default => '0',
 },
 pigz=> {
-- 
2.20.1




[pve-devel] [PATCH storage v2 2/3] storage: merge archive format/compressor

2020-01-31 Thread Alwin Antreich
detection into a separate function to reduce code duplication and allow
for easier modification.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm | 78 --
 1 file changed, 57 insertions(+), 21 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 1688077..390b343 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1265,6 +1265,52 @@ sub foreach_volid {
 }
 }
 
+sub archive_info {
+my ($archive, $comp, $format) = @_;
+my $type;
+
+if (!defined($comp) || !defined($format)) {
+   my $volid = basename($archive);
+   if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/) {
+   $type = $1;
+
+   if ($8 eq 'tgz') {
+   $format = 'tar';
+   $comp = 'gz';
+   } else {
+   $format = $10;
+   $comp = $12 if defined($12);
+   }
+   } else {
+   die "ERROR: couldn't determine format and compression type\n";
+   }
+}
+
+my $decompressor = {
+   gz  => {
+   'vma' => [ "zcat", $archive ],
+   'tar' => [ "tar", "-z", $archive ],
+   },
+   lzo => {
+   'vma' => [ "lzop", "-d", "-c", $archive ],
+   'tar' => [ "tar", "--lzop", $archive ],
+   },
+};
+
+my $info;
+$info->{'format'} = $format;
+$info->{'type'} = $type;
+$info->{'compression'} = $comp;
+
+if (defined($comp) && defined($format)) {
+   my $dcomp = $decompressor->{$comp}->{$format};
+   pop(@$dcomp) if !defined($archive);
+   $info->{'decompressor'} = $dcomp;
+}
+
+return $info;
+}
+
 sub extract_vzdump_config_tar {
 my ($archive, $conf_re) = @_;
 
@@ -1310,16 +1356,12 @@ sub extract_vzdump_config_vma {
 };
 
 
+my $info = archive_info($archive);
+$comp //= $info->{compression};
+my $decompressor = $info->{decompressor};
+
 if ($comp) {
-   my $uncomp;
-   if ($comp eq 'gz') {
-   $uncomp = ["zcat", $archive];
-   } elsif ($comp eq 'lzo') {
-   $uncomp = ["lzop", "-d", "-c", $archive];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
-   $cmd = [$uncomp, ["vma", "config", "-"]];
+   $cmd = [ $decompressor, ["vma", "config", "-"] ];
 
# in some cases, lzop/zcat exits with 1 when its stdout pipe is
# closed early by vma, detect this and ignore the exit code later
@@ -1360,20 +1402,14 @@ sub extract_vzdump_config {
 my ($cfg, $volid) = @_;
 
 my $archive = abs_filesystem_path($cfg, $volid);
+my $info = archive_info($archive);
+my $format = $info->{format};
+my $comp = $info->{compression};
+my $type = $info->{type};
 
-if ($volid =~ /vzdump-(lxc|openvz)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|(tar(\.(gz|lzo))?))$/) {
+if ($type eq 'lxc' || $type eq 'openvz') {
 	return extract_vzdump_config_tar($archive, qr!^(\./etc/vzdump/(pct|vps)\.conf)$!);
-} elsif ($volid =~ /vzdump-qemu-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/) {
-   my $format;
-   my $comp;
-   if ($7 eq 'tgz') {
-   $format = 'tar';
-   $comp = 'gz';
-   } else {
-   $format = $9;
-   $comp = $11 if defined($11);
-   }
-
+} elsif ($type eq 'qemu') {
if ($format eq 'tar') {
return extract_vzdump_config_tar($archive, qr!\(\./qemu-server\.conf\)!);
} else {
-- 
2.20.1
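
To make the new helper's behaviour concrete, here is a rough Python
translation of the filename-parsing part of archive_info() (group numbers
mirror Perl's $8/$10/$12; this is an illustrative sketch, not the actual
API):

```python
import re

# Hypothetical Python mirror of archive_info()'s filename parsing:
# derive (guest type, format, compression) from a vzdump archive name.
VZDUMP_RE = re.compile(
    r'vzdump-(lxc|openvz|qemu)-\d+-'
    r'(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})'
    r'\.(tgz|((tar|vma)(\.(gz|lzo))?))$'
)

def archive_info(filename):
    m = VZDUMP_RE.search(filename)
    if not m:
        raise ValueError("couldn't determine format and compression type")
    guest_type = m.group(1)
    ext = m.group(8)           # Perl's $8: full extension alternative
    if ext == 'tgz':
        fmt, comp = 'tar', 'gz'
    else:
        fmt = m.group(10)      # Perl's $10: 'tar' or 'vma'
        comp = m.group(12)     # Perl's $12: 'gz'/'lzo', or None
    return guest_type, fmt, comp
```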




[pve-devel] [PATCH storage v2 3/3] Fix: #2124 storage: add zstd support

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 10 +++---
 PVE/Storage/Plugin.pm |  4 ++--
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bf12634..51c8bc9 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -514,7 +514,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ m!^$backupdir/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
+   } elsif ($path =~ m!^$backupdir/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!) {
my $name = $1;
return ('iso', "$sid:backup/$name");
}
@@ -1271,7 +1271,7 @@ sub archive_info {
 
 if (!defined($comp) || !defined($format)) {
my $volid = basename($archive);
-   if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/) {
+   if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))$/) {
$type = $1;
 
if ($8 eq 'tgz') {
@@ -1295,6 +1295,10 @@ sub archive_info {
'vma' => [ "lzop", "-d", "-c", $archive ],
'tar' => [ "tar", "--lzop", $archive ],
},
+   zst => {
+   'vma' => [ "zstd", "-d", "-c", $archive ],
+   'tar' => [ "tar", "--zstd", $archive ],
+   },
 };
 
 my $info;
@@ -1369,7 +1373,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/) {
+   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: $output\n";
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 58a801a..c300c58 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -423,7 +423,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ m!^backup/([^/]+(\.(tgz|((tar|vma)(\.(gz|lzo))?))))$!) {
+} elsif ($volname =~ m!^backup/([^/]+(\.(tgz|((tar|vma)(\.(gz|lzo|zst))?))))$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -910,7 +910,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ m!/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!;
+   next if $fn !~ m!/([^/]+\.(tgz|((tar|vma)(\.(gz|lzo|zst))?)))$!;
 
$info = { volid => "$sid:backup/$1", format => $2 };
 
-- 
2.20.1
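
The decompressor table with the new zst entries can be pictured as follows —
a Python sketch of the command construction, including the "drop the file
name when no archive is given" detail (the function name is made up):

```python
def decompressor_cmd(comp, fmt, archive=None):
    # Command table as in archive_info(), now with zstd support.
    table = {
        'gz':  {'vma': ['zcat'],             'tar': ['tar', '-z']},
        'lzo': {'vma': ['lzop', '-d', '-c'], 'tar': ['tar', '--lzop']},
        'zst': {'vma': ['zstd', '-d', '-c'], 'tar': ['tar', '--zstd']},
    }
    cmd = list(table[comp][fmt])
    if archive is not None:
        # The Perl code instead pops the archive name off the list
        # when archive_info() is called without a path.
        cmd.append(archive)
    return cmd
```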




[pve-devel] [PATCH container v2] Fix #2124: Add support for zstd

2020-01-31 Thread Alwin Antreich
This seems to me like a totally new attempt, since so much time has passed. :)

Zstandard (zstd) [0] is a data compression algorithm, added in addition to
gzip and lzo for our backup/restore.

v1 -> v2:
* factored out the decompressor info first, as Thomas suggested
* made the regex pattern of backup files more compact, easier to
  read (hopefully)
* less code changes for container restores

Thanks for any comment or suggestion in advance.

[0] https://facebook.github.io/zstd/


__pve-container__

Alwin Antreich (1):
  Fix: #2124 add zstd support

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)


__qemu-server__

Alwin Antreich (2):
  restore: replace archive format/compression regex
  Fix #2124: Add support for zstd

 PVE/QemuServer.pm | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)


__pve-storage__

Alwin Antreich (3):
  backup: more compact regex for backup file filter
  storage: merge archive format/compressor detection
  Fix: #2124 storage: add zstd support

 PVE/Storage.pm| 86 +++
 PVE/Storage/Plugin.pm |  4 +-
 2 files changed, 65 insertions(+), 25 deletions(-)


__pve-guest-common__

Alwin Antreich (1):
  Fix: #2124 add zstd support

 PVE/VZDump/Common.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


__pve-manager__

Alwin Antreich (2):
  Fix #2124: Add support for zstd
  Fix #2124: Add zstd pkg as install dependency

 PVE/VZDump.pm| 6 --
 debian/control   | 1 +
 www/manager6/form/CompressionSelector.js | 3 ++-
 3 files changed, 7 insertions(+), 3 deletions(-)

-- 
2.20.1




[pve-devel] [PATCH container v2] Fix: #2124 add zstd support

2020-01-31 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
v1 -> v2: less code changes for container restores

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index c13f30d..65d5068 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -79,6 +79,7 @@ sub restore_archive {
'.bz2' => '-j',
'.xz'  => '-J',
'.lzo'  => '--lzop',
+   '.zst'  => '--zstd',
);
if ($archive =~ /\.tar(\.[^.]+)?$/) {
if (defined($1)) {
-- 
2.20.1
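
For reference, the extension-to-tar-flag map in Create.pm can be sketched in
Python like this (the '.gz' entry is not visible in the hunk and is an
assumption based on the surrounding code; tar_flag() is a made-up helper):

```python
import re

# Map archive suffix to the tar decompression option, '.zst' being new.
COMPRESSION_MAP = {
    '.gz':  '-z',      # assumed; not shown in the hunk
    '.bz2': '-j',
    '.xz':  '-J',
    '.lzo': '--lzop',
    '.zst': '--zstd',
}

def tar_flag(archive):
    # Mirror of the suffix handling in restore_archive() (sketch):
    # an uncompressed .tar needs no extra flag.
    m = re.search(r'\.tar(\.[^.]+)?$', archive)
    if not m:
        raise ValueError('unsupported archive format')
    return COMPRESSION_MAP[m.group(1)] if m.group(1) else None
```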




[pve-devel] [PATCH container v2] Fix: fsck: rbd volume not mapped

2020-01-17 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
V1 -> V2: run unmap only if it has a storage id.

 src/PVE/CLI/pct.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 98e2c6e..ec071c5 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -247,7 +247,7 @@ __PACKAGE__->register_method ({
die "unable to run fsck for '$volid' (format == $format)\n"
if $format ne 'raw';
 
-   $path = PVE::Storage::path($storage_cfg, $volid);
+   $path = PVE::Storage::map_volume($storage_cfg, $volid);
 
} else {
if (($volid =~ m|^/.+|) && (-b $volid)) {
@@ -264,6 +264,7 @@ __PACKAGE__->register_method ({
die "cannot run fsck on active container\n";
 
PVE::Tools::run_command($command);
+   PVE::Storage::unmap_volume($storage_cfg, $volid) if $storage_id;
};
 
PVE::LXC::Config->lock_config($vmid, $do_fsck);
-- 
2.20.1




[pve-devel] [PATCH container] Fix: fsck: rbd volume not mapped

2020-01-13 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/CLI/pct.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 98e2c6e..9dee68d 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -247,7 +247,7 @@ __PACKAGE__->register_method ({
die "unable to run fsck for '$volid' (format == $format)\n"
if $format ne 'raw';
 
-   $path = PVE::Storage::path($storage_cfg, $volid);
+   $path = PVE::Storage::map_volume($storage_cfg, $volid);
 
} else {
if (($volid =~ m|^/.+|) && (-b $volid)) {
@@ -264,6 +264,7 @@ __PACKAGE__->register_method ({
die "cannot run fsck on active container\n";
 
PVE::Tools::run_command($command);
+   PVE::Storage::unmap_volume($storage_cfg, $volid);
};
 
PVE::LXC::Config->lock_config($vmid, $do_fsck);
-- 
2.20.1




Re: [pve-devel] [PATCH ceph 1/2] fix Ceph version string handling

2020-01-09 Thread Alwin Antreich
Hello Martin,

On Thu, Jan 09, 2020 at 02:23:07PM +0100, Martin Verges wrote:
> Hello Thomas,
> 
> > 1. we provide 14.2.5, the latest stable release available (we have upcoming
> > 14.2.6 already on track)
> >
> 
> Good to know, does not seem to be a knowledge that some proxmox users have
> nor was it the case in the past.
I hope this knowledge spreads (it's hardly hidden). ;)
A good point to start from is the release notes and our documentation [0].

I guess the past needs a little clarification (Debian Stretch + Mimic);
Fabian had a discussion upstream [1]. As there was no supported way to
build it available, we decided at that point not to burden our users
with an experimental build of Ceph, which would also have changed
fundamental parts of the OS (e.g. glibc).

> 
> 
> > If you have custom patches which improve the experience I'd suggest
> > up-streaming them to Ceph or, if they affect our management tooling for
> > ceph, telling us here or at bugzilla.proxmox.com and/or naturally
> > up-streaming them to PVE.
> >
> 
> As a founding member of the Ceph foundation, we always provide all patches
> to the Ceph upstream and as always they will be included in future releases
> of Ceph or backported to older versions.
Thanks.

> 
> The Ceph integration from a client perspective should work as with every
> > other
> > "external" ceph server setup. IMO, it makes no sense to mix our management
> > interface for Ceph with externally untested builds. We sync releases of
> > Ceph
> > on our side with releases of the management stack, that would be
> > circumvented
> > completely, as would be the testing of the Ceph setup.
> >
> > If people want to use croit that's naturally fine for us, they can use the
> > croit managed ceph cluster within PVE instances as RBD or CephFS client
> > just
> > fine, as it is and was always the case. But, mixing croit packages with PVE
> > management makes not much sense to me, I'm afraid.
> >
> 
> I agree that user should stick to the versions a vendor provides, in your
> case the proxmox Ceph versions. But as I already wrote, we get a lot of
> proxmox users on our table that use proxmox and Ceph and some seem to have
> an issue.
I urge those users to also speak to us. If we don't know about possible
issues, then we can't help.

> 
> As my fix does not affect any proxmox functionality in a negative way, nor
> will it break anything. Why would you hesitate to allow users to choose the
> Ceph versions of their liking? It just enables proxmox to don't break on
> such versions.
Proxmox VE's Ceph management is written explicitly for the
hyper-converged use case. This intent binds the management of Ceph to
the Proxmox VE clustered nodes and not to a separate Ceph cluster.

We provide packages specifically tested on Proxmox VE. And for its use
case, as Ceph client or cluster (RBD/CephFS services).

As user, using packages provided by a third party circumvents our
testing, possibly breaks usage (e.g., API/CLI changes) and in the end,
the user may be left with an installation in an unknown state.

When you use Proxmox VE as a client, the dashboard (or CLI) should not
be used. It is only due to the nature of Ceph's commands that some
functionality works on the dashboard. For sure, this separation could be
made more visible.

I hope this explains why we are currently against applying this patch of
yours.

--
Cheers,
Alwin

[0] https://pve.proxmox.com/wiki/Roadmap

https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_package_repositories_ceph
https://pve.proxmox.com/pve-docs/chapter-pveceph.html

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-June/027366.html



Re: [pve-devel] [PATCH manager] API: OSD: Fix #2496 Check OSD Network

2019-12-13 Thread Alwin Antreich
Some comments inline.

On Fri, Dec 13, 2019 at 03:56:42PM +0100, Aaron Lauterer wrote:
> It's possible to have a situation where the cluster network (used for
> inter-OSD traffic) is not configured on a node. The OSD can still be
> created but can't communicate.
> 
> This check will abort the creation if there is no IP within the subnet
> of the cluster network present on the node. If there is no dedicated
> cluster network the public network is used. The chances of that not
> being configured is much lower but better be on the safe side and check
> it if there is no cluster network.
> 
> Signed-off-by: Aaron Lauterer 
> ---
>  PVE/API2/Ceph/OSD.pm | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
> index 5f70cf58..59cc9567 100644
> --- a/PVE/API2/Ceph/OSD.pm
> +++ b/PVE/API2/Ceph/OSD.pm
> @@ -275,6 +275,14 @@ __PACKAGE__->register_method ({
>   # extract parameter info and fail if a device is set more than once
>   my $devs = {};
>  
> + my $ceph_conf = cfs_read_file('ceph.conf');
The public/cluster networks could have been migrated into the MON DB. In
this case they would not appear in the ceph.conf.

ATM it might be unlikely, as there is an ugly warning with every command
execution. But it is still possible.
```
Configuration option 'cluster_network' may not be modified at runtime
```

> +
> + # check if network is configured
> + my $osd_network = $ceph_conf->{global}->{cluster_network}
> + // $ceph_conf->{global}->{public_network};
An OSD needs both networks: the public network for communication with the
MONs & clients, and the cluster network for replication. On our default
setup, both are the same network.

I have tested the OSD creation with the cluster network down. During
creation, it only needs the public network to create the OSD on the MON.
But the OSD can't start and therefore isn't placed on the CRUSH map.
Once it can start, it will be added to the correct location on the map.

IMHO, the code needs to check both.

> + die "No network interface configured for subnet $osd_network. Check ".
> + "your network config.\n" if !@{PVE::Network::get_local_ip_from_cidr($osd_network)};
> +
>   # FIXME: rename params on next API compatibillity change (7.0)
>   $param->{wal_dev_size} = delete $param->{wal_size};
>   $param->{db_dev_size} = delete $param->{db_size};
> @@ -330,7 +338,6 @@ __PACKAGE__->register_method ({
>   my $fsid = $monstat->{monmap}->{fsid};
>  $fsid = $1 if $fsid =~ m/^([0-9a-f\-]+)$/;
>  
> - my $ceph_conf = cfs_read_file('ceph.conf');
>   my $ceph_bootstrap_osd_keyring = PVE::Ceph::Tools::get_config('ceph_bootstrap_osd_keyring');
>  
>   if (! -f $ceph_bootstrap_osd_keyring && $ceph_conf->{global}->{auth_client_required} eq 'cephx') {
> -- 
> 2.20.1
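
The subnet check under discussion boils down to "does any local address fall
inside the configured network". A Python sketch of that idea using the
stdlib ipaddress module (get_local_ip_from_cidr() itself lives in
PVE::Network; the helper name here is made up):

```python
import ipaddress

def ips_in_subnet(local_ips, cidr):
    # Keep only the local addresses that fall inside the given subnet;
    # an empty result would trigger the die() in the patch above.
    net = ipaddress.ip_network(cidr)
    return [ip for ip in local_ips if ipaddress.ip_address(ip) in net]
```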



[pve-devel] [PATCH proxmox-ve] Update kernel links for install CD (rescue boot)

2019-12-03 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
Note: Thanks to Stoiko, who built an ISO to test the patch.
  This works with LVM-based installs, but currently fails for ZFS
  with "Compression algorithm inherit not supported. Unable to find
  bootdisk automatically".

 debian/postinst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/debian/postinst b/debian/postinst
index 1e17a89..a0d88f6 100755
--- a/debian/postinst
+++ b/debian/postinst
@@ -7,8 +7,8 @@ case "$1" in
   configure)
 # setup kernel links for installation CD (rescue boot)
 mkdir -p /boot/pve
-ln -sf /boot/pve/vmlinuz-5.0 /boot/pve/vmlinuz
-ln -sf /boot/pve/initrd.img-5.0 /boot/pve/initrd.img
+ln -sf /boot/pve/vmlinuz-5.3 /boot/pve/vmlinuz
+ln -sf /boot/pve/initrd.img-5.3 /boot/pve/initrd.img
 ;;
 esac
 
-- 
2.20.1




[pve-devel] [PATCH manager] ceph: Create symlink on standalone MGR creation

2019-12-03 Thread Alwin Antreich
Ceph MGR fails to start when installed on a node without an existing
symlink to /etc/pve/ceph.conf.

Signed-off-by: Alwin Antreich 
---
 PVE/API2/Ceph/MGR.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/API2/Ceph/MGR.pm b/PVE/API2/Ceph/MGR.pm
index d3d86c0d..ffae7495 100644
--- a/PVE/API2/Ceph/MGR.pm
+++ b/PVE/API2/Ceph/MGR.pm
@@ -108,6 +108,7 @@ __PACKAGE__->register_method ({
 
PVE::Ceph::Tools::check_ceph_installed('ceph_mgr');
PVE::Ceph::Tools::check_ceph_inited();
+   PVE::Ceph::Tools::setup_pve_symlinks();
 
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
-- 
2.20.1




[pve-devel] [PATCH docs] Fix: pveceph: spelling in section Trim/Discard

2019-11-07 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 99c610a..122f063 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -762,9 +762,9 @@ Trim/Discard
 
 It is a good measure to run 'fstrim' (discard) regularly on VMs or containers.
 This releases data blocks that the filesystem isn’t using anymore. It reduces
-data usage and the resource load. Most modern operating systems issue such
-discard commands to their disks regurarly. You only need to ensure that the
-Virtual Machines enable the xref:qm_hard_disk_discard[disk discard option].
+data usage and resource load. Most modern operating systems issue such discard
+commands to their disks regularly. You only need to ensure that the Virtual
+Machines enable the xref:qm_hard_disk_discard[disk discard option].
 
 [[pveceph_scrub]]
 Scrub & Deep Scrub
-- 
2.20.1




[pve-devel] [PATCH docs] Fix: pveceph: broken ref anchor pveceph_mgr_create

2019-11-07 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 pveceph.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index ef257ac..99c610a 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -286,7 +286,7 @@ monitor the cluster. Since the Ceph luminous release at least one ceph-mgr
 footnote:[Ceph Manager https://docs.ceph.com/docs/{ceph_codename}/mgr/] daemon is
 required.
 
-[i[pveceph_create_mgr]]
+[[pveceph_create_mgr]]
 Create Manager
 ~~
 
-- 
2.20.1




Re: [pve-devel] [PATCH docs] qm: spice foldersharing: Add experimental warning

2019-11-06 Thread Alwin Antreich
On Wed, Nov 06, 2019 at 03:45:26PM +0100, Aaron Lauterer wrote:
> Hmm, What about:
> 
> Currently this feature does not work reliably.
> 
> 
> On 11/6/19 3:29 PM, Alwin Antreich wrote:
> > On Wed, Nov 06, 2019 at 03:20:59PM +0100, Aaron Lauterer wrote:
> > > Signed-off-by: Aaron Lauterer 
> > > ---
> > >   qm.adoc | 2 ++
> > >   1 file changed, 2 insertions(+)
> > > 
> > > diff --git a/qm.adoc b/qm.adoc
> > > index 9ee4460..c0fe892 100644
> > > --- a/qm.adoc
> > > +++ b/qm.adoc
> > > @@ -856,6 +856,8 @@ Select the folder to share and then enable the checkbox.
> > >   NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
> > > +CAUTION: Experimental! This feature does not work reliably.
> > Maybe use a s/reliably/reliably yet/ to indicate that this might change
> > in the future?
+1



Re: [pve-devel] [PATCH docs] qm: spice foldersharing: Add experimental warning

2019-11-06 Thread Alwin Antreich
On Wed, Nov 06, 2019 at 03:20:59PM +0100, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer 
> ---
>  qm.adoc | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 9ee4460..c0fe892 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -856,6 +856,8 @@ Select the folder to share and then enable the checkbox.
>  
>  NOTE: Folder sharing currently only works in the Linux version of Virt-Viewer.
>  
> +CAUTION: Experimental! This feature does not work reliably.
Maybe use a s/reliably/reliably yet/ to indicate that this might change
in the future?



