Re: [pve-devel] Kernel 5.6/5.7 in Proxmox (Or - what is the lead-time for kernels to ship in Proxmox).

2020-04-08 Thread Victor Hooi
Hi,

It seems there were performance improvements and bugfixes for AMD
EPYC in Linux 5.4:

https://www.phoronix.com/scan.php?page=news_item=Linux-5.4-Improve-EPYC-Balance
https://www.phoronix.com/scan.php?page=news_item=AMD-EPYC-Linux-5.4-Early
https://www.phoronix.com/scan.php?page=news_item=Linux-5.0-5.4-EPYC-7642

More recently, they updated k10temp to support temperature/voltage/CCD
temperature reporting for AMD chips:

https://lore.kernel.org/lkml/20200116141800.9828-1-li...@roeck-us.net/
https://www.phoronix.com/scan.php?page=news_item=AMD-Ryzen-k10temp-CCD-V-Current
https://www.phoronix.com/scan.php?page=news_item=Linux-5.6-HWMON-Changes

This has landed as part of Kernel 5.6 (just released):

https://kernelnewbies.org/Linux_5.6
https://www.phoronix.com/scan.php?page=article=linux-56-features=1

So the plan is that Proxmox 6.2 (2020 Q2) will ship with Kernel 5.4, right?

But the kernel won't move past 5.4 for 1-2 years after that? Would pve-test
have newer kernels, if we were willing to put up with some rough edges?

Do you know if the k10temp patches can be backported?

Thanks,
Victor

On Wed, Mar 25, 2020 at 7:02 PM Thomas Lamprecht 
wrote:

> Hi,
>
> On 3/25/20 12:15 AM, Victor Hooi wrote:
> > Hi,
> >
> > I'm running Proxmox on a AMD Rome box - keen to do some testing on more
> > recent kernel builds, due to the performance improvements and better
> > power/temperature monitoring.
> >
> > Proxmox is currently on kernel 5.4 - curious when the 5.6 kernel (released)
> > might land?
>
> 5.4 is currently still rather meant for testing, even if it runs very well:
> https://forum.proxmox.com/threads/linux-kernel-5-4-for-proxmox-ve.66854/
>
> >
> > And what sort of lead-time would we be looking at for say, 5.7 (or kernels
> > in general) to hit Proxmox?
>
> The 5.4 kernel will remain during the remaining release cycle of Proxmox VE
> 6.X - so about the next two years.
> Due to it being an upstream LTS and an Ubuntu LTS kernel, some HW support
> should be backported just fine.
>
> The idea of an opt-in newer kernel (e.g., without ZFS, as it often takes a
> bit of time until a version adapted to recent kernels is released) is one I
> personally like in principle. But that would double the effort required
> for the kernel release work - so for now we won't do that.
>
> A newer, officially supported, kernel may then land in one to two years, with
> the 7.0 beta (really guessing, currently nobody knows what the future
> holds).
>
> If you have specific problems and even know the fix required, we and upstream
> will always try to backport this to the stable LTS kernel, at least if it
> isn't an invasive fix.
>
> cheers,
> Thomas
>
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] applied: [PATCH V2 pve-manager] API2: Network: display vnets in any_bridge

2020-04-08 Thread Alexandre DERUMIER
>>I also did some small testing of the GUI a bit ago, and now every time 
>>I install the libpve-network-perl package and thus activate the SDN stuff I 
>>get quite a few errors/warnings logged: 


mmm, pvestatd looks at the zone/vnets deployment status.
Not sure why $type is empty here.

But, looking at your config, "uplink-id" is not used anymore.

(Did you use the API to generate the config? Or maybe it was the config 
from a previous test some months ago?)
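
For illustration, a defensive variant of such a status loop could look
roughly like this (just a sketch - it assumes pvestatd walks the parsed
section config and resolves each section's plugin by type; I did not check
the actual code path, and the lookup call here is hypothetical):

    foreach my $id (sort keys %{$cfg->{ids}}) {
        my $type = $cfg->{ids}->{$id}->{type};
        if (!defined($type) || $type eq '') {
            # a stale section (e.g. left over from a dropped schema key)
            # gets reported instead of dying with "unknown section type ''"
            warn "sdn section '$id' has no valid type, skipping\n";
            next;
        }
        my $plugin = eval { PVE::Network::SDN::Zones::Plugin->lookup($type) };
        next if !$plugin;
        # ... query the deployment status via $plugin ...
    }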

- Mail original -
De: "Thomas Lamprecht" 
À: "pve-devel" , "aderumier" 
Envoyé: Mercredi 8 Avril 2020 20:04:49
Objet: applied: [pve-devel] [PATCH V2 pve-manager] API2: Network: display vnets 
in any_bridge

Hi Alexandre, 

On 4/8/20 6:14 PM, Alexandre DERUMIER wrote: 
> Hi Thomas, 
> 
> Any news about this patch ? 
> 

applied, mainly waited for some more stuff to bump pve-network (as this 
patch uses the new method over there), but scratch that :) 

> This is the last missing part for the beta testing. 
> 
> I have seen a lot of users on the forum with questions about routed setups, 
> with servers at Hetzner or other hosting providers. 
> 
> I really think that bgp-evpn will help a lot (it'll be easy to define new 
> vnets/subnets with an anycast gateway) 

Let's see and hope you're right :) 

> 
> they seem to be good candidates for testing sdn :) 


I also did some small testing of the GUI a bit ago, and now every time 
I install the libpve-network-perl package and thus activate the SDN stuff I 
get quite a few errors/warnings logged: 

# journalctl -f 
-- Logs begin at Sat 2020-02-01 00:04:54 CET. -- 
Apr 08 19:59:51 dev6 pvestatd[2014]: Use of uninitialized value $type in 
concatenation (.) or string at /usr/share/perl5/PVE/SectionConfig.pm line 204. 
Apr 08 19:59:51 dev6 pvestatd[2014]: sdn status update error: unknown section 
type '' 
Apr 08 20:00:01 dev6 pvestatd[2014]: local sdn network configuration is not yet 
generated, please reload at /usr/share/perl5/PVE/Network/SDN/Zones.pm line 180. 
Apr 08 20:00:01 dev6 pvestatd[2014]: Use of uninitialized value $type in hash 
element at /usr/share/perl5/PVE/SectionConfig.pm line 202. 
Apr 08 20:00:01 dev6 pvestatd[2014]: Use of uninitialized value $type in 
concatenation (.) or string at /usr/share/perl5/PVE/SectionConfig.pm line 204. 
Apr 08 20:00:01 dev6 pvestatd[2014]: sdn status update error: unknown section 
type '' 
Apr 08 20:00:11 dev6 pvestatd[2014]: local sdn network configuration is not yet 
generated, please reload at /usr/share/perl5/PVE/Network/SDN/Zones.pm line 180. 
Apr 08 20:00:11 dev6 pvestatd[2014]: Use of uninitialized value $type in hash 
element at /usr/share/perl5/PVE/SectionConfig.pm line 202. 
Apr 08 20:00:11 dev6 pvestatd[2014]: Use of uninitialized value $type in 
concatenation (.) or string at /usr/share/perl5/PVE/SectionConfig.pm line 204. 
Apr 08 20:00:11 dev6 pvestatd[2014]: sdn status update error: unknown section 
type '' 
^C 

# cat /etc/pve/sdn/zones.cfg 
vxlan: test 
uplink-id 1 
nodes dev6 
unicast-address 192.168.30.58 

vxlan: foo 
uplink-id 2 

vxlan: asd 
uplink-id 1 


# cat /etc/pve/sdn/vnets.cfg 
vnet: asd 
tag 1 
zone foo 


do you have an idea? (I did not look into it at all) 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH V2 pve-manager] API2: Network: display vnets in any_bridge

2020-04-08 Thread Thomas Lamprecht
Hi Alexandre,

On 4/8/20 6:14 PM, Alexandre DERUMIER wrote:
> Hi Thomas,
> 
> Any news about this patch ? 
> 

applied, mainly waited for some more stuff to bump pve-network (as this
patch uses the new method over there), but scratch that :)

> This is the last missing part for the beta testing.
> 
> I have seen a lot of users on the forum with questions about routed setups, 
> with servers at Hetzner or other hosting providers.
> 
> I really think that bgp-evpn will help a lot (it'll be easy to define new 
> vnets/subnets with an anycast gateway)

Let's see and hope you're right :)

> 
> they seem to be good candidates for testing sdn :)


I also did some small testing of the GUI a bit ago, and now every time
I install the libpve-network-perl package and thus activate the SDN stuff I
get quite a few errors/warnings logged:

# journalctl -f
-- Logs begin at Sat 2020-02-01 00:04:54 CET. --
Apr 08 19:59:51 dev6 pvestatd[2014]: Use of uninitialized value $type in 
concatenation (.) or string at /usr/share/perl5/PVE/SectionConfig.pm line 204.
Apr 08 19:59:51 dev6 pvestatd[2014]: sdn status update error: unknown section 
type ''
Apr 08 20:00:01 dev6 pvestatd[2014]: local sdn network configuration is not yet 
generated, please reload at /usr/share/perl5/PVE/Network/SDN/Zones.pm line 180.
Apr 08 20:00:01 dev6 pvestatd[2014]: Use of uninitialized value $type in hash 
element at /usr/share/perl5/PVE/SectionConfig.pm line 202.
Apr 08 20:00:01 dev6 pvestatd[2014]: Use of uninitialized value $type in 
concatenation (.) or string at /usr/share/perl5/PVE/SectionConfig.pm line 204.
Apr 08 20:00:01 dev6 pvestatd[2014]: sdn status update error: unknown section 
type ''
Apr 08 20:00:11 dev6 pvestatd[2014]: local sdn network configuration is not yet 
generated, please reload at /usr/share/perl5/PVE/Network/SDN/Zones.pm line 180.
Apr 08 20:00:11 dev6 pvestatd[2014]: Use of uninitialized value $type in hash 
element at /usr/share/perl5/PVE/SectionConfig.pm line 202.
Apr 08 20:00:11 dev6 pvestatd[2014]: Use of uninitialized value $type in 
concatenation (.) or string at /usr/share/perl5/PVE/SectionConfig.pm line 204.
Apr 08 20:00:11 dev6 pvestatd[2014]: sdn status update error: unknown section 
type ''
^C

# cat /etc/pve/sdn/zones.cfg 
vxlan: test
uplink-id 1
nodes dev6
unicast-address 192.168.30.58

vxlan: foo
uplink-id 2

vxlan: asd
uplink-id 1


# cat /etc/pve/sdn/vnets.cfg 
vnet: asd
tag 1
zone foo


do you have an idea? (I did not look into it at all)

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH qemu-server] Ignore version checks when using QEMU -rc releases

2020-04-08 Thread Thomas Lamprecht
On 4/8/20 5:11 PM, Stefan Reiter wrote:
> Upstream marks these as having a micro-version of >=90. Unfortunately, the
> machine versions are bumped earlier, so testing them is made unnecessarily
> difficult, since the version checking code would abort on migrations etc...
> 
> Signed-off-by: Stefan Reiter 
> ---
>  PVE/QemuServer.pm | 17 +++--
>  1 file changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index e9eb421..4d05db3 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -2973,13 +2973,18 @@ sub config_to_command {
>  
>  $machine_version =~ m/(\d+)\.(\d+)/;
>  my ($machine_major, $machine_minor) = ($1, $2);
> -die "Installed QEMU version '$kvmver' is too old to run machine type 
> '$machine_type', please upgrade node '$nodename'\n"
> - if !PVE::QemuServer::min_version($kvmver, $machine_major, 
> $machine_minor);
>  
> -if 
> (!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, 
> $kvmver)) {
> - my $max_pve_version = 
> PVE::QemuServer::Machine::get_pve_version($machine_version);
> - die "Installed qemu-server (max feature level for 
> $machine_major.$machine_minor is pve$max_pve_version)"
> -   . " is too old to run machine type '$machine_type', please upgrade 
> node '$nodename'\n";
> +if ($kvmver =~ m/^\d+\.\d+\.(\d+)/ && $1 >= 90) {
> + warn "warning: Installed QEMU version ($kvmver) is a release candidate, 
> ignoring version checks\n";
> +} else {
> + die "Installed QEMU version '$kvmver' is too old to run machine type 
> '$machine_type', please upgrade node '$nodename'\n"
> + if !PVE::QemuServer::min_version($kvmver, $machine_major, 
> $machine_minor);
> +
> + if 
> (!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, 
> $kvmver)) {
> + my $max_pve_version = 
> PVE::QemuServer::Machine::get_pve_version($machine_version);
> + die "Installed qemu-server (max feature level for 
> $machine_major.$machine_minor is pve$max_pve_version)"
> + . " is too old to run machine type '$machine_type', please upgrade 
> node '$nodename'\n";
> + }
>  }
>  
>  # if a specific +pve version is required for a feature, use 
> $version_guard
> 

applied, but moved to a more linear 

if () {

} elsif () {

} elsif () {

}

structure. This block could be in its own method, config_to_command is too
much spaghetti - and not the good Italian one ;-) anyway: thanks!
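
For reference, the linear variant could look roughly like this (a sketch
assembled from the patch above, not the actual applied commit):

    if ($kvmver =~ m/^\d+\.\d+\.(\d+)/ && $1 >= 90) {
        warn "warning: Installed QEMU version ($kvmver) is a release candidate,"
            ." ignoring version checks\n";
    } elsif (!PVE::QemuServer::min_version($kvmver, $machine_major, $machine_minor)) {
        die "Installed QEMU version '$kvmver' is too old to run machine type"
            ." '$machine_type', please upgrade node '$nodename'\n";
    } elsif (!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, $kvmver)) {
        my $max_pve_version = PVE::QemuServer::Machine::get_pve_version($machine_version);
        die "Installed qemu-server (max feature level for $machine_major.$machine_minor"
            ." is pve$max_pve_version) is too old to run machine type '$machine_type',"
            ." please upgrade node '$nodename'\n";
    }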

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH V2 pve-manager] API2: Network: display vnets in any_bridge

2020-04-08 Thread Alexandre DERUMIER
Hi Thomas,

Any news about this patch ? 

This is the last missing part for the beta testing.

I have seen a lot of users on the forum with questions about routed setups, with 
servers at Hetzner or other hosting providers.

I really think that bgp-evpn will help a lot (it'll be easy to define new 
vnets/subnets with an anycast gateway)

they seem to be good candidates for testing sdn :)




- Mail original -
De: "Thomas Lamprecht" 
À: "pve-devel" , "aderumier" 
Envoyé: Vendredi 27 Mars 2020 07:48:07
Objet: Re: [pve-devel] [PATCH V2 pve-manager] API2: Network: display vnets in 
any_bridge

On 3/26/20 3:07 AM, Alexandre Derumier wrote: 
> Signed-off-by: Alexandre Derumier  
> --- 
> PVE/API2/Network.pm | 8  
> 1 file changed, 8 insertions(+) 
> 
> diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm 
> index 4c691879..32ab8ebd 100644 
> --- a/PVE/API2/Network.pm 
> +++ b/PVE/API2/Network.pm 
> @@ -18,6 +18,7 @@ use base qw(PVE::RESTHandler); 
> 
> my $have_sdn; 
> eval { 
> + require PVE::Network::SDN; 
> require PVE::Network::SDN::Zones; 
> require PVE::Network::SDN::Controllers; 
> $have_sdn = 1; 
> @@ -246,6 +247,13 @@ __PACKAGE__->register_method({ 
> ($type eq 'bridge' || $type eq 'OVSBridge')); 
> delete $ifaces->{$k} if !$match; 
> } 
> + 
> + if ($have_sdn && $param->{type} eq 'any_bridge') { 
> + my $vnets = PVE::Network::SDN::get_local_vnets(); 
> + map { 
> + $ifaces->{$_} = $vnets->{$_}; 
> + } keys %$vnets; 
> + } 
> } 
> 
> return PVE::RESTHandler::hash_to_array($ifaces, 'iface'); 
> 

nicer, IMO :) Is there still stuff open for the pve-network repo, or is 
something coming quite soon? 

Because this needs a version bump, and I'd time that depending on your 
plans for sending new stuff. 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH qemu-server] Ignore version checks when using QEMU -rc releases

2020-04-08 Thread Stefan Reiter
Upstream marks these as having a micro-version of >=90. Unfortunately, the
machine versions are bumped earlier, so testing them is made unnecessarily
difficult, since the version checking code would abort on migrations etc...

Signed-off-by: Stefan Reiter 
---
 PVE/QemuServer.pm | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e9eb421..4d05db3 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2973,13 +2973,18 @@ sub config_to_command {
 
 $machine_version =~ m/(\d+)\.(\d+)/;
 my ($machine_major, $machine_minor) = ($1, $2);
-die "Installed QEMU version '$kvmver' is too old to run machine type 
'$machine_type', please upgrade node '$nodename'\n"
-   if !PVE::QemuServer::min_version($kvmver, $machine_major, 
$machine_minor);
 
-if 
(!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, 
$kvmver)) {
-   my $max_pve_version = 
PVE::QemuServer::Machine::get_pve_version($machine_version);
-   die "Installed qemu-server (max feature level for 
$machine_major.$machine_minor is pve$max_pve_version)"
- . " is too old to run machine type '$machine_type', please upgrade 
node '$nodename'\n";
+if ($kvmver =~ m/^\d+\.\d+\.(\d+)/ && $1 >= 90) {
+   warn "warning: Installed QEMU version ($kvmver) is a release candidate, 
ignoring version checks\n";
+} else {
+   die "Installed QEMU version '$kvmver' is too old to run machine type 
'$machine_type', please upgrade node '$nodename'\n"
+   if !PVE::QemuServer::min_version($kvmver, $machine_major, 
$machine_minor);
+
+   if 
(!PVE::QemuServer::Machine::can_run_pve_machine_version($machine_version, 
$kvmver)) {
+   my $max_pve_version = 
PVE::QemuServer::Machine::get_pve_version($machine_version);
+   die "Installed qemu-server (max feature level for 
$machine_major.$machine_minor is pve$max_pve_version)"
+   . " is too old to run machine type '$machine_type', please upgrade 
node '$nodename'\n";
+   }
 }
 
 # if a specific +pve version is required for a feature, use $version_guard
-- 
2.26.0


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC v2 manager 1/5] api: backup: add endpoint to list included guests and volumes

2020-04-08 Thread Aaron Lauterer



On 4/7/20 5:55 PM, Thomas Lamprecht wrote:
> On 4/6/20 4:24 PM, Aaron Lauterer wrote:
>> [...]
>> +description => 'Root node of the tree object. Children 
represent guests, grandchildren represent volumes of that guest.',

>> +properties => {
>> +not_all_permissions => {
>> +type => 'boolean',
>> +optional => 1,
>> +description => 'Wheter the user is missing permissions to 
view all guests.',

>
> s/Wheter/Whether/

thanks for catching that typo
>
>> +},
>> +children => {
>> +type => 'array',
>> +items => {
>> +type => 'object',
>> +properties => {
>> +id => {
>> +type => 'integer',
>> +description => 'VMID of the guest.',
>> +},
>> +name => {
>> +type => 'string',
>> +description => 'Name of the guest',
>> +optional => 1,
>> +},
>> +type => {
>> +type => 'string',
>> +description => 'Type of the guest, VM or CT.',
>> +enum => ['qemu', 'lxc', 'unknown'],
>
> We die in the unknown case, so that cannot be returned
right
>
>> +},
>> +children => {
>> +type => 'array',
>> +optional => 1,
>> +description => 'The volumes of the guest with the 
information if they will be included in backups.',

>> +items => {
>> +type => 'object',
>> +properties => {
>> +id => {
>> +type => 'string',
>> +description => 'Configuration key of the volume.',
>> +},
>> +name => {
>> +type => 'string',
>> +description => 'Name of the volume.',
>> +},
>> +included => {
>> +type => 'boolean',
>> +description => 'Wheter the volume is included 
in the backup or not.',

>> +},
>> +reason => {
>> +type => 'string',
>> +description => 'The reason why the volume is 
included (or excluded).',

>> +},
>> +},
>> +},
>> +},
>> +},
>> +},
>> +},
>> +},
>> +},
>> +code => sub {
>> +my ($param) = @_;
>> +
>> +my $rpcenv = PVE::RPCEnvironment::get();
>> +
>> +my $user = $rpcenv->get_user();
>> +
>> +my $vzconf = cfs_read_file('vzdump.cron');
>> +my $all_jobs = $vzconf->{jobs} || [];
>> +my $job;
>> +my $rrd = PVE::Cluster::rrd_dump();
>> +
>> +foreach my $j (@$all_jobs) {
>> +$job = $j if $j->{id} eq $param->{id};
>
> This could let the reader think that this is a bug, as it looks like it gets
> overwritten; plus, on a huge amount of jobs you would iterate over a lot of
> those for nothing, if the $job with the requested id was found early. Rather do:
>
> if ($j->{id} eq $param->{id}) {
>  $job = $j;
>  last;
> }
>
> This makes it much more explicit and easier to grasp when just quickly
> reading over the code.
will do

>
>> +}
>> +raise_param_exc({ id => "No such job '$param->{id}'" }) if !$job;
>> +
>> +my $vmlist = PVE::Cluster::get_vmlist();
>> +
>> +my @vmids = @{PVE::VZDump->get_included_guests($job)};
>
> s/vmid/job_vmids/
>
> and the comment on the other series regarding passing this as reference,
> and even as hash.
Yeah, this will be adapted to the changes of the other series, one of 
the reasons why this is labeled as RFC


>
>> +
>> +# remove VMIDs to which the user has no permission to not leak infos
>> +# like the guest name
>> +my $not_all_permissions = 0;
>> +@vmids = grep {
>> +my $allowed = $rpcenv->check($user, "/vms/$_", [ 
'VM.Backup' ], 1);

>
> hmm, is VM.Backup really the info we want to hide, why not VM.Audit?
> (or those which the user has either permission?)
>
> As VM.Backup is the permission for being allowed to make a backup or restore
> one, not for being allowed to view a VM in the cluster.

I haven't fully grokked the permission system yet. What I want to achieve 
is to not show any VMs the user does not have permission to see.

I think VM.Audit should be enough then.
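
For illustration, the discussed change would then boil down to something 
like this (a sketch based on the grep above, not tested):

    # keep only guests the user may at least see
    @vmids = grep {
        $rpcenv->check($user, "/vms/$_", [ 'VM.Audit' ], 1)
    } @vmids;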

>
>> +$not_all_permissions = 1 if !$allowed && !$job->{all};
>
> This could be found out afterwards, avoiding an O(n) expression. $job->{all}
> is available anyway, and if you remember how many elements @vmids had before,
> you can just check if it has less after the permission checking. That has the
> additional advantage that you can omit the $allowed intermediate variable. For
> example (not fully thought out - so don't just copy 1:1):
>
> my $job_guest_count = scalar(@vmids);
> my @allowed_vmids = grep {
> $rpcenv->check_any($user, "/vms/$_", [ 'VM.Audit', 

Re: [pve-devel] [PATCH manager 4/4] gui: settings: Add recursive search default

2020-04-08 Thread Dominik Csapak

comment inline

On 4/2/20 1:34 PM, Dominic Jäger wrote:

Add a radiobox to the settings to control if search should be done recursively
as default or not. Set to no recursion as default.

Signed-off-by: Dominic Jäger 
---
This did not exist in RFC

  www/manager6/storage/ContentView.js |  2 +-
  www/manager6/window/Settings.js | 38 +
  2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/www/manager6/storage/ContentView.js 
b/www/manager6/storage/ContentView.js
index 5c6f1418..fb87f5e6 100644
--- a/www/manager6/storage/ContentView.js
+++ b/www/manager6/storage/ContentView.js
@@ -587,7 +587,7 @@ Ext.define('PVE.storage.ContentView', {
fieldLabel: gettext('Recursive'),
labelWidth: 65,
name : 'recursive',
-   checked: false,
+   checked: me.sp.get('recursive-search'),
listeners: {
change: function(box, value) {
me.store.proxy.url = me.store.proxy.url.replace(
diff --git a/www/manager6/window/Settings.js b/www/manager6/window/Settings.js
index 2fa01ef0..af708898 100644
--- a/www/manager6/window/Settings.js
+++ b/www/manager6/window/Settings.js
@@ -41,6 +41,9 @@ Ext.define('PVE.window.Settings', {
if (vncMode !== undefined) {
me.lookupReference('noVNCScalingGroup').setValue({ 
noVNCScalingField: vncMode });
}
+   var spSearchValue = sp.get('recursive-search');
+   me.lookupReference('recursiveSearchGroup').setValue({
+   recursiveSearchField: spSearchValue });
  
  	let summarycolumns = sp.get('summarycolumns', 'auto');

me.lookup('summarycolumns').setValue(summarycolumns);
@@ -428,6 +431,41 @@ Ext.define('PVE.window.Settings', {
},
},
]
+   },{
+   xtype: 'fieldset',
+   title: gettext('Storage Settings'),
+   items: [
+   {
+   xtype: 'radiogroup',
+   fieldLabel: gettext('Recursive Search'),
+   reference: 'recursiveSearchGroup',
+   height: '15px', // renders faster with value assigned
+   layout: {
+   type: 'hbox',
+   },
+   items: [
+   {
+   xtype: 'radiofield',
+   name: 'recursiveSearchField',
+   inputValue: 1,
+   boxLabel: gettext('On'),
+   },{
+   xtype: 'radiofield',
+   name: 'recursiveSearchField',
+   inputValue: 0,
+   boxLabel: gettext('Off'),
+   margin: '0 0 0 10',
+   checked: true,
+   }
+   ],
+   listeners: {
+   change: function(el, newValue, undefined) {
+   var sp = Ext.state.Manager.getProvider();
+   sp.set('recursive-search', 
newValue.recursiveSearchField);


same comment about int/string as in the previous patch:
use '0' and '1', then you do not have to convert


+   }
+   },
+   },
+   ]
},
]
  }],



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager 3/4] gui: content view: Add checkbox for recursive search

2020-04-08 Thread Dominik Csapak

comments inline

On 4/2/20 1:34 PM, Dominic Jäger wrote:

Default is no recursion.
This commit depends on "Recursive search for iso and vztmpl" in pve-storage.

Signed-off-by: Dominic Jäger 
---
Changes since RFC:
* [0] became obsolte
* This did not exist in RFC

[0] https://pve.proxmox.com/pipermail/pve-devel/2019-December/040886.html

  www/manager6/storage/ContentView.js | 23 +--
  1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/www/manager6/storage/ContentView.js 
b/www/manager6/storage/ContentView.js
index 001efc7f..5c6f1418 100644
--- a/www/manager6/storage/ContentView.js
+++ b/www/manager6/storage/ContentView.js
@@ -379,12 +379,15 @@ Ext.define('PVE.storage.ContentView', {
}
  
  	var baseurl = "/nodes/" + nodename + "/storage/" + storage + "/content";

+   me.sp = Ext.state.Manager.getProvider();
+
var store = Ext.create('Ext.data.Store',{
model: 'pve-storage-content',
groupField: 'content',
proxy: {
  type: 'proxmox',
-   url: '/api2/json' + baseurl
+   url: '/api2/json' + baseurl + '?recursive='
+   + (me.sp.get('recursive-search')|0), // API expects integer


four things here:

1. i would prefer to get that info out early and simply use a variable
   let recursive = me.sp.get... above and here only use the variable
2. we now can use modern js syntax, so a template string is allowed:
   `/api2/json${baseurl}?recursive=${recursive}`
   (we could even use extjs' query string builder for this)
3. the api expects an integer, but why convert a string to int and back
   to a string again here? why not save a string directly?
4. there is nothing in this patch that sets this
   why not introduce this in the next patch?


},
sorters: {
property: 'volid',
@@ -578,7 +581,23 @@ Ext.define('PVE.storage.ContentView', {
]);
}
}
-   }
+   },
+   {
+   xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Recursive'),
+   labelWidth: 65,
+   name : 'recursive',
+   checked: false,
+   listeners: {
+   change: function(box, value) {
+   me.store.proxy.url = me.store.proxy.url.replace(
+   /recursive=\d/,
+   "recursive=" + (value|0) // API expects integer


i would rather have a helper function that assembles the url
and reuse it here instead of string replacement

also you can set the checked/unchecked value to strings
---
uncheckedValue: '0',
checkedValue: '1',
---
(not sure about the names, please see the docs)
then its not needed to convert to int/string


+   );
+   reload();
+   },
+   },
+   },
],
columns: [
{



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH storage 2/4] Recursive search for iso and vztmpl

2020-04-08 Thread Dominik Csapak

comments inline

On 4/2/20 1:34 PM, Dominic Jäger wrote:

Each content type has a predefined location in a storage where they can be
found. iso and vztmpl files can now be found and and also uploaded in
subdirectories of those locations, too.

Add a parameter "recursive" (default off) to the API at
storage/{storage}/content, to perform this search.

Signed-off-by: Dominic Jäger 
---
Changes since RFC:
* Separate tests for existing and new stuff into 2 patches
* Die when .. appears in a path
* Make feature opt-in (relevant for symlinks)

  PVE/API2/Storage/Content.pm |  9 +++-
  PVE/API2/Storage/Status.pm  | 12 +++---
  PVE/Storage.pm  |  4 +-
  PVE/Storage/Plugin.pm   | 85 +
  test/run_plugin_tests.pl| 57 -
  5 files changed, 149 insertions(+), 18 deletions(-)

diff --git a/PVE/API2/Storage/Content.pm b/PVE/API2/Storage/Content.pm
index 63fa4fc..cd302e9 100644
--- a/PVE/API2/Storage/Content.pm
+++ b/PVE/API2/Storage/Content.pm
@@ -44,6 +44,12 @@ __PACKAGE__->register_method ({
optional => 1,
completion => \::Cluster::complete_vmid,
}),
+   recursive => {
+   description => "Search recursively.",
+   type => 'boolean',
+   optional => 1,
+   default => 0,
+   },
},
  },
  returns => {
@@ -102,7 +108,8 @@ __PACKAGE__->register_method ({
  
  	my $cfg = PVE::Storage::config();
  
-	my $vollist = PVE::Storage::volume_list($cfg, $storeid, $param->{vmid}, $param->{content});

+   my $vollist = PVE::Storage::volume_list($cfg, $storeid, $param->{vmid},
+   $param->{content}, {recursive => $param->{recursive}});
  
  	my $res = [];

foreach my $item (@$vollist) {
diff --git a/PVE/API2/Storage/Status.pm b/PVE/API2/Storage/Status.pm
index 14f5930..b8caf9c 100644
--- a/PVE/API2/Storage/Status.pm
+++ b/PVE/API2/Storage/Status.pm
@@ -403,25 +403,25 @@ __PACKAGE__->register_method ({
my $filename = $param->{filename};
  
  	chomp $filename;

-   $filename =~ s/^.*[\/\\]//;
-   $filename =~ s/[^-a-zA-Z0-9_.]/_/g;
+   # Prevent entering other locations without permission
+   die "Filename must not contain two dots '..'" if $filename =~ m/\.\./;


i'd argue that this check is wrong...
i get what you intend but what about files named like:

my.very.cool...iso.iso

it would also be filtered out, but it is a perfectly valid filename

'..' is only forbidden at the beginning, the end or between two '/'

so e.g.

m!(?:^\.\./|/\.\./|/\.\.$)!

(well, this is certainly not a readable regex^^; also i am
certainly missing a case, i think...)
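
a quick sanity check of that pattern (a sketch; the trailing '^\.\.$'
alternative is my addition, to also catch a bare '..'):

    my $re = qr!(?:^\.\./|/\.\./|/\.\.$|^\.\.$)!;
    for my $fn ('my.very.cool...iso.iso', '../evil.iso', 'sub/../evil.iso', 'sub/..') {
        printf "%-24s => %s\n", $fn, $fn =~ $re ? 'rejected' : 'allowed';
    }
    # my.very.cool...iso.iso => allowed
    # ../evil.iso            => rejected
    # sub/../evil.iso        => rejected
    # sub/..                 => rejected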


+   $filename =~ s/^[\/|\\]*//; # files are uploaded relative to storage 
path


is the '|' intentionally added here? in [] the characters are or'd anyway 


+   $filename =~ s/[^-a-zA-Z0-9_.\/\\]/_/g;
  
  	my $path;

-
if ($content eq 'iso') {
-   if ($filename !~ m![^/]+$PVE::Storage::iso_extension_re$!) {
+   if ($filename !~ m!$PVE::Storage::iso_extension_re$!) {
raise_param_exc({ filename => "missing '.iso' or '.img' 
extension" });
}
$path = PVE::Storage::get_iso_dir($cfg, $param->{storage});
} elsif ($content eq 'vztmpl') {
-   if ($filename !~ m![^/]+\.tar\.[gx]z$!) {
+   if ($filename !~ m!\.tar\.[gx]z$!) {
raise_param_exc({ filename => "missing '.tar.gz' or '.tar.xz' 
extension" });
}
$path = PVE::Storage::get_vztmpl_dir($cfg, $param->{storage});
} else {
raise_param_exc({ content => "upload content type '$content' not 
allowed" });
}
-
die "storage '$param->{storage}' does not support '$content' content\n"
if !$scfg->{content}->{$content};
  
diff --git a/PVE/Storage.pm b/PVE/Storage.pm

index 60b8310..9ec40a5 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -866,7 +866,7 @@ sub template_list {
  }
  
  sub volume_list {

-my ($cfg, $storeid, $vmid, $content) = @_;
+my ($cfg, $storeid, $vmid, $content, $param) = @_;
  
  my @ctypes = qw(rootdir images vztmpl iso backup snippets);
  
@@ -880,7 +880,7 @@ sub volume_list {
  
  activate_storage($cfg, $storeid);
  
-my $res = $plugin->list_volumes($storeid, $scfg, $vmid, $cts);

+my $res = $plugin->list_volumes($storeid, $scfg, $vmid, $cts, $param);
  
  @$res = sort {lc($a->{volid}) cmp lc ($b->{volid}) } @$res;
  
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm

index 8c0dae1..e81c89f 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -415,6 +415,9 @@ sub parse_name_dir {
  die "unable to parse volume filename '$name'\n";
  }
  
+# for iso and vztmpl returns

+#   [0] format
+#   [1] notdir (=basename+suffix; no directories)


nit: again the 'notdir' here...


  sub parse_volname {
  my ($class, $volname) = @_;
  
@@ -428,9 +431,11 @@ sub parse_volname {

 

Re: [pve-devel] [PATCH storage 1/4] base plugin: Increase test coverage

2020-04-08 Thread Dominik Csapak

sry for taking so long with the review, 
i am looking over it now, after that i will test 

comments inline

On 4/2/20 1:34 PM, Dominic Jäger wrote:

Signed-off-by: Dominic Jäger 
---
This did not exist separately in RFC

  test/Makefile|   5 +-
  test/run_plugin_tests.pl | 184 +++
  2 files changed, 188 insertions(+), 1 deletion(-)
  create mode 100755 test/run_plugin_tests.pl

diff --git a/test/Makefile b/test/Makefile
index 833a597..c54b10f 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
  all: test
  
-test: test_zfspoolplugin test_disklist test_bwlimit

+test: test_zfspoolplugin test_disklist test_bwlimit test_plugin
  
  test_zfspoolplugin: run_test_zfspoolplugin.pl

./run_test_zfspoolplugin.pl
@@ -10,3 +10,6 @@ test_disklist: run_disk_tests.pl
  
  test_bwlimit: run_bwlimit_tests.pl

./run_bwlimit_tests.pl
+
+test_plugin: run_plugin_tests.pl
+   ./run_plugin_tests.pl
diff --git a/test/run_plugin_tests.pl b/test/run_plugin_tests.pl
new file mode 100755
index 000..cd93430
--- /dev/null
+++ b/test/run_plugin_tests.pl
@@ -0,0 +1,184 @@
+#!/usr/bin/perl
+use strict;
+use warnings;
+
+use lib ('.', '..');
+use Test::More tests => 32;
+use Test::MockModule qw(new);
+use File::Temp qw(tempdir);
+use File::Path qw(make_path);
+use Data::Dumper qw(Dumper);
+use Storable qw(dclone);
+use PVE::Storage;
+
+my $plugin = 'PVE::Storage::Plugin';
+my $basename = 'test';
+
+my $iso_type = 'iso';
+my $iso_suffix = '.iso';
+my $iso_notdir = "$basename$iso_suffix";


why 'notdir' ? isn't that just the filename?
i'd rather prefer 'filename' for that..
also one could argue that it is just the basename, but i get that
we want to differentiate between the name and the suffix

anyway i find 'notdir' very confusing^^
any other opinion @all ?


+my $iso_volname = "$iso_type/$iso_notdir";
+
+my $vztmpl_type = 'vztmpl';
+my $vztmpl_suffix = '.tar.gz';
+my $vztmpl_notdir = "$basename$vztmpl_suffix";
+my $vztmpl_volname = "$vztmpl_type/$vztmpl_notdir";
+
+my $iso_with_dots = "$iso_type/../$iso_notdir";
+my $vztmpl_with_dots = "$vztmpl_type/../$vztmpl_notdir";


this is only used in patch 2, it would make more sense to add it there


+
+my $image_type = 'images';
+my $vmid = '100';
+my $image_basename = 'vm-100-disk-0';
+my $raw_image_format = 'raw'; # Tests for parse_volname don't need the dot
+my $raw_image_notdir = "$image_basename.$raw_image_format";
+my $raw_image_volname = "$vmid/$raw_image_notdir";
+my $qcow_image_format = 'qcow2';
+my $qcow_image_notdir = "$image_basename.$qcow_image_format";
+my $qcow_image_volname = "$vmid/$qcow_image_notdir";
+my $vmdk_image_format = 'vmdk';
+my $vmdk_image_notdir = "$image_basename.$vmdk_image_format";
+my $vmdk_image_volname = "$vmid/$vmdk_image_notdir";
+
+my $type_index = 0;
+my $notdir_index = 1;
+my $format_index = 6;
+
+is (($plugin->parse_volname($iso_volname))[$type_index],
+$iso_type, 'parse_volname: type for iso');
+is (($plugin->parse_volname($iso_volname))[$notdir_index],
+$iso_notdir, 'parse_volname: notdir for iso');
+
+is (($plugin->parse_volname($vztmpl_volname))[$type_index],
+$vztmpl_type, 'parse_volname: type for vztmpl');
+is (($plugin->parse_volname($vztmpl_volname))[$notdir_index],
+$vztmpl_notdir, 'parse_volname: notdir for vztmpl');
+
+is (($plugin->parse_volname($raw_image_volname))[$type_index],
+$image_type, 'parse_volname: type for raw image');
+is (($plugin->parse_volname($raw_image_volname))[$notdir_index],
+$raw_image_notdir, 'parse_volname: notdir for raw image');
+is (($plugin->parse_volname($raw_image_volname))[$format_index],
+$raw_image_format, 'parse_volname: format for raw image');
+
+is (($plugin->parse_volname($qcow_image_volname))[$type_index],
+$image_type, 'parse_volname: type for qcow image');
+is (($plugin->parse_volname($qcow_image_volname))[$notdir_index],
+$qcow_image_notdir, 'parse_volname: notdir for qcow image');
+is (($plugin->parse_volname($qcow_image_volname))[$format_index],
+$qcow_image_format, 'parse_volname: format for qcow image');
+
+is (($plugin->parse_volname($vmdk_image_volname))[$type_index],
+$image_type, 'parse_volname: type for vmdk image');
+is (($plugin->parse_volname($vmdk_image_volname))[$notdir_index],
+$vmdk_image_notdir, 'parse_volname: notdir for vmdk image');
+is (($plugin->parse_volname($vmdk_image_volname))[$format_index],
+$vmdk_image_format, 'parse_volname: format for vmdk image');


mhmm.. the tests should work (not tested yet) and are ok,
but this whole thing calls for refactoring, e.g.

my $tests =
[
{
name => "raw image",
volname => "$vmid/$image.raw",
format => 'raw',
notdir => "$image.raw",
type => "images",
},
{
...
},
...
];

my $helper = sub {
my ($volname, $index, $value, $msg) = @_;
	is (($plugin->parse_volname($volname))[$index], 

[pve-devel] [PATCH storage] Add comment for volume_has_feature

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

Took the opportunity to document the whole function
instead of just the new valid_target_formats option.

 PVE/Storage.pm | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 60b8310..48179b5 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -285,6 +285,21 @@ sub volume_snapshot_delete {
 }
 }
 
+# check if a volume or snapshot supports a given feature
+# $feature - one of:
+#clone - linked clone is possible
+#copy  - full clone is possible
+#replicate - replication is possible
+#snapshot - taking a snapshot is possible
+#sparseinit - volume is sparsely initialized
+#template - conversion to base image is possible
+# $snap - check if the feature is supported for a given snapshot
+# $running - if the guest owning the volume is running
+# $opts - hash with further options:
+# valid_target_formats - list of formats for the target of a copy/clone
+#operation that the caller could work with. The
+#format of $volid is always considered valid and if
+#no list is specified, all formats are considered valid.
 sub volume_has_feature {
 my ($cfg, $feature, $volid, $snap, $running, $opts) = @_;
 
-- 
2.20.1
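
For reference, a hypothetical call site using the documented $opts key could 
look like this (a sketch based only on the signature and comment above):

    # can this volume be full-cloned to a target that handles raw or qcow2?
    my $can_copy = PVE::Storage::volume_has_feature(
        $cfg, 'copy', $volid, undef, 0,
        { valid_target_formats => ['raw', 'qcow2'] },
    );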


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 7/7] Fix: #2124 storage: add zstd support

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm | 4 +++-
 PVE/Storage/Plugin.pm  | 2 +-
 test/test_archive_info.pm  | 9 ++---
 test/test_list_volumes.pm  | 4 
 test/test_parse_volname.pm | 3 +++
 test/test_path_to_volume_id.pm | 2 ++
 6 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 46384ff..df477a7 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1295,10 +1295,12 @@ sub decompressor_info {
tar => {
gz => ['tar', '-z'],
lzo => ['tar', '--lzop'],
+   zst => ['tar', '--zstd'],
},
vma => {
gz => ['zcat'],
lzo => ['lzop', '-d', '-c'],
+   zst => ['zstd', '-q', '-d', '-c'],
},
 };
 
@@ -1389,7 +1391,7 @@ sub extract_vzdump_config_vma {
my $errstring;
my $err = sub {
my $output = shift;
-   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: 
stdout: Broken pipe/) {
+   if ($output =~ m/lzop: Broken pipe: / || $output =~ m/gzip: 
stdout: Broken pipe/ || $output =~ m/zstd: error 70 : Write error : Broken 
pipe/) {
$broken_pipe = 1;
} elsif (!defined ($errstring) && $output !~ m/^\s*$/) {
$errstring = "Failed to extract config from VMA archive: 
$output\n";
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index ef6e6de..ec434a4 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -17,7 +17,7 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
-use constant COMPRESSOR_RE => 'gz|lzo';
+use constant COMPRESSOR_RE => 'gz|lzo|zst';
 
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
diff --git a/test/test_archive_info.pm b/test/test_archive_info.pm
index 464cc89..cd126f8 100644
--- a/test/test_archive_info.pm
+++ b/test/test_archive_info.pm
@@ -10,19 +10,22 @@ use Test::More;
 
 my @tests = (
 # backup archives
-[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma', { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => undef, 'compression' => undef },   
 'Backup archive, vma' ],
-[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => ['zcat'], 'compression' => 'gz' }, 
 'Backup archive, vma, gz' ],
-[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => ['lzop', '-d', '-c'], 
'compression' => 'lzo' }, 'Backup archive, vma, lzo' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma', { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => undef, 'compression' => undef },   
   'Backup archive, vma' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => ['zcat'], 'compression' => 'gz' }, 
   'Backup archive, vma, gz' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => ['lzop', '-d', '-c'], 
'compression' => 'lzo' },   'Backup archive, vma, lzo' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.zst', { 'type' => 
'qemu', 'format' => 'vma', 'decompressor' => ['zstd', '-q', '-d', '-c'], 
'compression' => 'zst' }, 'Backup archive, vma, zst' ],
 
 [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar', { 'type' => 
'lxc', 'format' => 'tar', 'decompressor' => undef, 'compression' => undef },
 'Backup archive, lxc' ],
 [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.gz',  { 'type' => 
'lxc', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 
'gz' },  'Backup archive, lxc, gz' ],
 [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tgz', { 'type' => 
'lxc', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 
'gz' },  'Backup archive, lxc, tgz' ],
 [ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo', { 'type' => 
'lxc', 'format' => 'tar', 'decompressor' => ['tar', '--lzop'], 'compression' => 
'lzo' }, 'Backup archive, lxc, lzo' ],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.zst', { 'type' => 
'lxc', 'format' => 'tar', 'decompressor' => ['tar', '--zstd'], 'compression' => 
'zst' }, 'Backup archive, lxc, zst' ],
 
 [ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar', { 'type' => 
'openvz', 'format' => 'tar', 'decompressor' => undef, 'compression' => undef }, 
'Backup archive, openvz' ],
 [ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar.gz',  { 'type' => 
'openvz', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 
'gz' },  'Backup archive, openvz, gz' ],
 [ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tgz', { 'type' => 
'openvz', 'format' => 'tar', 'decompressor' => ['tar', '-z'], 'compression' => 
'gz' },  'Backup archive, openvz, tgz' ],
 [ 

[pve-devel] [PATCH qemu-server v3 1/2] restore: replace archive

2020-04-08 Thread Alwin Antreich
format/compression regex, to reduce code duplication, as archive_info
and decompressor_info provide the same information as well.

Signed-off-by: Alwin Antreich 
---
 PVE/QemuServer.pm | 36 ++--
 1 file changed, 6 insertions(+), 30 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 510a995..e5bf41b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5501,28 +5501,9 @@ sub tar_restore_cleanup {
 sub restore_file_archive {
 my ($archive, $vmid, $user, $opts) = @_;
 
-my $format = $opts->{format};
-my $comp;
-
-if ($archive =~ m/\.tgz$/ || $archive =~ m/\.tar\.gz$/) {
-   $format = 'tar' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.tar$/) {
-   $format = 'tar' if !$format;
-} elsif ($archive =~ m/.tar.lzo$/) {
-   $format = 'tar' if !$format;
-   $comp = 'lzop';
-} elsif ($archive =~ m/\.vma$/) {
-   $format = 'vma' if !$format;
-} elsif ($archive =~ m/\.vma\.gz$/) {
-   $format = 'vma' if !$format;
-   $comp = 'gzip';
-} elsif ($archive =~ m/\.vma\.lzo$/) {
-   $format = 'vma' if !$format;
-   $comp = 'lzop';
-} else {
-   $format = 'vma' if !$format; # default
-}
+my $info = PVE::Storage::archive_info($archive);
+my $format = $opts->{format} // $info->{format};
+my $comp = $info->{compression};
 
 # try to detect archive format
 if ($format eq 'tar') {
@@ -6109,14 +6090,9 @@ sub restore_vma_archive {
 }
 
 if ($comp) {
-   my $cmd;
-   if ($comp eq 'gzip') {
-   $cmd = ['zcat', $readfrom];
-   } elsif ($comp eq 'lzop') {
-   $cmd = ['lzop', '-d', '-c', $readfrom];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
+   my $info = PVE::Storage::decompressor_info('vma', $comp);
+   my $cmd = $info->{decompressor};
+   push @$cmd, $readfrom;
$add_pipe->($cmd);
 }
 
-- 
2.20.1
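
As a usage illustration (a sketch based on the helpers above and on the 
expectations in test_archive_info.pm; the file name is made up):

    my $info = PVE::Storage::archive_info('vzdump-qemu-100-2020_01_01-12_00_00.vma.zst');
    # $info->{type}         => 'qemu'
    # $info->{format}       => 'vma'
    # $info->{compression}  => 'zst'
    # $info->{decompressor} => ['zstd', '-q', '-d', '-c']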


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 6/7] backup: more compact regex for backup filter

2020-04-08 Thread Alwin Antreich
The more compact form of the regex should allow easier addition of new
file extensions.

Signed-off-by: Alwin Antreich 
---
 PVE/Storage.pm| 4 ++--
 PVE/Storage/Plugin.pm | 6 --
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0bbd168..46384ff 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -519,7 +519,7 @@ sub path_to_volume_id {
} elsif ($path =~ m!^$privatedir/(\d+)$!) {
my $vmid = $1;
return ('rootdir', "$sid:rootdir/$vmid");
-   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!) {
+   } elsif ($path =~ 
m!^$backupdir/([^/]+\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!)
 {
my $name = $1;
return ('iso', "$sid:backup/$name");
}
@@ -1321,7 +1321,7 @@ sub archive_info {
 my $info;
 
 my $volid = basename($archive);
-if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/)
 {
+if ($volid =~ 
/vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?$/)
 {
$info = decompressor_info($2, $3);
$info->{type} = $1;
 } else {
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 0ab44ce..ef6e6de 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -17,6 +17,8 @@ use JSON;
 
 use base qw(PVE::SectionConfig);
 
+use constant COMPRESSOR_RE => 'gz|lzo';
+
 our @COMMON_TAR_FLAGS = qw(
 --one-file-system
 -p --sparse --numeric-owner --acls
@@ -434,7 +436,7 @@ sub parse_volname {
return ('vztmpl', $1);
 } elsif ($volname =~ m!^rootdir/(\d+)$!) {
return ('rootdir', $1, $1);
-} elsif ($volname =~ 
m!^backup/([^/]+(\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo)))$!) {
+} elsif ($volname =~ 
m!^backup/([^/]+(?:\.(?:tgz|(?:(?:tar|vma)(?:\.(?:${\COMPRESSOR_RE}))?$!) {
my $fn = $1;
if ($fn =~ m/^vzdump-(openvz|lxc|qemu)-(\d+)-.+/) {
return ('backup', $fn, $2);
@@ -949,7 +951,7 @@ my $get_subdir_files = sub {
 
} elsif ($tt eq 'backup') {
next if defined($vmid) && $fn !~  m/\S+-$vmid-\S+/;
-   next if $fn !~ 
m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
+   next if $fn !~ 
m!/([^/]+\.(tgz|(?:(?:tar|vma)(?:\.(${\COMPRESSOR_RE}))?)))$!;
 
my $format = $2;
$fn = $1;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 3/7] test: list_volumes

2020-04-08 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/run_parser_tests.pl  |   6 +-
 test/test_list_volumes.pm | 309 ++
 2 files changed, 314 insertions(+), 1 deletion(-)
 create mode 100644 test/test_list_volumes.pm

diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
index 79093aa..4b1c003 100755
--- a/test/run_parser_tests.pl
+++ b/test/run_parser_tests.pl
@@ -6,7 +6,11 @@ use warnings;
 use TAP::Harness;
 
 my $harness = TAP::Harness->new( { verbosity => -1 });
-my $res = $harness->runtests("test_archive_info.pm", "test_parse_volname.pm");
+my $res = $harness->runtests(
+"test_archive_info.pm",
+"test_parse_volname.pm",
+"test_list_volumes.pm",
+);
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
 
diff --git a/test/test_list_volumes.pm b/test/test_list_volumes.pm
new file mode 100644
index 000..4b3fdb7
--- /dev/null
+++ b/test/test_list_volumes.pm
@@ -0,0 +1,309 @@
+package PVE::Storage::TestListVolumes;
+
+BEGIN {
+use constant DEFAULT_SIZE => 131072; # 128 kiB
+use constant DEFAULT_USED => 262144; # 256 kiB
+use constant DEFAULT_CTIME => 1234567890;
+
+# override the build-in stat routine to fix output values
+# used in $get_subdir_files
+*CORE::GLOBAL::stat = sub {
+   my ($fn) = shift;
+   my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime,
+   $mtime, $ctime, $blksize, $blocks) = CORE::stat($fn);
+
+   # fixed: file creation time
+   $ctime = DEFAULT_CTIME;
+   $size = DEFAULT_SIZE;
+
+   return ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime,
+   $mtime, $ctime, $blksize, $blocks);
+};
+}
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use PVE::Cluster;
+use PVE::Tools qw(run_command);
+
+use Test::More;
+use Test::MockModule;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+
+# get_vmlist() return values
+my $vmlist = {
+'version' => 1,
+'ids' => {
+   '16110' => {
+   'version' => 4,
+   'node' => 'x42',
+   'type' => 'qemu'
+   },
+   '16112' => {
+   'type' => 'lxc',
+   'version' => 7,
+   'node' => 'x42'
+   },
+   '16114' => {
+   'type' => 'qemu',
+   'node' => 'x42',
+   'version' => 2
+   },
+   '16113' => {
+   'version' => 5,
+   'node' => 'x42',
+   'type' => 'qemu'
+   },
+   '16115' => {
+   'node' => 'x42',
+   'version' => 1,
+   'type' => 'qemu'
+   },
+   '9004' => {
+   'type' => 'qemu',
+   'version' => 6,
+   'node' => 'x42'
+   }
+}
+};
+
+my $storage_dir = getcwd() . '/test_list_volumes';
+my $scfg = {
+'type' => 'dir',
+'maxfiles' => 0,
+'path' => $storage_dir,
+'shared' => 0,
+'content' => {
+   'iso' => 1,
+   'rootdir' => 1,
+   'vztmpl' => 1,
+   'images' => 1,
+   'snippets' => 1,
+   'backup' => 1,
+},
+};
+my @tests = (
+{
+   vmid => '16110',
+   files => [
+   "$storage_dir/images/16110/vm-16110-disk-0.qcow2",
+   "$storage_dir/images/16110/vm-16110-disk-1.qcow2",
+   "$storage_dir/images/16110/vm-16110-disk-2.qcow2",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo",
+   "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma",
+   "$storage_dir/snippets/userconfig.yaml",
+   "$storage_dir/snippets/hookscript.pl",
+   ],
+   expected => [
+   { 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-0.qcow2' },
+   { 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-1.qcow2' },
+   { 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-2.qcow2' },
+   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz' },
+   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo' },
+   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 

[pve-devel] [PATCH v3 0/13] Fix: #2124 zstd

2020-04-08 Thread Alwin Antreich
Zstandard (zstd) [0] is a data compression algorithm, added in addition to
gzip and lzo for our backup/restore. It can utilize multiple CPU cores, but
by default it runs with one compression and one writer thread.


Here are some quick tests I made on my workstation. The files were placed
on a ram disk and filled with dd from /dev/urandom and /dev/zero.

__Compression__
file size: 1073741824 bytes
      = urandom =          = zero =
  995ms 1073766414    328ms   98192  zstd -k
  732ms 1073766414    295ms   98192  zstd -k -T4
  906ms 1073791036    562ms 4894779  lzop -k
31992ms 1073915558   5594ms 1042087  gzip -k
30832ms 1074069541   5776ms 1171491  pigz -k -p 1
 7814ms 1074069541   1567ms 1171491  pigz -k -p 4

__Decompression__
file size: 1073741824 bytes
= urandom =   = zero =
   712ms  869ms  zstd -d
   685ms  872ms  zstd -k -d -T4
   841ms 2462ms  lzop -d
  5417ms 4754ms  gzip -k -d
  1248ms 3118ms  pigz -k -d -p 1
  1236ms 2379ms  pigz -k -d -p 4


And I used the same ramdisk to move a VM onto it and run a quick
backup/restore.

__vzdump backup__
INFO: transferred 34359 MB in 69 seconds (497 MB/s) zstd -T1
INFO: transferred 34359 MB in 37 seconds (928 MB/s) zstd -T4
INFO: transferred 34359 MB in 51 seconds (673 MB/s) lzo
INFO: transferred 34359 MB in 1083 seconds (31 MB/s) gzip
INFO: transferred 34359 MB in 241 seconds (142 MB/s) pigz -n 4

__qmrestore__
progress 100% (read 34359738368 bytes, duration 36 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) zstd -d -T4

progress 100% (read 34359738368 bytes, duration 38 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) lzo

progress 100% (read 34359738368 bytes, duration 175 sec)
total bytes read 34359738368, sparse bytes 8005484544 (23.3%) pigz -n 4


v2 -> v3:
* split archive_info into decompressor_info and archive_info
* "compact" regex pattern is now a constant and used in
  multiple modules
* added tests for regex matching
* bug fix for ctime of backup files

v1 -> v2:
* factored out the decompressor info first, as Thomas suggested
* made the regex pattern of backup files more compact, easier to
  read (hopefully)
* less code changes for container restores

Thanks for any comment or suggestion in advance.

[0] https://facebook.github.io/zstd/

Alwin Antreich (13):
__pve-storage__
  storage: test: split archive format/compressor
  test: parse_volname
  test: list_volumes
  Fix: backup: ctime taken from stat not file name
  test: path_to_volume_id
  backup: more compact regex for backup file filter
  Fix: #2124 storage: add zstd support

 test/Makefile  |   5 +-
 PVE/Storage.pm |  85 ++---
 PVE/Storage/Plugin.pm  |   9 +-
 test/run_parser_tests.pl   |  17 ++
 test/test_archive_info.pm  |  57 ++
 test/test_list_volumes.pm  | 313 +
 test/test_parse_volname.pm |  98 +++
 test/test_path_to_volume_id.pm | 104 +++
 8 files changed, 661 insertions(+), 27 deletions(-)
 create mode 100755 test/run_parser_tests.pl
 create mode 100644 test/test_archive_info.pm
 create mode 100644 test/test_list_volumes.pm
 create mode 100644 test/test_parse_volname.pm
 create mode 100644 test/test_path_to_volume_id.pm

__pve-container__
  Fix: #2124 add zstd

 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

__qemu-server__
  restore: replace archive format/compression regex
  Fix #2124: Add support for zstd

 PVE/QemuServer.pm | 38 +++---
 1 file changed, 7 insertions(+), 31 deletions(-)

__pve-manager__
  Fix #2124: Add support for zstd

 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)

__pve-guest-common__
  Fix: #2124 add zstd support

 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH guest-common v3] Fix: #2124 add zstd support

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 PVE/VZDump/Common.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..9a7c4f6 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -88,7 +88,7 @@ my $confdesc = {
type => 'string',
description => "Compress dump file.",
optional => 1,
-   enum => ['0', '1', 'gzip', 'lzo'],
+   enum => ['0', '1', 'gzip', 'lzo', 'zstd'],
default => '0',
 },
 pigz=> {
@@ -98,6 +98,13 @@ my $confdesc = {
optional => 1,
default => 0,
 },
+zstd => {
+   type => "integer",
+   description => "Use zstd with N threads.".
+   " N=0 uses half of the available cores, N>0 uses N as thread count.",
+   optional => 1,
+   default => 1,
+},
 quiet => {
type => 'boolean',
description => "Be quiet.",
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 5/7] test: path_to_volume_id

2020-04-08 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/run_parser_tests.pl   |   2 +-
 test/test_path_to_volume_id.pm | 102 +
 2 files changed, 103 insertions(+), 1 deletion(-)
 create mode 100644 test/test_path_to_volume_id.pm

diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
index 4b1c003..635b59d 100755
--- a/test/run_parser_tests.pl
+++ b/test/run_parser_tests.pl
@@ -10,7 +10,7 @@ my $res = $harness->runtests(
 "test_archive_info.pm",
 "test_parse_volname.pm",
 "test_list_volumes.pm",
+"test_path_to_volume_id.pm",
 );
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
-
diff --git a/test/test_path_to_volume_id.pm b/test/test_path_to_volume_id.pm
new file mode 100644
index 000..e693974
--- /dev/null
+++ b/test/test_path_to_volume_id.pm
@@ -0,0 +1,102 @@
+package PVE::Storage::TestPathToVolumeId;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+
+use Test::More;
+
+use Cwd;
+use File::Basename;
+use File::Path qw(make_path remove_tree);
+
+my $storage_dir = getcwd() . '/test_path_to_volume_id';
+my $scfg = {
+'digest' => 'd29306346b8b25b90a4a96165f1e8f52d1af1eda',
+'ids' => {
+   'local' => {
+   'shared' => 0,
+   'path' => "$storage_dir",
+   'type' => 'dir',
+   'content' => {
+   'snippets' => 1,
+   'rootdir' => 1,
+   'images' => 1,
+   'iso' => 1,
+   'backup' => 1,
+   'vztmpl' => 1
+   },
+   'maxfiles' => 0
+   }
+},
+'order' => {
+   'local' => 1
+}
+};
+
+my @tests = (
+   [ "$storage_dir/images/16110/vm-16110-disk-0.qcow2", ['images', 
'local:16110/vm-16110-disk-0.qcow2'], 'Image, qcow2' ],
+   [ "$storage_dir/images/16112/vm-16112-disk-0.raw",   ['images', 
'local:16112/vm-16112-disk-0.raw'],   'Image, raw' ],
+   [ "$storage_dir/images/9004/base-9004-disk-0.qcow2", ['images', 
'local:9004/base-9004-disk-0.qcow2'], 'Image template, qcow2' ],
+
+   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz",  
['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz'],  'Backup, 
vma.gz' ],
+   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo", 
['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo'], 'Backup, 
vma.lzo' ],
+   [ "$storage_dir/dump/vzdump-qemu-16110-2020_03_30-21_13_55.vma", 
['iso', 'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma'], 'Backup, 
vma' ],
+   [ "$storage_dir/dump/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo",  
['iso', 'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo'],  'Backup, 
tar.lzo' ],
+
+   [ "$storage_dir/template/iso/yet-again-a-installation-disk.iso",
  ['iso', 'local:iso/yet-again-a-installation-disk.iso'],  'ISO 
file' ],
+   [ 
"$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz", 
['vztmpl', 'local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz'], 'CT 
template, tar.gz' ],
+
+   [ "$storage_dir/private/1234/",  ['rootdir', 
'local:rootdir/1234'],  'Rootdir' ], # fileparse needs / at the 
end
+   [ "$storage_dir/images/1234/subvol-1234-disk-0.subvol/", ['images', 
'local:1234/subvol-1234-disk-0.subvol'], 'Rootdir, folder subvol' ], # 
fileparse needs / at the end
+
+   # no matches
+   [ "$storage_dir/snippets/userconfig.yaml",  
  [''], 'Snippets, yaml' ],
+   [ "$storage_dir/snippets/hookscript.pl",
  [''], 'Snippets, hookscript' ],
+   [ 
"$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.xz", [''], 
'CT template, tar.xz' ],
+
+   # no matches, path or files with failures
+   [ "$storage_dir/images//base-4321-disk-0.raw",  
   [''], 'Base template, string as vmid in folder name' ],
+   [ "$storage_dir/template/iso/yet-again-a-installation-disk.dvd",
   [''], 'ISO file, wrong ending' ],
+   [ 
"$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.zip.gz",  [''], 
'CT template, wrong ending, zip.gz' ],
+   [ 
"$storage_dir/template/cache/debian-10.0-standard_10.0-1_amd64.tar.bz2", [''], 
'CT template, wrong ending, tar bz2' ],
+   [ "$storage_dir/private/subvol-19254-disk-0/",  
   [''], 'Rootdir as subvol, wrong path' ],
+   [ "$storage_dir/dump/vzdump-openvz-16112-2020_03_30-21_39_30.tar.bz2",  
   [''], 'Backup, wrong ending, openvz, tar.bz2' ],
+   [ "$storage_dir/dump/vzdump-openvz-16112-2020_03_30-21_39_30.zip.gz",   
   [''], 'Backup, wrong format, openvz, zip.gz' ],
+   [ "$storage_dir/dump/vzdump-openvz-16112-2020_03_30-21_39_30.tgz.lzo",  
   [''], 'Backup, wrong format, openvz, tgz.lzo' ],
+   [ 

[pve-devel] [PATCH storage v3 1/7] storage: test: split archive format/compressor

2020-04-08 Thread Alwin Antreich
detection into separate functions so they are reusable and easier
to modify.
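
A hedged usage sketch of the two helpers (return values follow the
implementation in the diff below; the file name is illustrative):

    my $info = PVE::Storage::archive_info(
        'vzdump-qemu-100-2020_03_30-21_12_40.vma.lzo');
    # $info->{type}         => 'qemu'
    # $info->{format}       => 'vma'
    # $info->{compression}  => 'lzo'
    # $info->{decompressor} => ['lzop', '-d', '-c']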

Signed-off-by: Alwin Antreich 
---
 test/Makefile |  5 ++-
 PVE/Storage.pm| 79 ---
 test/run_parser_tests.pl  | 12 ++
 test/test_archive_info.pm | 54 ++
 4 files changed, 128 insertions(+), 22 deletions(-)
 create mode 100755 test/run_parser_tests.pl
 create mode 100644 test/test_archive_info.pm

diff --git a/test/Makefile b/test/Makefile
index 833a597..838449f 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -1,6 +1,6 @@
 all: test
 
-test: test_zfspoolplugin test_disklist test_bwlimit
+test: test_zfspoolplugin test_disklist test_bwlimit test_parsers
 
 test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
@@ -10,3 +10,6 @@ test_disklist: run_disk_tests.pl
 
 test_bwlimit: run_bwlimit_tests.pl
./run_bwlimit_tests.pl
+
+test_parsers: run_parser_tests.pl
+   ./run_parser_tests.pl
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 60b8310..0bbd168 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1284,6 +1284,53 @@ sub foreach_volid {
 }
 }
 
+sub decompressor_info {
+my ($format, $comp) = @_;
+
+if ($format eq 'tgz' && !defined($comp)) {
+   ($format, $comp) = ('tar', 'gz');
+}
+
+my $decompressor = {
+   tar => {
+   gz => ['tar', '-z'],
+   lzo => ['tar', '--lzop'],
+   },
+   vma => {
+   gz => ['zcat'],
+   lzo => ['lzop', '-d', '-c'],
+   },
+};
+
+die "ERROR: archive format not defined\n"
+   if !defined($decompressor->{$format});
+
+my $decomp = $decompressor->{$format}->{$comp} if $comp;
+
+my $info = {
+   format => $format,
+   compression => $comp,
+   decompressor => $decomp,
+};
+
+return $info;
+}
+
+sub archive_info {
+my ($archive) = shift;
+my $info;
+
+my $volid = basename($archive);
+if ($volid =~ /vzdump-(lxc|openvz|qemu)-\d+-(?:\d{4})_(?:\d{2})_(?:\d{2})-(?:\d{2})_(?:\d{2})_(?:\d{2})\.(tgz$|tar|vma)(?:\.(gz|lzo))?$/) {
+   $info = decompressor_info($2, $3);
+   $info->{type} = $1;
+} else {
+   die "ERROR: couldn't determine format and compression type\n";
+}
+
+return $info;
+}
+
 sub extract_vzdump_config_tar {
 my ($archive, $conf_re) = @_;
 
@@ -1329,16 +1376,12 @@ sub extract_vzdump_config_vma {
 };
 
 
+my $info = archive_info($archive);
+$comp //= $info->{compression};
+my $decompressor = $info->{decompressor};
+
 if ($comp) {
-   my $uncomp;
-   if ($comp eq 'gz') {
-   $uncomp = ["zcat", $archive];
-   } elsif ($comp eq 'lzo') {
-   $uncomp = ["lzop", "-d", "-c", $archive];
-   } else {
-   die "unknown compression method '$comp'\n";
-   }
-   $cmd = [$uncomp, ["vma", "config", "-"]];
+   $cmd = [ [@$decompressor, $archive], ["vma", "config", "-"] ];
 
# in some cases, lzop/zcat exits with 1 when its stdout pipe is
# closed early by vma, detect this and ignore the exit code later
@@ -1388,20 +1431,14 @@ sub extract_vzdump_config {
 }
 
 my $archive = abs_filesystem_path($cfg, $volid);
+my $info = archive_info($archive);
+my $format = $info->{format};
+my $comp = $info->{compression};
+my $type = $info->{type};
 
-if ($volid =~ /vzdump-(lxc|openvz)-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|(tar(\.(gz|lzo))?))$/) {
+if ($type eq 'lxc' || $type eq 'openvz') {
return extract_vzdump_config_tar($archive, 
qr!^(\./etc/vzdump/(pct|vps)\.conf)$!);
-} elsif ($volid =~ /vzdump-qemu-\d+-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?))$/) {
-   my $format;
-   my $comp;
-   if ($7 eq 'tgz') {
-   $format = 'tar';
-   $comp = 'gz';
-   } else {
-   $format = $9;
-   $comp = $11 if defined($11);
-   }
-
+} elsif ($type eq 'qemu') {
if ($format eq 'tar') {
return extract_vzdump_config_tar($archive, 
qr!\(\./qemu-server\.conf\)!);
} else {
diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
new file mode 100755
index 000..042112c
--- /dev/null
+++ b/test/run_parser_tests.pl
@@ -0,0 +1,12 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use TAP::Harness;
+
+my $harness = TAP::Harness->new( { verbosity => -1 });
+my $res = $harness->runtests("test_archive_info.pm");
+
+exit -1 if !$res || $res->{failed} || $res->{parse_errors};
+
diff --git a/test/test_archive_info.pm b/test/test_archive_info.pm
new file mode 100644
index 000..464cc89
--- /dev/null
+++ b/test/test_archive_info.pm
@@ -0,0 +1,54 @@
+package PVE::Storage::TestArchiveInfo;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my @tests = (
+# backup archives
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma', { 'type' => 
'qemu', 

[pve-devel] [PATCH manager v3] Fix #2124: Add support for zstd

2020-04-08 Thread Alwin Antreich
This patch adds zstd to the compression selection for backup on the GUI
and adds .zst to the backup file filter. It also adds zstd as a package
install dependency.
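
For example (hedged, following the logic in the diff below): with 'zstd: 0'
a host reporting 8 cores runs 'zstd --threads=4', and a host with 7 cores
also gets '--threads=4', since int((7 + 1) / 2) rounds the half core up.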

Signed-off-by: Alwin Antreich 
---
 PVE/VZDump.pm| 11 +--
 debian/control   |  1 +
 www/manager6/form/CompressionSelector.js |  3 ++-
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index f3274196..e97bd817 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -609,6 +609,13 @@ sub compressor_info {
} else {
return ('gzip --rsyncable', 'gz');
}
+} elsif ($opt_compress eq 'zstd') {
+   my $zstd_threads = $opts->{zstd} // 1;
+   if ($zstd_threads == 0) {
+   my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
+   $zstd_threads = int(($cpuinfo->{cpus} + 1)/2);
+   }
+   return ("zstd --threads=${zstd_threads}", 'zst');
 } else {
die "internal error - unknown compression option '$opt_compress'";
 }
@@ -620,7 +627,7 @@ sub get_backup_file_list {
 my $bklist = [];
 foreach my $fn (<$dir/${bkname}-*>) {
next if $exclude_fn && $fn eq $exclude_fn;
-   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(gz|lzo))?)))$!) {
+   if ($fn =~ m!/(${bkname}-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?)))$!) {
$fn = "$dir/$1"; # untaint
my $t = timelocal ($7, $6, $5, $4, $3 - 1, $2);
push @$bklist, [$fn, $t];
@@ -928,7 +935,7 @@ sub exec_backup_task {
debugmsg ('info', "delete old backup '$d->[0]'", $logfd);
unlink $d->[0];
my $logfn = $d->[0];
-   $logfn =~ s/\.(tgz|((tar|vma)(\.(gz|lzo))?))$/\.log/;
+   $logfn =~ s/\.(tgz|((tar|vma)(\.(${\PVE::Storage::Plugin::COMPRESSOR_RE}))?))$/\.log/;
unlink $logfn;
}
}
diff --git a/debian/control b/debian/control
index ec5267a4..4ba05c6f 100644
--- a/debian/control
+++ b/debian/control
@@ -60,6 +60,7 @@ Depends: apt-transport-https | apt (>= 1.5~),
  logrotate,
  lsb-base,
  lzop,
+ zstd,
  novnc-pve,
  pciutils,
  perl (>= 5.10.0-19),
diff --git a/www/manager6/form/CompressionSelector.js 
b/www/manager6/form/CompressionSelector.js
index 8938fc0e..e8775e71 100644
--- a/www/manager6/form/CompressionSelector.js
+++ b/www/manager6/form/CompressionSelector.js
@@ -4,6 +4,7 @@ Ext.define('PVE.form.CompressionSelector', {
 comboItems: [
 ['0', Proxmox.Utils.noneText],
 ['lzo', 'LZO (' + gettext('fast') + ')'],
-['gzip', 'GZIP (' + gettext('good') + ')']
+['gzip', 'GZIP (' + gettext('good') + ')'],
+['zstd', 'ZSTD (' + gettext('better') + ')']
 ]
 });
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 4/7] Fix: backup: ctime taken from stat

2020-04-08 Thread Alwin Antreich
not the file name. The vzdump file was passed to the regex with its full
path. That regex captures the time from the file name to calculate the
epoch.

As the regex didn't match, the ctime from stat was used instead. This
resulted in the ctime showing when the file was last changed, not when
the backup was made.
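
As a hedged illustration of the epoch calculation the fix restores (the
file name is taken from the test data below; the value matches the
adjusted expectations when run in a UTC+2 timezone such as Europe/Vienna):

    use Time::Local qw(timelocal);

    my $fn = 'vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz';
    if ($fn =~ m/-(\d{4})_(\d{2})_(\d{2})-(\d{2})_(\d{2})_(\d{2})\./) {
        # timelocal(sec, min, hour, mday, mon, year)
        my $ctime = timelocal($6, $5, $4, $3, $2 - 1, $1 - 1900);
        # $ctime == 1585595500 with TZ='Europe/Vienna'
    }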

Signed-off-by: Alwin Antreich 
---
 PVE/Storage/Plugin.pm |  3 ++-
 test/test_list_volumes.pm | 16 
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 8c0dae1..0ab44ce 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -952,7 +952,8 @@ my $get_subdir_files = sub {
next if $fn !~ m!/([^/]+\.(tar|tar\.gz|tar\.lzo|tgz|vma|vma\.gz|vma\.lzo))$!;
 
my $format = $2;
-   $info = { volid => "$sid:backup/$1", format => $format };
+   $fn = $1;
+   $info = { volid => "$sid:backup/$fn", format => $format };
 
if ($fn =~ m!^vzdump\-(?:lxc|qemu)\-(?:[1-9][0-9]{2,8})\-(\d{4})_(\d{2})_(\d{2})\-(\d{2})_(\d{2})_(\d{2})\.${format}$!) {
my $epoch = timelocal($6, $5, $4, $3, $2-1, $1 - 1900);
diff --git a/test/test_list_volumes.pm b/test/test_list_volumes.pm
index 4b3fdb7..169c8be 100644
--- a/test/test_list_volumes.pm
+++ b/test/test_list_volumes.pm
@@ -106,9 +106,9 @@ my @tests = (
{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-0.qcow2' },
{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-1.qcow2' },
{ 'content' => 'images',   'ctime' => DEFAULT_CTIME, 'format' => 
'qcow2', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16110', 'volid' => 'local:16110/vm-16110-disk-2.qcow2' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'vma', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma' },
+   { 'content' => 'backup',   'ctime' => 1585595500,'format' => 
'vma.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_11_40.vma.gz' },
+   { 'content' => 'backup',   'ctime' => 1585595565,'format' => 
'vma.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_12_45.vma.lzo' },
+   { 'content' => 'backup',   'ctime' => 1585595635,'format' => 
'vma', 'size' => DEFAULT_SIZE, 'vmid' => '16110', 'volid' => 
'local:backup/vzdump-qemu-16110-2020_03_30-21_13_55.vma' },
{ 'content' => 'snippets', 'ctime' => DEFAULT_CTIME, 'format' => 
'snippet', 'size' => DEFAULT_SIZE, 'volid' => 'local:snippets/hookscript.pl' },
{ 'content' => 'snippets', 'ctime' => DEFAULT_CTIME, 'format' => 
'snippet', 'size' => DEFAULT_SIZE, 'volid' => 'local:snippets/userconfig.yaml' 
},
],
@@ -124,9 +124,9 @@ my @tests = (
],
expected => [
{ 'content' => 'rootdir',  'ctime' => DEFAULT_CTIME, 'format' => 
'raw', 'parent' => undef, 'size' => DEFAULT_SIZE, 'used' => DEFAULT_USED, 
'vmid' => '16112', 'volid' => 'local:16112/vm-16112-disk-0.raw' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'tar.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'tar.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz' },
-   { 'content' => 'backup',   'ctime' => DEFAULT_CTIME, 'format' => 
'tgz', 'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_59_30.tgz' },
+   { 'content' => 'backup',   'ctime' => 1585597170,'format' => 
'tar.lzo', 'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo' },
+   { 'content' => 'backup',   'ctime' => 1585597770,'format' => 
'tar.gz',  'size' => DEFAULT_SIZE, 'vmid' => '16112', 'volid' => 
'local:backup/vzdump-lxc-16112-2020_03_30-21_49_30.tar.gz' },
+   { 'content' => 'backup',   'ctime' => 1585598370,'format' => 

[pve-devel] [PATCH qemu-server v3 2/2] Fix #2124: Add support for zstd

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
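A hedged note: assuming COMPRESSOR_RE (introduced in the pve-storage part
of this series) expands to gz|lzo|zst, the completion filter below behaves
like:

    # offered for completion: vma.gz, vma.lzo, vma.zst
    # still skipped: uncompressed 'vma' and all tar-based formats
    next if $item->{format} !~ m/^vma\.(gz|lzo|zst)$/;
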
 PVE/QemuServer.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e5bf41b..c72ddf4 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7039,7 +7039,7 @@ sub complete_backup_archives {
 my $res = [];
 foreach my $id (keys %$data) {
foreach my $item (@{$data->{$id}}) {
-   next if $item->{format} !~ m/^vma\.(gz|lzo)$/;
+   next if $item->{format} !~ m/^vma\.(${\PVE::Storage::Plugin::COMPRESSOR_RE})$/;
push @$res, $item->{volid} if defined($item->{volid});
}
 }
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH storage v3 2/7] test: parse_volname

2020-04-08 Thread Alwin Antreich
Test to reduce the potential for accidental breakage on regex changes.

Signed-off-by: Alwin Antreich 
---
 test/run_parser_tests.pl   |  2 +-
 test/test_parse_volname.pm | 95 ++
 2 files changed, 96 insertions(+), 1 deletion(-)
 create mode 100644 test/test_parse_volname.pm

diff --git a/test/run_parser_tests.pl b/test/run_parser_tests.pl
index 042112c..79093aa 100755
--- a/test/run_parser_tests.pl
+++ b/test/run_parser_tests.pl
@@ -6,7 +6,7 @@ use warnings;
 use TAP::Harness;
 
 my $harness = TAP::Harness->new( { verbosity => -1 });
-my $res = $harness->runtests("test_archive_info.pm");
+my $res = $harness->runtests("test_archive_info.pm", "test_parse_volname.pm");
 
 exit -1 if !$res || $res->{failed} || $res->{parse_errors};
 
diff --git a/test/test_parse_volname.pm b/test/test_parse_volname.pm
new file mode 100644
index 000..84665d3
--- /dev/null
+++ b/test/test_parse_volname.pm
@@ -0,0 +1,95 @@
+package PVE::Storage::TestParseVolname;
+
+use strict;
+use warnings;
+
+use lib qw(..);
+
+use PVE::Storage;
+use Test::More;
+
+my @tests = (
+# VM images
+[ '4321/base-4321-disk-0.raw/1234/vm-1234-disk-0.raw', ['images', 
'vm-1234-disk-0.raw',   '1234', 'base-4321-disk-0.raw',   '4321', undef, 
'raw'],'VM disk image, linked, raw' ],
+[ '4321/base-4321-disk-0.qcow2/1234/vm-1234-disk-0.qcow2', ['images', 
'vm-1234-disk-0.qcow2', '1234', 'base-4321-disk-0.qcow2', '4321', undef, 
'qcow2'],  'VM disk image, linked, qcow2' ],
+[ '4321/base-4321-disk-0.vmdk/1234/vm-1234-disk-0.vmdk',   ['images', 
'vm-1234-disk-0.vmdk',  '1234', 'base-4321-disk-0.vmdk',  '4321', undef, 
'vmdk'],   'VM disk image, linked, vmdk' ],
+
+[ '4321/vm-4321-disk-0.qcow2/1234/vm-1234-disk-0.qcow2',['images', 
'vm-1234-disk-0.qcow2', '1234', 'vm-4321-disk-0.qcow2', '4321', undef, 
'qcow2'], 'VM disk image, linked, qcow2, vm- as base-' ],
+
+[ '1234/vm-1234-disk-1.raw',   ['images', 'vm-1234-disk-1.raw',   '1234', 
undef, undef, undef, 'raw'],   'VM disk image, raw' ],
+[ '1234/vm-1234-disk-1.qcow2', ['images', 'vm-1234-disk-1.qcow2', '1234', 
undef, undef, undef, 'qcow2'], 'VM disk image, qcow2' ],
+[ '1234/vm-1234-disk-1.vmdk',  ['images', 'vm-1234-disk-1.vmdk',  '1234', 
undef, undef, undef, 'vmdk'],  'VM disk image, vmdk' ],
+
+[ '4321/base-4321-disk-0.raw',   ['images', 'base-4321-disk-0.raw',   
'4321', undef, undef, 'base-', 'raw'],   'VM disk image, base, raw' ],
+[ '4321/base-4321-disk-0.qcow2', ['images', 'base-4321-disk-0.qcow2', 
'4321', undef, undef, 'base-', 'qcow2'], 'VM disk image, base, qcow2' ],
+[ '4321/base-4321-disk-0.vmdk',  ['images', 'base-4321-disk-0.vmdk',  
'4321', undef, undef, 'base-', 'vmdk'],  'VM disk image, base, vmdk' ],
+
+# iso
+[ 'iso/some-installation-disk.iso', ['iso', 'some-installation-disk.iso'], 
'ISO image, iso' ],
+[ 'iso/some-other-installation-disk.img', ['iso', 
'some-other-installation-disk.img'], 'ISO image, img' ],
+
+# container templates
+[ 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz', ['vztmpl', 
'debian-10.0-standard_10.0-1_amd64.tar.gz'], 'Container template tar.gz' ],
+[ 'vztmpl/debian-10.0-standard_10.0-1_amd64.tar.xz', ['vztmpl', 
'debian-10.0-standard_10.0-1_amd64.tar.xz'], 'Container template tar.xz' ],
+
+# container rootdir
+[ 'rootdir/1234',   ['rootdir', '1234',
  '1234'], 'Container rootdir, sub directory' ],
+[ '1234/subvol-1234-disk-0.subvol', ['images',  
'subvol-1234-disk-0.subvol', '1234', undef, undef, undef, 'subvol'],  
'Container rootdir, subvol' ],
+
+# backup archives
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma', ['backup', 
'vzdump-qemu-16110-2020_03_30-21_12_40.vma', '16110'], 'Backup archive, 
vma' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  ['backup', 
'vzdump-qemu-16110-2020_03_30-21_12_40.vma.gz',  '16110'], 'Backup archive, 
vma, gz' ],
+[ 'backup/vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', ['backup', 
'vzdump-qemu-16110-2020_03_30-21_12_40.vma.lzo', '16110'], 'Backup archive, 
vma, lzo' ],
+
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar', ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tar', '16112'], 'Backup archive, lxc' 
],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.gz',  ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tar.gz',  '16112'], 'Backup archive, lxc, 
gz' ],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tgz', ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tgz', '16112'], 'Backup archive, lxc, 
tgz' ],
+[ 'backup/vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo', ['backup', 
'vzdump-lxc-16112-2020_03_30-21_39_30.tar.lzo', '16112'], 'Backup archive, lxc, 
lzo' ],
+
+[ 'backup/vzdump-openvz-16112-2020_03_30-21_39_30.tar', ['backup', 
'vzdump-openvz-16112-2020_03_30-21_39_30.tar', '16112'], 'Backup archive, 
openvz' ],
+[ 

[pve-devel] [PATCH container v3] Fix: #2124 add zstd

2020-04-08 Thread Alwin Antreich
Signed-off-by: Alwin Antreich 
---
 src/PVE/LXC/Create.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 9faec63..91904b6 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -123,6 +123,7 @@ sub restore_tar_archive {
'.bz2' => '-j',
'.xz'  => '-J',
'.lzo'  => '--lzop',
+   '.zst'  => '--zstd',
);
if ($archive =~ /\.tar(\.[^.]+)?$/) {
if (defined($1)) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 18/19] cleanup_remotedisks: also include those migrated with storage_migrate

2020-04-08 Thread Fabian Ebner
Call cleanup_remotedisks in phase1_cleanup as well, because that's where
we end up if sync_disks fails, and some disks might already have been
transferred successfully by then.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 30 ++
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0a6277d..68eb083 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -577,16 +577,32 @@ sub sync_disks {
 sub cleanup_remotedisks {
 my ($self) = @_;
 
+my $cleanup_volids = {};
+
+foreach my $volid (values %{$self->{volume_map}}) {
+   $cleanup_volids->{$volid} = 1;
+}
+
 foreach my $target_drive (keys %{$self->{target_drive}}) {
my $drivestr = $self->{target_drive}->{$target_drive}->{drivestr};
next if !defined($drivestr);
 
my $drive = PVE::QemuServer::parse_drive($target_drive, $drivestr);
+   my $volid = $drive->{file};
+   next if !defined($volid);
 
-   # don't clean up replicated disks!
-   next if defined($self->{replicated_volumes}->{$drive->{file}});
+   if (defined($self->{replicated_volumes}->{$volid})) {
+   # don't clean up replicated disks!
+   delete $cleanup_volids->{$volid};
+   } else {
+   # volume_map might not yet contain all of the live-migrating volumes
+   # if there's an error while starting the drive-mirror jobs
+   $cleanup_volids->{$volid} = 1;
+   }
+}
 
-   my ($storeid, $volname) = PVE::Storage::parse_volume_id($drive->{file});
+foreach my $volid (keys %{$cleanup_volids}) {
+   my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
 
my $cmd = [@{$self->{rem_ssh}}, 'pvesm', 'free', "$storeid:$volname"];
 
@@ -638,11 +654,9 @@ sub phase1_cleanup {
$self->log('err', $err);
 }
 
-if ($self->{volumes}) {
-   foreach my $volid (@{$self->{volumes}}) {
-   $self->log('err', "found stale volume copy '$volid' on node 
'$self->{node}'");
-   # fixme: try to remove ?
-   }
+eval { $self->cleanup_remotedisks() };
+if (my $err = $@) {
+   $self->log('err', $err);
 }
 
 eval { $self->cleanup_bitmaps() };
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 storage 19/19] storage_migrate: add volname_for_storage helper

2020-04-08 Thread Fabian Ebner
to guess a valid volname for a targetstorage of a different type.
This makes it possible to migrate raw volumes between 'dir' and 'lvm'
storages.

It is only used when the storage types of the source storage X and the
target storage Y differ, and should work as long as Y uses the standard
naming scheme (VMID/vm-VMID-name.fmt or vm-VMID-name, respectively).
If it doesn't, we get an invalid name and fail, which matches the old
behavior (except that the old behavior did work when X and Y have
different types but share the same non-standard naming scheme).

The original name is preserved, except for a possible change of the
extension, and it is also checked whether the format is valid for the
target storage.
Example: mylvm:vm-123-disk-4 <-> mydir:123/vm-123-disk-4.raw
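
A hedged sketch of the translation, using the variable names from the
diff below:

    # strip a possible extension, then rebuild the volname in the
    # target storage's style
    (my $base = $name) =~ s/\.$format$//;
    my $target_volname = $target_scfg->{path}
        ? "$vmid/$base.$format"   # dir-style target, e.g. 123/vm-123-disk-4.raw
        : $base;                  # e.g. lvm-style target, vm-123-disk-4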

Signed-off-by: Fabian Ebner 
---
 PVE/Storage.pm | 29 -
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index bdafdc1..4863d84 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -562,6 +562,25 @@ sub abs_filesystem_path {
 return $path;
 }
 
+my $volname_for_storage = sub {
+my ($cfg, $volid, $target_storeid) = @_;
+
+my (undef, $name, $vmid, undef, undef, undef, $format) = 
parse_volname($cfg, $volid);
+my $target_scfg = storage_config($cfg, $target_storeid);
+
+my (undef, $valid_formats) = 
PVE::Storage::Plugin::default_format($target_scfg);
+my $format_is_valid = grep { $_ eq $format } @$valid_formats;
+die "unsupported format '$format' for storage type $target_scfg->{type}\n" 
if !$format_is_valid;
+
+(my $name_without_extension = $name) =~ s/\.$format$//;
+
+if ($target_scfg->{path}) {
+   return "$vmid/$name_without_extension.$format";
+} else {
+   return "$name_without_extension";
+}
+};
+
 sub storage_migrate {
 my ($cfg, $volid, $target_sshinfo, $target_storeid, $opts, $logfunc) = @_;
 
@@ -573,7 +592,6 @@ sub storage_migrate {
 my $allow_rename = $opts->{allow_rename} ? 1 : 0;
 
 my ($storeid, $volname) = parse_volume_id($volid);
-my $target_volname = $opts->{target_volname} || $volname;
 
 my $scfg = storage_config($cfg, $storeid);
 
@@ -587,6 +605,15 @@ sub storage_migrate {
 die "content type '$vtype' is not available on storage '$target_storeid'\n"
if !$tcfg->{content}->{$vtype};
 
+my $target_volname;
+if ($opts->{target_volname}) {
+   $target_volname = $opts->{target_volname};
+} elsif ($scfg->{type} eq $tcfg->{type}) {
+   $target_volname = $volname;
+} else {
+   $target_volname = $volname_for_storage->($cfg, $volid, $target_storeid);
+}
+
 my $target_volid = "${target_storeid}:${target_volname}";
 
 my $target_ip = $target_sshinfo->{ip};
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 storage 10/19] Introduce allow_rename parameter for pvesm import and storage_migrate

2020-04-08 Thread Fabian Ebner
and also return the ID of the allocated volume. This option
allows plugins to choose a new name if there is a collision.

In storage_migrate, the API version of the receiving side is checked.

In Storage.pm's volume_import, when a plugin returns 'undef',
it can be assumed that the import with the requested volid was
successful (it should've died otherwise) and so volid is returned.
This is done for backwards compatibility with foreign plugins.
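
A hedged sketch of that fallback in Storage.pm's volume_import wrapper
(@import_args stands in for the real argument list shown in the diff):

    my $new_volid = $plugin->volume_import(@import_args);
    # an old (pre-APIVER 5) plugin returns undef here, but would have
    # died on failure - so undef means the requested $volid was used
    return $new_volid // $volid;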

Signed-off-by: Fabian Ebner 
---
 PVE/CLI/pvesm.pm | 22 +++
 PVE/Storage.pm   | 55 +++-
 PVE/Storage/LVMPlugin.pm | 17 +++
 PVE/Storage/Plugin.pm| 16 +++
 PVE/Storage/ZFSPoolPlugin.pm | 13 +
 5 files changed, 88 insertions(+), 35 deletions(-)

diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index 0d1d816..30bdcf6 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -321,9 +321,16 @@ __PACKAGE__->register_method ({
maxLength => 80,
optional => 1,
},
+   'allow-rename' => {
+   description => "Choose a new volume ID if the requested " .
+ "volume ID already exists, instead of throwing an error.",
+   type => 'boolean',
+   optional => 1,
+   default => 0,
+   },
},
 },
-returns => { type => 'null' },
+returns => { type => 'string' },
 code => sub {
my ($param) = @_;
 
@@ -376,11 +383,11 @@ __PACKAGE__->register_method ({
my $cfg = PVE::Storage::config();
my $volume = $param->{volume};
my $delete = $param->{'delete-snapshot'};
-   PVE::Storage::volume_import($cfg, $infh, $volume, $param->{format},
-   $param->{base}, $param->{'with-snapshots'});
-   PVE::Storage::volume_snapshot_delete($cfg, $volume, $delete)
+   my $imported_volid = PVE::Storage::volume_import($cfg, $infh, $volume, 
$param->{format},
+   $param->{base}, $param->{'with-snapshots'}, 
$param->{'allow-rename'});
+   PVE::Storage::volume_snapshot_delete($cfg, $imported_volid, $delete)
if defined($delete);
-   return;
+   return $imported_volid;
 }
 });
 
@@ -801,7 +808,10 @@ our $cmddef = {
 path => [ __PACKAGE__, 'path', ['volume']],
 extractconfig => [__PACKAGE__, 'extractconfig', ['volume']],
 export => [ __PACKAGE__, 'export', ['volume', 'format', 'filename']],
-import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename']],
+import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename'], {}, 
sub  {
+   my $volid = shift;
+   print PVE::Storage::volume_imported_message($volid);
+}],
 apiinfo => [ __PACKAGE__, 'apiinfo', [], {}, sub {
my $res = shift;
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 7b285da..bdafdc1 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -39,11 +39,11 @@ use PVE::Storage::DRBDPlugin;
 use PVE::Storage::PBSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 4;
+use constant APIVER => 5;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see 
https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 3;
+use constant APIAGE => 4;
 
 # load standard plugins
 PVE::Storage::DirPlugin->register();
@@ -578,7 +578,7 @@ sub storage_migrate {
 my $scfg = storage_config($cfg, $storeid);
 
 # no need to migrate shared content
-return if $storeid eq $target_storeid && $scfg->{shared};
+return $volid if $storeid eq $target_storeid && $scfg->{shared};
 
 my $tcfg = storage_config($cfg, $target_storeid);
 
@@ -616,6 +616,11 @@ sub storage_migrate {
$import_fn = "tcp://$net";
 }
 
+my $target_apiver = 1; # if there is no apiinfo call, assume 1
+my $get_api_version = [@$ssh, 'pvesm', 'apiinfo'];
+my $match_api_version = sub { $target_apiver = $1 if $_[0] =~ m!^APIVER (\d+)$!; };
+eval { run_command($get_api_version, logfunc => $match_api_version); };
+
 my $send = ['pvesm', 'export', $volid, $format, '-', '-with-snapshots', $with_snapshots];
 my $recv = [@$ssh, '--', 'pvesm', 'import', $target_volid, $format, $import_fn, '-with-snapshots', $with_snapshots];
 if (defined($snapshot)) {
@@ -624,6 +629,7 @@ sub storage_migrate {
 if ($migration_snapshot) {
push @$recv, '-delete-snapshot', $snapshot;
 }
+push @$recv, '-allow-rename', $allow_rename if $target_apiver >= 5;
 
 if (defined($base_snapshot)) {
# Check if the snapshot exists on the remote side:
@@ -631,6 +637,19 @@ sub storage_migrate {
push @$recv, '-base', $base_snapshot;
 }
 
+my $new_volid;
+my $pattern = volume_imported_message(undef, 1);
+my $match_volid_and_log = sub {
+   my $line = shift;
+
+   $new_volid = $1 if ($line =~ 

[pve-devel] [PATCH v5 qemu-server 16/19] sync_disks: be more verbose if storage_migrate fails

2020-04-08 Thread Fabian Ebner
If storage_migrate dies, the error message might not include the
volume ID or the target storage ID, but those might be good to know.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a048fcb..5ed953a 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -557,8 +557,13 @@ sub sync_disks {
'allow_rename' => !$local_volumes->{$volid}->{is_vmstate},
};
 
-   my $new_volid = PVE::Storage::storage_migrate($storecfg, 
$volid, $self->{ssh_info},
- $targetsid, 
$storage_migrate_opts);
+   my $new_volid = eval {
+   PVE::Storage::storage_migrate($storecfg, $volid, 
$self->{ssh_info},
+ $targetsid, 
$storage_migrate_opts);
+   };
+   if (my $err = $@) {
+   die "storage migration for '$volid' to storage '$targetsid' 
failed - $err\n";
+   }
 
$self->{volume_map}->{$volid} = $new_volid;
$self->log('info', "volume '$volid' is '$new_volid' on the 
target\n");
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 15/19] sync_disks: use allow_rename to avoid collisions on the target storage

2020-04-08 Thread Fabian Ebner
This makes it possible to migrate a VM with volumes store1:vm-123-disk-0
and store2:vm-123-disk-0 to some targetstorage. It also prevents migration
failure when there is an orphaned disk with the same volid on the target.

To avoid confusion, the name should not change for 'vmstate' volumes.
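
For illustration (hedged): without renaming, the second of the two volumes
above would collide on the target as targetstorage:vm-123-disk-0; with
allow_rename the receiving side picks a free name such as
targetstorage:vm-123-disk-1 and reports it back.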

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 3 +++
 PVE/QemuServer.pm  | 1 +
 2 files changed, 4 insertions(+)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index eb78537..a048fcb 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -389,6 +389,8 @@ sub sync_disks {
 
$local_volumes->{$volid}->{ref} = $attr->{referenced_in_config} ? 
'config' : 'snapshot';
 
+   $local_volumes->{$volid}->{is_vmstate} = $attr->{is_vmstate} ? 1 : 
0;
+
if ($attr->{cdrom}) {
if ($volid =~ /vm-\d+-cloudinit/) {
$local_volumes->{$volid}->{ref} = 'generated';
@@ -552,6 +554,7 @@ sub sync_disks {
'bwlimit' => $bwlimit,
'insecure' => $opts->{migration_type} eq 'insecure',
'with_snapshots' => $local_volumes->{$volid}->{snapshots},
+   'allow_rename' => !$local_volumes->{$volid}->{is_vmstate},
};
 
my $new_volid = PVE::Storage::storage_migrate($storecfg, 
$volid, $self->{ssh_info},
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index cd34bf6..0f962dd 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4325,6 +4325,7 @@ sub foreach_volid {
 foreach my $snapname (keys %{$conf->{snapshots}}) {
my $snap = $conf->{snapshots}->{$snapname};
$test_volid->($snap->{vmstate}, 0, 1, $snapname);
+   $volhash->{$snap->{vmstate}}->{is_vmstate} = 1 if $snap->{vmstate};
PVE::QemuConfig->foreach_volume($snap, sub {
my ($ds, $drive) = @_;
$test_volid->($drive->{file}, drive_is_cdrom($drive), 
$drive->{replicate} // 1, $drive->{shared}, $snapname);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 storage 09/19] Add apiinfo helper to pvesm

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
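A hedged example of the output once APIVER/APIAGE are bumped later in
this series (to 5 and 4, respectively):

    # pvesm apiinfo
    APIVER 5
    APIAGE 4
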
 PVE/CLI/pvesm.pm | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index 510faba..0d1d816 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -47,6 +47,30 @@ sub setup_environment {
 PVE::RPCEnvironment->setup_default_cli_env();
 }
 
+__PACKAGE__->register_method ({
+name => 'apiinfo',
+path => 'apiinfo',
+method => 'GET',
+description => "Returns APIVER and APIAGE.",
+parameters => {
+   additionalProperties => 0,
+   properties => {},
+},
+returns => {
+   type => 'object',
+   properties => {
+   apiver => { type => 'integer' },
+   apiage => { type => 'integer' },
+   },
+},
+code => sub {
+   return {
+   apiver => PVE::Storage::APIVER,
+   apiage => PVE::Storage::APIAGE,
+   };
+}
+});
+
 __PACKAGE__->register_method ({
 name => 'path',
 path => 'path',
@@ -778,6 +802,12 @@ our $cmddef = {
 extractconfig => [__PACKAGE__, 'extractconfig', ['volume']],
 export => [ __PACKAGE__, 'export', ['volume', 'format', 'filename']],
 import => [ __PACKAGE__, 'import', ['volume', 'format', 'filename']],
+apiinfo => [ __PACKAGE__, 'apiinfo', [], {}, sub {
+   my $res = shift;
+
+   print "APIVER $res->{apiver}\n";
+   print "APIAGE $res->{apiage}\n";
+}],
 };
 
 1;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 guest-common 05/19] Use new storage_migrate interface

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/Replication.pm | 12 ++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index ae1ade4..5b1e917 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -186,8 +186,16 @@ sub replicate_volume {
 my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
 
 my $ratelimit_bps = int(100*$rate) if $rate;
-PVE::Storage::storage_migrate($storecfg, $volid, $ssh_info, $storeid, 
$volname,
- $base_snapshot, $sync_snapname, 
$ratelimit_bps, $insecure, 1, $logfunc);
+my $opts = {
+   'target_volname' => $volname,
+   'base_snapshot' => $base_snapshot,
+   'snapshot' => $sync_snapname,
+   'ratelimit_bps' => $ratelimit_bps,
+   'insecure' => $insecure,
+   'with_snapshots' => 1,
+};
+
+PVE::Storage::storage_migrate($storecfg, $volid, $ssh_info, $storeid, 
$opts, $logfunc);
 }
 
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 storage 04/19] Collect optional parameters for storage_migrate into $opts

2020-04-08 Thread Fabian Ebner
Sanitizing $with_snapshots is done on extraction to save a line.

Signed-off-by: Fabian Ebner 
---
 PVE/API2/Storage/Content.pm |  2 +-
 PVE/Storage.pm  | 12 +---
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Storage/Content.pm b/PVE/API2/Storage/Content.pm
index 63fa4fc..f2e3e57 100644
--- a/PVE/API2/Storage/Content.pm
+++ b/PVE/API2/Storage/Content.pm
@@ -418,7 +418,7 @@ __PACKAGE__->register_method ({
# you need to get this working (fails currently, because 
storage_migrate() uses
# ssh to connect to local host (which is not needed
my $sshinfo = PVE::SSHInfo::get_ssh_info($target_node);
-   PVE::Storage::storage_migrate($cfg, $src_volid, $sshinfo, 
$target_sid, $target_volname);
+   PVE::Storage::storage_migrate($cfg, $src_volid, $sshinfo, 
$target_sid, {'target_volname' => $target_volname});
 
print "DEBUG: end worker $upid\n";
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 60b8310..7b285da 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -563,10 +563,17 @@ sub abs_filesystem_path {
 }
 
 sub storage_migrate {
-my ($cfg, $volid, $target_sshinfo, $target_storeid, $target_volname, 
$base_snapshot, $snapshot, $ratelimit_bps, $insecure, $with_snapshots, 
$logfunc) = @_;
+my ($cfg, $volid, $target_sshinfo, $target_storeid, $opts, $logfunc) = @_;
+
+my $base_snapshot = $opts->{base_snapshot};
+my $snapshot = $opts->{snapshot};
+my $ratelimit_bps = $opts->{ratelimit_bps};
+my $insecure = $opts->{insecure};
+my $with_snapshots = $opts->{with_snapshots} ? 1 : 0;
+my $allow_rename = $opts->{allow_rename} ? 1 : 0;
 
 my ($storeid, $volname) = parse_volume_id($volid);
-$target_volname = $volname if !$target_volname;
+my $target_volname = $opts->{target_volname} || $volname;
 
 my $scfg = storage_config($cfg, $storeid);
 
@@ -609,7 +616,6 @@ sub storage_migrate {
$import_fn = "tcp://$net";
 }
 
-$with_snapshots = $with_snapshots ? 1 : 0; # sanitize for passing as cli parameter
 my $send = ['pvesm', 'export', $volid, $format, '-', '-with-snapshots', $with_snapshots];
 my $recv = [@$ssh, '--', 'pvesm', 'import', $target_volid, $format, $import_fn, '-with-snapshots', $with_snapshots];
 if (defined($snapshot)) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 13/19] Allow specifying targetstorage for offline migration

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/API2/Qemu.pm | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 87fbd72..ec4c18c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3455,9 +3455,6 @@ __PACKAGE__->register_method({
$param->{online} = 0;
}
 
-   raise_param_exc({ targetstorage => "Live storage migration can only be 
done online." })
-   if !$param->{online} && $param->{targetstorage};
-
my $storecfg = PVE::Storage::config();
 
if (my $targetstorage = $param->{targetstorage}) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 12/19] Take note of changes to the volume IDs when migrating and update the config

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
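Hedged illustration of the volume_map structure this patch introduces
(volume IDs made up):

    $self->{volume_map} = {
        'source-store:vm-123-disk-0' => 'target-store:vm-123-disk-0',
        'source-store:vm-123-disk-1' => 'target-store:vm-123-disk-2',
    };
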
 PVE/QemuMigrate.pm | 22 +++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0bcbd04..14f3fcb 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -290,7 +290,9 @@ sub sync_disks {
 my $conf = $self->{vmconf};
 
 # local volumes which have been copied
+# and their old_id => new_id pairs
 $self->{volumes} = [];
+$self->{volume_map} = {};
 
 my $storecfg = $self->{storecfg};
 eval {
@@ -552,8 +554,11 @@ sub sync_disks {
'with_snapshots' => $local_volumes->{$volid}->{snapshots},
};
 
-   PVE::Storage::storage_migrate($storecfg, $volid, 
$self->{ssh_info},
- $targetsid, 
$storage_migrate_opts);
+   my $new_volid = PVE::Storage::storage_migrate($storecfg, 
$volid, $self->{ssh_info},
+ $targetsid, 
$storage_migrate_opts);
+
+   $self->{volume_map}->{$volid} = $new_volid;
+   $self->log('info', "volume '$volid' is '$new_volid' on the 
target\n");
}
}
 };
@@ -1109,6 +1114,13 @@ sub phase3_cleanup {
 
 my $tunnel = $self->{tunnel};
 
+# Needs to happen before printing the drives that where migrated via qemu,
+# since a new volume on the target might have the same ID as an old volume
+# on the source and update_volume_ids relies on matching IDs in the config.
+if ($self->{volume_map}) {
+   PVE::QemuConfig->update_volume_ids($conf, $self->{volume_map});
+}
+
 if ($self->{storage_migration}) {
# finish block-job with block-job-cancel, to disconnect source VM from 
NBD
# to avoid it trying to re-establish it. We are in blockjob ready state,
@@ -1123,11 +1135,15 @@ sub phase3_cleanup {
foreach my $target_drive (keys %{$self->{target_drive}}) {
my $drive = PVE::QemuServer::parse_drive($target_drive, 
$self->{target_drive}->{$target_drive}->{drivestr});
$conf->{$target_drive} = PVE::QemuServer::print_drive($drive);
-   PVE::QemuConfig->write_config($vmid, $conf);
}
}
 }
 
+# only write after successfully completing the migration
+if ($self->{volume_map} || $self->{storage_migration}) {
+   PVE::QemuConfig->write_config($vmid, $conf);
+}
+
 # transfer replication state before move config
 $self->transfer_replication_state() if $self->{replicated_volumes};
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 17/19] sync_disks: log output of storage_migrate

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---

Not sure about this one. On the one hand it adds even more to the
migration logs, which are already rather long. On the other hand it
might contain useful information.

 PVE/QemuMigrate.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5ed953a..0a6277d 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -557,9 +557,10 @@ sub sync_disks {
'allow_rename' => !$local_volumes->{$volid}->{is_vmstate},
};
 
+   my $logfunc = sub { $self->log('info', $_[0]); };
my $new_volid = eval {
PVE::Storage::storage_migrate($storecfg, $volid, 
$self->{ssh_info},
- $targetsid, 
$storage_migrate_opts);
+ $targetsid, 
$storage_migrate_opts, $logfunc);
};
if (my $err = $@) {
die "storage migration for '$volid' to storage '$targetsid' 
failed - $err\n";
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 14/19] Update volume IDs in one go

2020-04-08 Thread Fabian Ebner
Use 'update_volume_ids' for the live-migrated disks as well.

Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 26 ++
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 14f3fcb..eb78537 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -838,14 +838,20 @@ sub phase2 {
my $source_drive = PVE::QemuServer::parse_drive($drive, 
$conf->{$drive});
my $target_drive = PVE::QemuServer::parse_drive($drive, 
$target->{drivestr});
 
-   my $source_sid = 
PVE::Storage::Plugin::parse_volume_id($source_drive->{file});
-   my $target_sid = 
PVE::Storage::Plugin::parse_volume_id($target_drive->{file});
+   my $source_volid = $source_drive->{file};
+   my $target_volid = $target_drive->{file};
+
+   my $source_sid = 
PVE::Storage::Plugin::parse_volume_id($source_volid);
+   my $target_sid = 
PVE::Storage::Plugin::parse_volume_id($target_volid);
 
my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', 
[$source_sid, $target_sid], $opt_bwlimit);
my $bitmap = $target->{bitmap};
 
$self->log('info', "$drive: start migration to $nbd_uri");
PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, 
undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
+
+   $self->{volume_map}->{$source_volid} = $target_volid;
+   $self->log('info', "volume '$source_volid' is '$target_volid' on 
the target\n");
}
 }
 
@@ -1114,13 +1120,6 @@ sub phase3_cleanup {
 
 my $tunnel = $self->{tunnel};
 
-# Needs to happen before printing the drives that where migrated via qemu,
-# since a new volume on the target might have the same ID as an old volume
-# on the source and update_volume_ids relies on matching IDs in the config.
-if ($self->{volume_map}) {
-   PVE::QemuConfig->update_volume_ids($conf, $self->{volume_map});
-}
-
 if ($self->{storage_migration}) {
# finish block-job with block-job-cancel, to disconnect source VM from 
NBD
# to avoid it trying to re-establish it. We are in blockjob ready state,
@@ -1131,16 +1130,11 @@ sub phase3_cleanup {
eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, 
$self->{storage_migration_jobs}) };
eval { PVE::QemuMigrate::cleanup_remotedisks($self) };
die "Failed to complete storage migration: $err\n";
-   } else {
-   foreach my $target_drive (keys %{$self->{target_drive}}) {
-   my $drive = PVE::QemuServer::parse_drive($target_drive, 
$self->{target_drive}->{$target_drive}->{drivestr});
-   $conf->{$target_drive} = PVE::QemuServer::print_drive($drive);
-   }
}
 }
 
-# only write after successfully completing the migration
-if ($self->{volume_map} || $self->{storage_migration}) {
+if ($self->{volume_map}) {
+   PVE::QemuConfig->update_volume_ids($conf, $self->{volume_map});
PVE::QemuConfig->write_config($vmid, $conf);
 }
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 11/19] Allow parsing vmstate entries

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/QemuConfig.pm | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 78f5c60..7874081 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::AbstractConfig;
 use PVE::INotify;
+use PVE::JSONSchema;
 use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers;
@@ -90,7 +91,17 @@ sub valid_volume_keys {
 sub parse_volume {
 my ($class, $key, $volume_string, $noerr) = @_;
 
-my $volume = PVE::QemuServer::Drive::parse_drive($key, $volume_string);
+my $volume;
+if ($key eq 'vmstate') {
+   eval { PVE::JSONSchema::check_format('pve-volume-id', $volume_string) };
+   if (my $err = $@) {
+   return undef if $noerr;
+   die $err;
+   }
+   $volume = { 'file' => $volume_string };
+} else {
+   $volume = PVE::QemuServer::Drive::parse_drive($key, $volume_string);
+}
 
 die "unable to parse volume\n" if !defined($volume) && !$noerr;
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH-SERIES v5] Make storage migration more flexible

2020-04-08 Thread Fabian Ebner
Previous discussion here: [0].

This series aims to allow offline migration with '--targetstorage'
and to improve the handling of unused/orphaned disks when migrating.
It also makes it possible to migrate volumes between storages with
a 'path' and storages without one, if the target storage uses the
standard naming scheme and the source format is supported (e.g.
migrating raw volumes between directory-based and lvm storages).

It also adds an apiinfo call to pvesm that can be used to determine
APIVER and APIAGE of the remote node and does some general refactoring
regarding volume iterators.


The series is organised as follows:

#1-#3 make use of volume-related helpers
These are not needed for the rest of the series.

#4-#7 storage_migrate interface change
Dependencies: qemu-server,guest-common,container -> storage

#8-#15 do the work to allow offline --targetstorage and to avoid collisions
#14 is optional, but consolidates updating the volume IDs in the config
Dependencies: qemu-server -> guest-common,storage

#16-#19 further improvements
#19 makes migrating raw between dir and lvm possible


Changes from v4:
* rebased remaining patches on current master
  which made three non-trivial changes necessary:
* replaced a newly introduced foreach_drive
* increased storage API bump
* adapted check in cleanup_remotedisks()


[0]: https://pve.proxmox.com/pipermail/pve-devel/2020-March/042587.html


qemu-server:

Fabian Ebner (10):
  Switch to using foreach_volume instead of foreach_drive
  Use new storage_migrate interface
  Allow parsing vmstate entries
  Take note of changes to the volume IDs when migrating and update the
config
  Allow specifying targetstorage for offline migration
  Update volume IDs in one go
  sync_disks: use allow_rename to avoid collisions on the target storage
  sync_disks: be more verbose if storage_migrate fails
  sync_disks: log output of storage_migrate
  cleanup_remotedisks: also include those migrated with storage_migrate

 PVE/API2/Qemu.pm|  9 ++--
 PVE/QemuConfig.pm   | 15 ++-
 PVE/QemuMigrate.pm  | 82 +++--
 PVE/QemuServer.pm   | 71 ++--
 PVE/QemuServer/Cloudinit.pm |  2 +-
 PVE/QemuServer/Drive.pm | 61 ---
 PVE/VZDump/QemuServer.pm|  3 +-
 7 files changed, 138 insertions(+), 105 deletions(-)


container:

Fabian Ebner (3):
  Use foreach_volume instead of foreach_mountpoint-variants
  Use parse_volume instead of parse_ct-variants
  Use new storage_migrate interface

 src/PVE/API2/LXC.pm| 19 +++
 src/PVE/API2/LXC/Config.pm |  2 +-
 src/PVE/API2/LXC/Status.pm |  2 +-
 src/PVE/CLI/pct.pm |  7 ++--
 src/PVE/LXC.pm | 21 ++--
 src/PVE/LXC/Config.pm  | 69 +-
 src/PVE/LXC/Create.pm  |  4 +--
 src/PVE/LXC/Migrate.pm | 16 +
 src/PVE/VZDump/LXC.pm  |  2 +-
 src/lxc-pve-prestart-hook  |  2 +-
 10 files changed, 49 insertions(+), 95 deletions(-)


storage:

Fabian Ebner (4):
  Collect optional parameters for storage_migrate into $opts
  Add apiinfo helper to pvesm
  Introduce allow_rename parameter for pvesm import and storage_migrate
  storage_migrate: add volname_for_storage helper

 PVE/API2/Storage/Content.pm  |  2 +-
 PVE/CLI/pvesm.pm | 52 +---
 PVE/Storage.pm   | 94 ++--
 PVE/Storage/LVMPlugin.pm | 17 ---
 PVE/Storage/Plugin.pm| 16 --
 PVE/Storage/ZFSPoolPlugin.pm | 13 +++--
 6 files changed, 155 insertions(+), 39 deletions(-)


guest-common:

Fabian Ebner (2):
  Use new storage_migrate interface
  Add update_volume_ids

 PVE/AbstractConfig.pm | 30 ++
 PVE/Replication.pm| 12 ++--
 2 files changed, 40 insertions(+), 2 deletions(-)
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v5 qemu-server 01/19] Switch to using foreach_volume instead of foreach_drive

2020-04-08 Thread Fabian Ebner
It was necessary to move foreach_volid back to QemuServer.pm

In VZDump/QemuServer.pm and QemuMigrate.pm the dependency on
QemuConfig.pm was already there, just the explicit "use" was missing.

Signed-off-by: Fabian Ebner 
---
 PVE/API2/Qemu.pm|  6 ++--
 PVE/QemuConfig.pm   |  2 +-
 PVE/QemuMigrate.pm  |  5 +--
 PVE/QemuServer.pm   | 70 ++---
 PVE/QemuServer/Cloudinit.pm |  2 +-
 PVE/QemuServer/Drive.pm | 61 
 PVE/VZDump/QemuServer.pm|  3 +-
 7 files changed, 68 insertions(+), 81 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index b220934..87fbd72 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -63,7 +63,7 @@ my $NEW_DISK_RE = qr!^(([^/:\s]+):)?(\d+(\.\d+)?)$!;
 my $check_storage_access = sub {
my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = @_;
 
-   PVE::QemuServer::foreach_drive($settings, sub {
+   PVE::QemuConfig->foreach_volume($settings, sub {
my ($ds, $drive) = @_;
 
my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
@@ -96,7 +96,7 @@ my $check_storage_access_clone = sub {
 
my $sharedvm = 1;
 
-   PVE::QemuServer::foreach_drive($conf, sub {
+   PVE::QemuConfig->foreach_volume($conf, sub {
my ($ds, $drive) = @_;
 
my $isCDROM = PVE::QemuServer::drive_is_cdrom($drive);
@@ -216,7 +216,7 @@ my $create_disks = sub {
}
 };
 
-eval { PVE::QemuServer::foreach_drive($settings, $code); };
+eval { PVE::QemuConfig->foreach_volume($settings, $code); };
 
 # free allocated images on error
 if (my $err = $@) {
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index d29b88b..78f5c60 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -66,7 +66,7 @@ sub has_feature {
my ($class, $feature, $conf, $storecfg, $snapname, $running, $backup_only) = @_;
 
 my $err;
-PVE::QemuServer::foreach_drive($conf, sub {
+$class->foreach_volume($conf, sub {
my ($ds, $drive) = @_;
 
return if PVE::QemuServer::drive_is_cdrom($drive);
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index e83d60d..2925b90 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -17,6 +17,7 @@ use PVE::ReplicationState;
 use PVE::Storage;
 use PVE::Tools;
 
+use PVE::QemuConfig;
 use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
 use PVE::QemuServer::Helpers qw(min_version);
@@ -477,7 +478,7 @@ sub sync_disks {
}
 
my $live_replicatable_volumes = {};
-   PVE::QemuServer::foreach_drive($conf, sub {
+   PVE::QemuConfig->foreach_volume($conf, sub {
my ($ds, $drive) = @_;
 
my $volid = $drive->{file};
@@ -508,7 +509,7 @@ sub sync_disks {
# sizes in config have to be accurate for remote node to correctly
# allocate disks, rescan to be sure
my $volid_hash = PVE::QemuServer::scan_volids($storecfg, $vmid);
-   PVE::QemuServer::foreach_drive($conf, sub {
+   PVE::QemuConfig->foreach_volume($conf, sub {
my ($key, $drive) = @_;
my ($updated, $old_size, $new_size) = PVE::QemuServer::Drive::update_disksize($drive, $volid_hash);
if (defined($updated)) {
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 12f76b8..cd34bf6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -44,7 +44,7 @@ use PVE::QemuConfig;
 use PVE::QemuServer::Helpers qw(min_version config_aware_timeout);
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CPUConfig qw(print_cpu_device get_cpu_options);
-use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit drive_is_cdrom parse_drive print_drive foreach_drive foreach_volid);
+use PVE::QemuServer::Drive qw(is_valid_drivename drive_is_cloudinit drive_is_cdrom parse_drive print_drive);
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Memory;
 use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -2037,7 +2037,7 @@ sub destroy_vm {
 
 if ($conf->{template}) {
# check if any base image is still used by a linked clone
-   foreach_drive($conf, sub {
+   PVE::QemuConfig->foreach_volume($conf, sub {
my ($ds, $drive) = @_;
return if drive_is_cdrom($drive);
 
@@ -2051,7 +2051,7 @@ sub destroy_vm {
 }
 
 # only remove disks owned by this VM
-foreach_drive($conf, sub {
+PVE::QemuConfig->foreach_volume($conf, sub {
my ($ds, $drive) = @_;
return if drive_is_cdrom($drive, 1);
 
@@ -2340,7 +2340,7 @@ sub check_local_resources {
 sub check_storage_availability {
 my ($storecfg, $conf, $node) = @_;
 
-foreach_drive($conf, sub {
+PVE::QemuConfig->foreach_volume($conf, sub {
my ($ds, $drive) = @_;
 
my $volid = $drive->{file};
@@ -2363,7 +2363,7 @@ sub shared_nodes {
 my $nodehash = { map { $_ => 1 } @$nodelist };
 my $nodename = nodename();
 
-foreach_drive($conf, sub {
+PVE::QemuConfig->foreach_volume($conf, sub {

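In short, only the invocation changes; the callback keeps its two-argument signature. A minimal sketch of the new call style, assuming a loaded $conf hash as elsewhere in this patch:

    # old: exported function from PVE::QemuServer::Drive
    #   PVE::QemuServer::foreach_drive($conf, $func);
    # new: method resolved through the config class
    PVE::QemuConfig->foreach_volume($conf, sub {
        my ($ds, $drive) = @_;    # $ds is the config key, e.g. 'scsi0'
        return if PVE::QemuServer::drive_is_cdrom($drive);
        # ... per-drive work ...
    });
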
[pve-devel] [PATCH v5 container 03/19] Use parse_volume instead of parse_ct-variants

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 src/PVE/API2/LXC.pm   | 15 +++
 src/PVE/CLI/pct.pm    |  3 +--
 src/PVE/LXC.pm        |  7 +++
 src/PVE/LXC/Config.pm | 34 ++
 4 files changed, 17 insertions(+), 42 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 58f38f8..3a8694a 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1391,9 +1391,7 @@ __PACKAGE__->register_method({
my $value = $src_conf->{$opt};
 
if (($opt eq 'rootfs') || ($opt =~ m/^mp\d+$/)) {
-   my $mp = $opt eq 'rootfs' ?
-   PVE::LXC::Config->parse_ct_rootfs($value) :
-   PVE::LXC::Config->parse_ct_mountpoint($value);
+   my $mp = PVE::LXC::Config->parse_volume($opt, $value);
 
if ($mp->{type} eq 'volume') {
my $volid = $mp->{volume};
@@ -1623,8 +1621,7 @@ __PACKAGE__->register_method({
my $running = PVE::LXC::check_running($vmid);
 
my $disk = $param->{disk};
-   my $mp = $disk eq 'rootfs' ? PVE::LXC::Config->parse_ct_rootfs($conf->{$disk}) :
-   PVE::LXC::Config->parse_ct_mountpoint($conf->{$disk});
+   my $mp = PVE::LXC::Config->parse_volume($disk, $conf->{$disk});
 
my $volid = $mp->{volume};
 
@@ -1786,13 +1783,7 @@ __PACKAGE__->register_method({
 
die "cannot move volumes of a running container\n" if 
PVE::LXC::check_running($vmid);
 
-   if ($mpkey eq 'rootfs') {
-   $mpdata = PVE::LXC::Config->parse_ct_rootfs($conf->{$mpkey});
-   } elsif ($mpkey =~ m/mp\d+/) {
-   $mpdata = PVE::LXC::Config->parse_ct_mountpoint($conf->{$mpkey});
-   } else {
-   die "Can't parse $mpkey\n";
-   }
+   $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
$old_volid = $mpdata->{volume};
 
die "you can't move a volume with snapshots and delete the source\n"
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 33f564f..54bd7a7 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -233,8 +233,7 @@ __PACKAGE__->register_method ({
 
defined($conf->{$device}) || die "cannot run command on non-existing mount point $device\n";

-   my $mount_point = $device eq 'rootfs' ? PVE::LXC::Config->parse_ct_rootfs($conf->{$device}) :
-   PVE::LXC::Config->parse_ct_mountpoint($conf->{$device});
+   my $mount_point = PVE::LXC::Config->parse_volume($device, $conf->{$device});
 
die "cannot run fsck when container is running\n"
if PVE::LXC::check_running($vmid);
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 06524ef..9adb366 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -206,7 +206,7 @@ sub vmstatus {
$d->{disk} = 0;
# use 4GB by default ??
if (my $rootfs = $conf->{rootfs}) {
-   my $rootinfo = PVE::LXC::Config->parse_ct_rootfs($rootfs);
+   my $rootinfo = PVE::LXC::Config->parse_volume('rootfs', $rootfs);
$d->{maxdisk} = $rootinfo->{size} || (4*1024*1024*1024);
} else {
$d->{maxdisk} = 4*1024*1024*1024;
@@ -687,7 +687,7 @@ sub update_lxc_config {
 die "missing 'rootfs' configuration\n"
if !defined($conf->{rootfs});
 
-my $mountpoint = PVE::LXC::Config->parse_ct_rootfs($conf->{rootfs});
+my $mountpoint = PVE::LXC::Config->parse_volume('rootfs', $conf->{rootfs});
 
 $raw .= "lxc.rootfs.path = $dir/rootfs\n";
 
@@ -1199,8 +1199,7 @@ sub check_ct_modify_config_perm {
} elsif ($opt eq 'rootfs' || $opt =~ /^mp\d+$/) {
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
return if $delete;
-   my $data = $opt eq 'rootfs' ? PVE::LXC::Config->parse_ct_rootfs($newconf->{$opt})
-   : PVE::LXC::Config->parse_ct_mountpoint($newconf->{$opt});
+   my $data = PVE::LXC::Config->parse_volume($opt, $newconf->{$opt});
raise_perm_exc("mount point type $data->{type} is only allowed for root\@pam")
if $data->{type} ne 'volume';
} elsif ($opt eq 'memory' || $opt eq 'swap') {
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index e613e78..e7fe99b 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -129,7 +129,7 @@ sub __snapshot_delete_remove_drive {
die "implement me - saving vmstate\n";
 } else {
my $value = $snap->{$remove_drive};
-   my $mountpoint = $remove_drive eq 'rootfs' ? $class->parse_ct_rootfs($value, 1) : $class->parse_ct_mountpoint($value, 1);
+   my $mountpoint = $class->parse_volume($remove_drive, $value, 1);
delete $snap->{$remove_drive};
 
$class->add_unused_volume($snap, $mountpoint->{volume})
@@ -956,7 +956,7 @@ sub 
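
The whole patch reduces to one pattern; a sketch with an illustrative $opt/$conf pair, as used in the hunks above:

    # one accessor instead of the parse_ct_rootfs/parse_ct_mountpoint split;
    # works for the 'rootfs' key as well as 'mpN' keys:
    my $mp = PVE::LXC::Config->parse_volume($opt, $conf->{$opt});
    my $volid = $mp->{volume};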

[pve-devel] [PATCH v5 container 02/19] Use foreach_volume instead of foreach_mountpoint-variants

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 src/PVE/API2/LXC.pm        |  4 ++--
 src/PVE/API2/LXC/Config.pm |  2 +-
 src/PVE/API2/LXC/Status.pm |  2 +-
 src/PVE/CLI/pct.pm         |  4 ++--
 src/PVE/LXC.pm             | 14 +++---
 src/PVE/LXC/Config.pm      | 35 +--
 src/PVE/LXC/Create.pm      |  4 ++--
 src/PVE/LXC/Migrate.pm     |  6 +++---
 src/PVE/VZDump/LXC.pm      |  2 +-
 src/lxc-pve-prestart-hook  |  2 +-
 10 files changed, 25 insertions(+), 50 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index f4c1a49..58f38f8 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -316,7 +316,7 @@ __PACKAGE__->register_method({
 
# check storage access, activate storage
my $delayed_mp_param = {};
-   PVE::LXC::Config->foreach_mountpoint($mp_param, sub {
+   PVE::LXC::Config->foreach_volume($mp_param, sub {
my ($ms, $mountpoint) = @_;
 
my $volid = $mountpoint->{volume};
@@ -371,7 +371,7 @@ __PACKAGE__->register_method({
$mp_param = $orig_mp_param;
die "rootfs configuration could not be recovered, 
please check and specify manually!\n"
if !defined($mp_param->{rootfs});
-   PVE::LXC::Config->foreach_mountpoint($mp_param, sub {
+   PVE::LXC::Config->foreach_volume($mp_param, sub {
my ($ms, $mountpoint) = @_;
my $type = $mountpoint->{type};
if ($type eq 'volume') {
diff --git a/src/PVE/API2/LXC/Config.pm b/src/PVE/API2/LXC/Config.pm
index 0879172..42e16d1 100644
--- a/src/PVE/API2/LXC/Config.pm
+++ b/src/PVE/API2/LXC/Config.pm
@@ -173,7 +173,7 @@ __PACKAGE__->register_method({
my $repl_conf = PVE::ReplicationConfig->new();
my $is_replicated = $repl_conf->check_for_existing_jobs($vmid, 1);
if ($is_replicated) {
-   PVE::LXC::Config->foreach_mountpoint_full($param, 0, sub {
+   PVE::LXC::Config->foreach_volume($param, sub {
my ($opt, $mountpoint) = @_;
my $volid = $mountpoint->{volume};
return if !$volid || !($mountpoint->{replicate}//1);
diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
index 41f1f4f..03d13a3 100644
--- a/src/PVE/API2/LXC/Status.pm
+++ b/src/PVE/API2/LXC/Status.pm
@@ -182,7 +182,7 @@ __PACKAGE__->register_method({
}
 
if ($conf->{unprivileged}) {
-   PVE::LXC::Config->foreach_mountpoint($conf, sub {
+   PVE::LXC::Config->foreach_volume($conf, sub {
my ($ms, $mountpoint) = @_;
die "Quotas are not supported by unprivileged 
containers.\n" if $mountpoint->{quota};
});
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 95c6921..33f564f 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -379,7 +379,7 @@ __PACKAGE__->register_method({
my @len = map { length($_) } @{$list[0]};
 
eval {
-   PVE::LXC::Config->foreach_mountpoint($conf, sub {
+   PVE::LXC::Config->foreach_volume($conf, sub {
my ($name, $mp) = @_;
my $path = $mp->{mp};
 
@@ -783,7 +783,7 @@ __PACKAGE__->register_method ({
eval {
my $path = "";
PVE::LXC::mount_all($vmid, $storecfg, $conf);
-   PVE::LXC::Config->foreach_mountpoint($conf, sub {
+   PVE::LXC::Config->foreach_volume($conf, sub {
my ($name, $mp) = @_;
$path = $mp->{mp};
my $cmd = ["fstrim", "-v", "$rootdir$path"];
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index b4ffc9b..06524ef 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -832,7 +832,7 @@ sub delete_mountpoint_volume {
 sub destroy_lxc_container {
 my ($storage_cfg, $vmid, $conf, $replacement_conf) = @_;
 
-PVE::LXC::Config->foreach_mountpoint($conf, sub {
+PVE::LXC::Config->foreach_volume($conf, sub {
my ($ms, $mountpoint) = @_;
delete_mountpoint_volume($storage_cfg, $vmid, $mountpoint->{volume});
 });
@@ -1163,7 +1163,7 @@ sub template_create {
 
 my $storecfg = PVE::Storage::config();
 
-PVE::LXC::Config->foreach_mountpoint($conf, sub {
+PVE::LXC::Config->foreach_volume($conf, sub {
my ($ms, $mountpoint) = @_;
 
my $volid = $mountpoint->{volume};
@@ -1172,7 +1172,7 @@ sub template_create {
if !PVE::Storage::volume_has_feature($storecfg, 'template', $volid);
 });
 
-PVE::LXC::Config->foreach_mountpoint($conf, sub {
+PVE::LXC::Config->foreach_volume($conf, sub {
my ($ms, $mountpoint) = @_;
 
my $volid = $mountpoint->{volume};
@@ -1237,7 +1237,7 @@ sub umount_all {
 
 my $res = 1;
 
-PVE::LXC::Config->foreach_mountpoint_reverse($conf, sub {
+

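The same move as in the qemu-server patch: the foreach_mountpoint, foreach_mountpoint_full and foreach_mountpoint_reverse variants all funnel into one shared iterator. A minimal sketch, with names as in the hunks above:

    PVE::LXC::Config->foreach_volume($conf, sub {
        my ($ms, $mountpoint) = @_;    # $ms is the key, e.g. 'rootfs' or 'mp0'
        # ... per-mount-point work on $mountpoint->{volume} ...
    });
    # the positional flags of the old _full variant move into the options
    # hash taken by foreach_volume_full (see the guest-common patch below)
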
[pve-devel] [PATCH v5 guest-common 08/19] Add update_volume_ids

2020-04-08 Thread Fabian Ebner
This function is intended to be used after doing a migration where some
of the volume IDs changed.

Signed-off-by: Fabian Ebner 
---
 PVE/AbstractConfig.pm | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 88522f8..beb10c7 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -466,6 +466,36 @@ sub foreach_volume {
 return $class->foreach_volume_full($conf, undef, $func, @param);
 }
 
+# $volume_map is a hash of 'old_volid' => 'new_volid' pairs.
+# This method replaces 'old_volid' by 'new_volid' throughout
+# the config including snapshots and unused and vmstate volumes
+sub update_volume_ids {
+my ($class, $conf, $volume_map) = @_;
+
+my $volid_key = $class->volid_key();
+
+my $do_replace = sub {
+   my ($key, $volume, $current_section) = @_;
+
+   my $old_volid = $volume->{$volid_key};
+   if (my $new_volid = $volume_map->{$old_volid}) {
+   $volume->{$volid_key} = $new_volid;
+   $current_section->{$key} = $class->print_volume($key, $volume);
+   }
+};
+
+my $opts = {
+   'include_unused' => 1,
+   'extra_keys' => ['vmstate'],
+};
+
+$class->foreach_volume_full($conf, $opts, $do_replace, $conf);
+foreach my $snap (keys %{$conf->{snapshots}}) {
+   my $snap_conf = $conf->{snapshots}->{$snap};
+   $class->foreach_volume_full($snap_conf, $opts, $do_replace, $snap_conf);
+}
+}
+
 # Returns whether the template parameter is set in $conf.
 sub is_template {
 my ($class, $conf) = @_;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
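
A hypothetical usage sketch of the new helper; the volume IDs and storage names here are made up, the behaviour is as described in the commit message and comment above:

    # after a migration moved a disk from 'local' to 'newstore':
    my $volume_map = {
        'local:103/vm-103-disk-0.raw' => 'newstore:103/vm-103-disk-0.raw',
    };
    PVE::QemuConfig->update_volume_ids($conf, $volume_map);
    # the current config, all snapshots, unused volumes and the
    # vmstate entry now reference the new volume ID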


[pve-devel] [PATCH v5 container 07/19] Use new storage_migrate interface

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 src/PVE/LXC/Migrate.pm | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 638ce1f..d0be6d4 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -278,17 +278,21 @@ sub phase1 {
 }
 
 my $opts = $self->{opts};
-my $insecure = $opts->{migration_type} eq 'insecure';
 foreach my $volid (keys %$volhash) {
next if $rep_volumes->{$volid};
my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);
push @{$self->{volumes}}, $volid;
-   my $with_snapshots = $volhash->{$volid}->{snapshots};
my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', [$sid], $opts->{bwlimit});
# JSONSchema and get_bandwidth_limit use kbps - storage_migrate bps
$bwlimit = $bwlimit * 1024 if defined($bwlimit);
 
-   PVE::Storage::storage_migrate($self->{storecfg}, $volid, $self->{ssh_info}, $sid, undef, undef, undef, $bwlimit, $insecure, $with_snapshots);
+   my $storage_migrate_opts = {
+   'bwlimit' => $bwlimit,
+   'insecure' => $opts->{migration_type} eq 'insecure',
+   'with_snapshots' => $volhash->{$volid}->{snapshots},
+   };
+
+   PVE::Storage::storage_migrate($self->{storecfg}, $volid, $self->{ssh_info}, $sid, $storage_migrate_opts);
 }
 
 my $conffile = PVE::LXC::Config->config_file($vmid);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
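
The new interface replaces the former run of positional undefs with a single options hash; a sketch of the call shape, with names taken from the hunk above and values illustrative:

    my $storage_migrate_opts = {
        'bwlimit'        => $bwlimit,    # bytes per second, after the *1024
        'insecure'       => $opts->{migration_type} eq 'insecure',
        'with_snapshots' => $volhash->{$volid}->{snapshots},
    };
    PVE::Storage::storage_migrate($self->{storecfg}, $volid, $self->{ssh_info}, $sid, $storage_migrate_opts);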


[pve-devel] [PATCH v5 qemu-server 06/19] Use new storage_migrate interface

2020-04-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner 
---
 PVE/QemuMigrate.pm | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 2925b90..0bcbd04 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -541,15 +541,19 @@ sub sync_disks {
next if $self->{replicated_volumes}->{$volid};
push @{$self->{volumes}}, $volid;
my $opts = $self->{opts};
-   my $insecure = $opts->{migration_type} eq 'insecure';
-   my $with_snapshots = $local_volumes->{$volid}->{snapshots};
# use 'migrate' limit for transfer to other node
my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', [$targetsid, $sid], $opts->{bwlimit});
# JSONSchema and get_bandwidth_limit use kbps - storage_migrate bps
$bwlimit = $bwlimit * 1024 if defined($bwlimit);
 
-   PVE::Storage::storage_migrate($storecfg, $volid, $self->{ssh_info}, $targetsid,
- undef, undef, undef, $bwlimit, $insecure, $with_snapshots);
+   my $storage_migrate_opts = {
+   'bwlimit' => $bwlimit,
+   'insecure' => $opts->{migration_type} eq 'insecure',
+   'with_snapshots' => $local_volumes->{$volid}->{snapshots},
+   };
+
+   PVE::Storage::storage_migrate($storecfg, $volid, $self->{ssh_info},
+ $targetsid, $storage_migrate_opts);
}
}
 };
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
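
Both call sites keep the same unit conversion before handing the limit over; a worked instance, assuming a configured value of 102400 in the unit returned by get_bandwidth_limit (per the comment in the hunk above):

    my $bwlimit = 102400;                              # from get_bandwidth_limit
    $bwlimit = $bwlimit * 1024 if defined($bwlimit);   # 104857600 for storage_migrate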


Re: [pve-devel] [PATCH v3 manager 1/3] Move wipe_disks to PVE::Diskmanage

2020-04-08 Thread Dominik Csapak

The series works, the test works, and I could not find anything obviously wrong.

Reviewed-by: Dominik Csapak 
Tested-by: Dominik Csapak 

On 3/11/20 2:05 PM, Dominic Jäger wrote:

Move wipe_disks from PVE::Ceph::Tools to PVE::Diskmanage.
Relies on the corresponding patch in pve-storage.

Signed-off-by: Dominic Jäger 
---
v2->v3: unchanged
v1->v2: Fix syntax

To test this we need an OSD that is not managed by ceph-volume:
  - Create a PVE 5 VM
  - Install Ceph Luminous
  - Create an OSD
  - Upgrade to PVE 6 but don't update Ceph to Nautilus

  PVE/API2/Ceph/OSD.pm |  4 ++--
  PVE/Ceph/Tools.pm    | 22 --
  2 files changed, 2 insertions(+), 24 deletions(-)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index a514c502..fec53fbf 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -462,7 +462,7 @@ __PACKAGE__->register_method ({
push @$cmd, '--data', $devpath;
push @$cmd, '--dmcrypt' if $param->{encrypted};
  
-		PVE::Ceph::Tools::wipe_disks($devpath);
+		PVE::Diskmanage::wipe_blockdevices([$devpath]);

		run_command($cmd);
});
@@ -551,7 +551,7 @@ __PACKAGE__->register_method ({
my $partnum = PVE::Diskmanage::get_partnum($part);
my $devpath = PVE::Diskmanage::get_blockdev($part);
  
-		PVE::Ceph::Tools::wipe_disks($part);
+		PVE::Diskmanage::wipe_blockdevices([$part]);
	print "remove partition $part (disk '${devpath}', partnum $partnum)\n";
	eval { run_command(['/sbin/sgdisk', '-d', $partnum, "${devpath}"]); };
	warn $@ if $@;
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index e6225b78..7d2e8469 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -270,28 +270,6 @@ sub get_or_create_admin_keyring {
  return $pve_ckeyring_path;
  }
  
-# wipe the first 200 MB to clear off leftovers from previous use, otherwise a
-# create OSD fails.
-sub wipe_disks {
-my (@devs) = @_;
-
-my @wipe_cmd = qw(/bin/dd if=/dev/zero bs=1M conv=fdatasync);
-
-foreach my $devpath (@devs) {
-   my $devname = basename($devpath);
-   my $dev_size = PVE::Tools::file_get_contents("/sys/class/block/$devname/size");
-
-   ($dev_size) = $dev_size =~ m|(\d+)|; # untaint $dev_size
-   die "Coulnd't get the size of the device $devname\n" if (!defined($dev_size));
-
-   my $size = ($dev_size * 512 / 1024 / 1024);
-   my $count = ($size < 200) ? $size : 200;
-
-   print "wipe disk/partition: $devpath\n";
-   eval { run_command([@wipe_cmd, "count=$count", "of=${devpath}"]) };
-   warn $@ if $@;
-}
-};

 # get ceph-volume managed osds
 sub ceph_volume_list {



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
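
For context on what moved: the removed helper zeroes at most the first 200 MB of each device before OSD creation. A sketch of its size math (per the removed code above; the sysfs size attribute counts 512-byte sectors), followed by the new call site, which takes an array reference:

    my $size  = ($dev_size * 512 / 1024 / 1024);   # device size in MB
    my $count = ($size < 200) ? $size : 200;       # dd bs=1M count=$count
    PVE::Diskmanage::wipe_blockdevices([$devpath]);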


[pve-devel] [PATCH access-control v2] auth ldap/ad: make password a parameter for the api

2020-04-08 Thread Dominik Csapak
Instead of simply requiring it to exist in /etc/pve.

Takes after the password handling of CIFS in pve-storage.

Signed-off-by: Dominik Csapak 
---
changes from v1:
* use 'realm' dir instead of 'ldap' (with fallback and a comment about its removal)

 PVE/API2/Domains.pm | 26 +++
 PVE/Auth/AD.pm      |  1 +
 PVE/Auth/LDAP.pm    | 80 -
 PVE/Auth/Plugin.pm  | 15 +
 4 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Domains.pm b/PVE/API2/Domains.pm
index 8ae1db0..b25a5fe 100644
--- a/PVE/API2/Domains.pm
+++ b/PVE/API2/Domains.pm
@@ -88,6 +88,9 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # always extract, add it with hook
+   my $password = extract_param($param, 'password');
+
PVE::Auth::Plugin::lock_domain_config(
sub {
 
@@ -117,6 +120,13 @@ __PACKAGE__->register_method ({
 
$ids->{$realm} = $config;
 
+   my $opts = $plugin->options();
+   if (defined($password) && !defined($opts->{password})) {
+   $password = undef;
+   warn "ignoring password parameter";
+   }
+   $plugin->on_add_hook($realm, $config, password => $password);
+
cfs_write_file($domainconfigfile, $cfg);
}, "add auth server failed");
 
@@ -137,6 +147,9 @@ __PACKAGE__->register_method ({
 code => sub {
my ($param) = @_;
 
+   # always extract, update in hook
+   my $password = extract_param($param, 'password');
+
PVE::Auth::Plugin::lock_domain_config(
sub {
 
@@ -154,8 +167,10 @@ __PACKAGE__->register_method ({
my $delete_str = extract_param($param, 'delete');
die "no options specified\n" if !$delete_str && !scalar(keys 
%$param);
 
+   my $delete_pw = 0;
foreach my $opt (PVE::Tools::split_list($delete_str)) {
delete $ids->{$realm}->{$opt};
+   $delete_pw = 1 if $opt eq 'password';
}
 
my $plugin = PVE::Auth::Plugin->lookup($ids->{$realm}->{type});
@@ -171,6 +186,14 @@ __PACKAGE__->register_method ({
$ids->{$realm}->{$p} = $config->{$p};
}
 
+   my $opts = $plugin->options();
+   if ($delete_pw || ($opts->{password} && defined($password))) {
+   $plugin->on_update_hook($realm, $config, password => $password);
+   } else {
+   warn "ignoring password parameter" if defined($password);
+   $plugin->on_update_hook($realm, $config);
+   }
+
cfs_write_file($domainconfigfile, $cfg);
}, "update auth server failed");
 
@@ -238,6 +261,9 @@ __PACKAGE__->register_method ({
 
die "domain '$realm' does not exist\n" if !$ids->{$realm};
 
+   my $plugin = PVE::Auth::Plugin->lookup($ids->{$realm}->{type});
+   $plugin->on_delete_hook($realm, $ids->{$realm});
+
delete $ids->{$realm};
 
cfs_write_file($domainconfigfile, $cfg);
diff --git a/PVE/Auth/AD.pm b/PVE/Auth/AD.pm
index 4f40be3..24b0e9f 100755
--- a/PVE/Auth/AD.pm
+++ b/PVE/Auth/AD.pm
@@ -83,6 +83,7 @@ sub options {
certkey => { optional => 1 },
base_dn => { optional => 1 },
bind_dn => { optional => 1 },
+   password => { optional => 1 },
user_attr => { optional => 1 },
filter => { optional => 1 },
sync_attributes => { optional => 1 },
diff --git a/PVE/Auth/LDAP.pm b/PVE/Auth/LDAP.pm
index 905cc47..315705e 100755
--- a/PVE/Auth/LDAP.pm
+++ b/PVE/Auth/LDAP.pm
@@ -37,6 +37,11 @@ sub properties {
optional => 1,
maxLength => 256,
},
+   password => {
+   description => "LDAP bind password. Will be stored in '/etc/pve/priv/realm/.pw'.",
+   type => 'string',
+   optional => 1,
+   },
verify => {
description => "Verify the server's SSL certificate",
type => 'boolean',
@@ -126,6 +131,7 @@ sub options {
server2 => { optional => 1 },
base_dn => {},
bind_dn => { optional => 1 },
+   password => { optional => 1 },
user_attr => {},
port => { optional => 1 },
secure => { optional => 1 },
@@ -185,7 +191,7 @@ sub connect_and_bind {
 
 if ($config->{bind_dn}) {
$bind_dn = $config->{bind_dn};
-   $bind_pass = PVE::Tools::file_read_firstline("/etc/pve/priv/ldap/${realm}.pw");
+   $bind_pass = ldap_get_credentials($realm);
die "missing password for realm $realm\n" if !defined($bind_pass);
 }
 
@@ -343,4 +349,76 @@ sub authenticate_user {
 return 1;
 }
 
+my $ldap_pw_dir = "/etc/pve/priv/realm";
+
+# check parameter should be removed and pw file should be moved with 7.0
+sub ldap_cred_filename {
+my ($realm, $check) = @_;
+
+
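
A sketch of the intended flow, assembled from the hunks above; the hook bodies themselves are in the part of the patch not shown here:

    # API side: the password never enters the public domains.cfg
    my $password = extract_param($param, 'password');
    $plugin->on_add_hook($realm, $config, password => $password);

    # plugin side: the hook is expected to persist it below $ldap_pw_dir
    # ('/etc/pve/priv/realm'), and connect_and_bind() reads it back
    # via ldap_get_credentials($realm)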