Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2019-11-13 Thread Dietmar Maurer
> The main reason for this is to identify backups residing on an old backup 
> store like an archive.
>  
> 
>  
> But I am open. Would you prefer having a manifest included in the archive or 
> as a separate file on the same storage?

The backup archive already contains the full VM config. I thought the manifest
should be an extra file on the same storage.
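
As an illustration only (the file name "manifest.json", the field names and the
helper usage below are assumptions, not a decided format): such a manifest could
be a small JSON file kept next to the archives on the backup storage, mapping
each backup file to the guest name it had at backup time. A rough Perl sketch:

    use strict;
    use warnings;
    use JSON;
    use PVE::Tools;

    # Hypothetical sketch - add or update one entry in a per-storage manifest.json
    sub update_backup_manifest {
        my ($dumpdir, $archive, $vmid, $name) = @_;

        my $manifest_file = "$dumpdir/manifest.json";
        my $manifest = {};
        if (-e $manifest_file) {
            $manifest = decode_json(PVE::Tools::file_get_contents($manifest_file));
        }

        # key by archive file name, remember vmid, guest name and creation time
        $manifest->{$archive} = {
            vmid => $vmid,
            name => $name,      # VM name or CT hostname at backup time
            ctime => time(),
        };

        PVE::Tools::file_set_contents($manifest_file,
            to_json($manifest, { pretty => 1, canonical => 1 }));
    }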

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2019-11-13 Thread Marco Gabriel - inett GmbH
I totally agree, but wanted to point out that while this solution is not the
prettiest, the information is exactly where it needs to be.

The main reason for this is to identify backups residing on an old backup store 
like an archive.

But I am open. Would you prefer having a manifest included in the archive or as 
a separate file on the same storage?

Marco

Get Outlook for Android



Your IT systems house
in Saarbrücken

Marco Gabriel
Managing Director

t: 0681-410993-0
e: mgabr...@inett.de


www.inett.de

Technical questions:
Support E-Mail: supp...@inett.de
Support Hotline: 0681-410993-42


inett GmbH | Mainzer Str. 183 | 66121 Saarbrücken
Management: Marco Gabriel | Amtsgericht Saarbrücken HRB 16588


From: Dietmar Maurer 
Sent: Wednesday, November 13, 2019 4:42:28 PM
To: PVE development discussion ; Marco Gabriel - 
inett GmbH 
Subject: Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added 
"includename" option

> IMHO this is the wrong way to store additional information about
> the backup.

I am thinking about adding a manifest.json file which may contain
such information 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2019-11-13 Thread Dietmar Maurer
> IMHO this is the wrong way to store additional information about
> the backup. 

I am thinking about adding a manifest.json file which may contain
such information 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2019-11-13 Thread Dietmar Maurer
IMHO this is the wrong way to store additional information about
the backup. 

> On 13 November 2019 15:02 Marco Gabriel  wrote:
> 
>  
> Signed-off-by: Marco Gabriel 
> ---
>  PVE/VZDump/Common.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
> index 4789a50..0a70b0c 100644
> --- a/PVE/VZDump/Common.pm
> +++ b/PVE/VZDump/Common.pm
> @@ -213,6 +213,12 @@ my $confdesc = {
>   type => 'string',
>   description => 'Backup all known guest systems included in the 
> specified pool.',
>   optional => 1,
> +},
> +includename => {
> + type => 'boolean',
> + description => 'Include name of VM in backup file name.',
> + optional => 1,
> + default => 0,
>  }
>  };
>  
> -- 
> 2.20.1
> 
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 0/1] vzdump: added "includename" option which adds the name or hostname of a vm or container to a backup filename

2019-11-13 Thread Marco Gabriel - inett GmbH
Thanks for clarification.

I'd update the wiki page accordingly.

Best regards,
Marco


inett GmbH
Your IT systems house in Saarbrücken

Marco Gabriel
Managing Director

t: 0681-410993-0
e: mgabr...@inett.de
www.inett.de

Technical questions:
Support E-Mail: supp...@inett.de
Support Hotline: 0681-410993-42


inett GmbH | Management: Marco Gabriel | Amtsgericht Saarbrücken HRB 16588
-Original Message-
From: Thomas Lamprecht 
Sent: Wednesday, 13 November 2019 15:12
To: PVE development discussion ; Marco Gabriel - 
inett GmbH 
Subject: Re: [pve-devel] [PATCH pve-guest-common 0/1] vzdump: added 
"includename" option which adds the name or hostname of a vm or container to a 
backup filename

On 11/13/19 3:02 PM, Marco Gabriel wrote:
> *** BLURB HERE ***
> 
> Marco Gabriel (1):
>   vzdump: added "includename" option
> 
>  PVE/VZDump/Common.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 

FYI, some pointers for sending patches with git/mail

It's nice if related patches are sent together, even if they are from different 
repositories. This can be done by first using `git format-patch` and then doing 
a single git send-email over all generated patches. For example, let's go to a 
few repos and format the most recent commit as a patch to /tmp/patchq, then send 
it:

# cd pve-manager; git format-patch -s -o /tmp/patchq -1
# cd ../pve-guest-common; git format-patch -s -o /tmp/patchq -1
# cd ../pve-docs; git format-patch -s -o /tmp/patchq -1
# git send-email --compose --to=pve-devel@pve.proxmox.com /tmp/patchq/*

Using "start-number" and the like can improve this further, but it's a good 
start.

Another thing is: if you do not fill out the cover letter (see your BLURB HERE) 
anyway, it's better to omit it completely, which can be totally fine for small 
series or a single specific feature where the commit message(s) are enough, and 
no common overview or rationale is required to understand the goal and method 
used in a series.

Just a few pointers, valid for anybody sending patches to developer mailing lists.

cheers,
Thomas

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-guest-common 0/1] vzdump: added "includename" option which adds the name or hostname of a vm or container to a backup filename

2019-11-13 Thread Thomas Lamprecht
On 11/13/19 3:02 PM, Marco Gabriel wrote:
> *** BLURB HERE ***
> 
> Marco Gabriel (1):
>   vzdump: added "includename" option
> 
>  PVE/VZDump/Common.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 

FYI, some pointers for sending patches with git/mail

It's nice if related patches are sent together, even if they are from
different repositories. This can be done by first using
`git format-patch` and then doing a single git send-email over all generated
patches. For example, let's go to a few repos and format the most recent
commit as a patch to /tmp/patchq, then send it:

# cd pve-manager; git format-patch -s -o /tmp/patchq -1
# cd ../pve-guest-common; git format-patch -s -o /tmp/patchq -1
# cd ../pve-docs; git format-patch -s -o /tmp/patchq -1
# git send-email --compose --to=pve-devel@pve.proxmox.com /tmp/patchq/*

Using "start-number" and the like can improve this further, but it's a good
start.

Another thing is: if you do not fill out the cover letter (see your
BLURB HERE) anyway, it's better to omit it completely, which can be totally
fine for small series or a single specific feature where the commit message(s)
are enough, and no common overview or rationale is required to understand the
goal and method used in a series.

Just a few pointers, valid for anybody sending patches to developer mailing
lists.

cheers,
Thomas

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-guest-common 0/1] vzdump: added "includename" option which adds the name or hostname of a vm or container to a backup filename

2019-11-13 Thread Marco Gabriel
*** BLURB HERE ***

Marco Gabriel (1):
  vzdump: added "includename" option

 PVE/VZDump/Common.pm | 6 ++++++
 1 file changed, 6 insertions(+)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-guest-common 1/1] vzdump: added "includename" option

2019-11-13 Thread Marco Gabriel
Signed-off-by: Marco Gabriel 
---
 PVE/VZDump/Common.pm | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..0a70b0c 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -213,6 +213,12 @@ my $confdesc = {
type => 'string',
description => 'Backup all known guest systems included in the specified pool.',
optional => 1,
+},
+includename => {
+   type => 'boolean',
+   description => 'Include name of VM in backup file name.',
+   optional => 1,
+   default => 0,
 }
 };
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-docs 1/1] vzdump: docs: added "includename" option in vzdump docs

2019-11-13 Thread Marco Gabriel
Signed-off-by: Marco Gabriel 
---
 vzdump.adoc | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/vzdump.adoc b/vzdump.adoc
index fd7f13f..624bf3b 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -147,6 +147,14 @@ That way it is possible to store several backup in the same
 directory. The parameter `maxfiles` can be used to specify the
 maximum number of backups to keep.
 
+To include the name of a qemu vm or the hostname of a container,
+check the `Include VM name` option. Backup filenames will then
+look like
+
+ vzdump-lxc-105-2009_10_09-11_04_43..tar
+ vzdump-qemu-101-2019_11_12-14_56_19.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-docs 0/1] vzdump: added "includename" option which adds the name or hostname of a vm or container to a backup filename

2019-11-13 Thread Marco Gabriel
*** BLURB HERE ***

Marco Gabriel (1):
  vzdump: docs: added "includename" option in vzdump docs

 vzdump.adoc | 8 ++++++++
 1 file changed, 8 insertions(+)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-manager 0/1] vzdump: added "includename" option which adds the name or hostname of a vm or container to a backup filename

2019-11-13 Thread Marco Gabriel
*** BLURB HERE ***

Marco Gabriel (1):
  vzdump: added "includename" option

 PVE/VZDump/Common.pm | 6 ++++++
 1 file changed, 6 insertions(+)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-manager 1/1] vzdump: added "includename" option

2019-11-13 Thread Marco Gabriel
Signed-off-by: Marco Gabriel 
---
 PVE/VZDump/Common.pm | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/PVE/VZDump/Common.pm b/PVE/VZDump/Common.pm
index 4789a50..0a70b0c 100644
--- a/PVE/VZDump/Common.pm
+++ b/PVE/VZDump/Common.pm
@@ -213,6 +213,12 @@ my $confdesc = {
type => 'string',
description => 'Backup all known guest systems included in the specified pool.',
optional => 1,
+},
+includename => {
+   type => 'boolean',
+   description => 'Include name of VM in backup file name.',
+   optional => 1,
+   default => 0,
 }
 };
 
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-manager 1/1] vzdump: added "includename" option which adds the name or hostname of a vm or container to a backup filename

2019-11-13 Thread Marco Gabriel
Signed-off-by: Marco Gabriel 
---
 PVE/API2/Backup.pm        |  6 ++++++
 PVE/VZDump.pm             | 19 ++++++++++++++++++-
 www/manager6/dc/Backup.js | 10 +++++++++-
 3 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Backup.pm b/PVE/API2/Backup.pm
index 86377c0a..316d6800 100644
--- a/PVE/API2/Backup.pm
+++ b/PVE/API2/Backup.pm
@@ -100,6 +100,12 @@ __PACKAGE__->register_method({
description => "Enable or disable the job.",
default => '1',
},
+   includename => {
+   type => 'boolean',
+   optional => 1,
+   description => "Add VM name to backup file",
+   default => '0',
+   },
}),
 },
 returns => { type => 'null' },
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index a829a52e..f6d7bc61 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -620,10 +620,27 @@ sub exec_backup_task {
 
my $vmtype = $plugin->type();
 
+   my $conf;
+   my $name;
+
+   if ($opts->{includename}) {
+   if ($vmtype eq 'lxc') {
+   $conf = PVE::LXC::Config->load_config($vmid);
+   $name = "-" . $conf->{hostname};
+   } elsif ($vmtype eq 'qemu') {
+   $conf = PVE::QemuConfig->load_config($vmid);
+   $name = "-" . $conf->{name};
+   } else {
+   die "Unknown VM Type: '$vmtype'\n";
+   }
+   } else {
+   $name = "";
+   }
+
my $tmplog = "$logdir/$vmtype-$vmid.log";
 
my $bkname = "vzdump-$vmtype-$vmid";
-   my $basename = $bkname . strftime("-%Y_%m_%d-%H_%M_%S", localtime());
+   my $basename = $bkname . strftime("-%Y_%m_%d-%H_%M_%S", localtime()) . $name;
 
my $maxfiles = $opts->{maxfiles};
 
diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
index 28275f01..53dbc7a4 100644
--- a/www/manager6/dc/Backup.js
+++ b/www/manager6/dc/Backup.js
@@ -186,7 +186,15 @@ Ext.define('PVE.dc.BackupEdit', {
allowBlank: false
},
selModeField,
-   selPool
+   selPool,
+   {
+   xtype: 'proxmoxcheckbox',
+   fieldLabel: gettext('Include VM Name'),
+   name: 'includename',
+   uncheckedValue: 0,
+   defaultValue: 0,
+   checked: false
+   }
];
 
var column2 = [
-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Feature: add vm name or ct hostname to backup file name as requested in #438

2019-11-13 Thread Thomas Lamprecht
Hi,

On 11/13/19 2:44 PM, Marco Gabriel - inett GmbH wrote:
> Hi,
> 
> A long time ago, I requested adding the container hostname or vm name in 
> bugzilla request #438. As I wanted to practice developing features for pve 
> and how to use the git repos, I implemented the feature. I should have asked 
> before, but I didn't. So I am not upset if the feature won't make it into 
> upstream. I'm going to try sending them by mail in additional patch mails.

More contributors sound great! :)
I'd not oppose an implementation of the feature, but the how could be a bit
debatable. So if you already have ideas or even a patch you can just send
them; maybe the approach already works well enough, and if not we have a
starting point to work with.

> 
> I am not sure if I signed a CLA already. If not, where should I send it to?

If you did not send it then off...@proxmox.com is the one to send it to.

cheers,
Thomas

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-manager 0/1] added vm name or hostname to vzdump backup filename, Close #438

2019-11-13 Thread Marco Gabriel
*** BLURB HERE ***

Marco Gabriel (1):
  vzdump: added "includename" option which adds the name or hostname of
a vm or container to a backup filename. Can be used optionally,
defaults to old behaviour without host/vmname. If enabled,
host/vmname will be added after timestamp and before file extension
vzdump command line tool knows --includename with a default of 0.
datacenter -> backup has a checkbox for enabling the feature, also
defaults to not checked (old behaviour)
pve-docs also updated to document the new feature

 PVE/API2/Backup.pm        |  6 ++++++
 PVE/VZDump.pm             | 19 ++++++++++++++++++-
 www/manager6/dc/Backup.js | 10 +++++++++-
 3 files changed, 33 insertions(+), 2 deletions(-)

-- 
2.20.1

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] Feature: add vm name or ct hostname to backup file name as requested in #438

2019-11-13 Thread Dominik Csapak

On 11/13/19 2:44 PM, Marco Gabriel - inett GmbH wrote:
> Hi,

Hi,

> A long time ago, I requested adding the container hostname or vm name in bugzilla 
> request #438. As I wanted to practice developing features for pve and how to 
> use the git repos, I implemented the feature. I should have asked before, but I 
> didn't. So I am not upset if the feature won't make it into upstream. I'm 
> going to try sending them by mail in additional patch mails.

Great, no promises for this, probably depends mostly on the patches 
themselves, and how compatible they are...

> I am not sure if I signed a CLA already. If not, where should I send it to?

off...@proxmox.com, see [0]

> Best regards,
> Marco

kind regards
Dominik

0: https://pve.proxmox.com/wiki/Developer_Documentation

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 ct 0/9] mount hotplugging & new mount api

2019-11-13 Thread Oguz Bektas


a few issues; we talked with Wolfgang off-list so he'll
send a v3.

* target directories are not created when hotplugging
* apparmor profile needs to be adapted
* missing case for hotplugging unused disks, they are currently
registered as a change and not a new mount, the code doesn't handle that
* nitpicks on function names :P

On Wed, Nov 13, 2019 at 10:33:10AM +0100, Wolfgang Bumiller wrote:
> Changes:
> Add a helper to LXC::PVE::Tools to check for availability of the new
> mount api (new patch 1), and use that in the prestart hook and mount
> functions.
> Add a check to the mount hotplug code to not attempt to perform
> hotplugging on older kernels.
> 
> Wolfgang Bumiller (9):
>   tools: add can_use_new_mount_api helper
>   implement "staged mountpoints"
>   add open_pid_fd, open_lxc_pid, open_ppid helpers
>   split open_namespace out of enter_namespace
>   add get_container_namespace helper
>   add mount stage directory helpers
>   prestart-hook: use staged mountpoints on newer kernels
>   config: vmconfig_apply_pending_mountpoint helper
>   implement mountpoint hotplugging
> 
>  src/PVE/LXC.pm| 180 --
>  src/PVE/LXC/Config.pm |  91 +--
>  src/PVE/LXC/Tools.pm  |  18 
>  src/lxc-pve-prestart-hook |  74 +---
>  4 files changed, 318 insertions(+), 45 deletions(-)
> 
> -- 
> 2.20.1
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 container 1/9] tools: add can_use_new_mount_api helper

2019-11-13 Thread Thomas Lamprecht
On 11/13/19 1:30 PM, Oguz Bektas wrote:
> hi,
> 
> On Wed, Nov 13, 2019 at 10:33:11AM +0100, Wolfgang Bumiller wrote:
>> Signed-off-by: Wolfgang Bumiller 
>> ---
>> New patch
>>
>>  src/PVE/LXC/Tools.pm | 18 ++
>>  1 file changed, 18 insertions(+)
>>
>> diff --git a/src/PVE/LXC/Tools.pm b/src/PVE/LXC/Tools.pm
>> index bebd7d8..0256b6a 100644
>> --- a/src/PVE/LXC/Tools.pm
>> +++ b/src/PVE/LXC/Tools.pm
>> @@ -2,6 +2,8 @@
>>  
>>  package PVE::LXC::Tools;
>>  
>> +use Errno qw(ENOSYS);
>> +
>>  use PVE::SafeSyslog;
>>  
>>  # LXC introduced an `lxc.hook.version` property which allows hooks to be 
>> executed in different
>> @@ -130,4 +132,20 @@ sub cgroup_do_write($$) {
>>  return 1;
>>  }
>>  
>> +# Check whether the kernel supports the new mount api. This is used in the 
>> pre-start hook and in
>> +# the hotplugging code.
>> +my $cached_can_use_new_mount_api = undef;
>> +sub can_use_new_mount_api() {
> 
> i don't like these names.. isn't can_use_new_mount_api() a bit too
> vague? "new" is also relative, it won't be "new" in a few releases
> 
> maybe something like can_hotplug_mountpoint() would be better,
> describing what it checks in a more verbose way?

can_hotplug is not really better, as it gets the direction wrong.
Just because we need fsopen for the way Wolfgang implements CT mountpoint
hotplug, it does not mean that fsopen can only be used for that in the
future. You tend to name checks after the thing you want to do if they
succeed, but most of the time they should be named for what they really
check. If this were a combined check to see if hotplug is possible, that name
could be OK, but this isn't such a method.

something in the form of "can_fsopen_syscall" would work better, IMO.

Also, we may want to have a "can do syscall X" check in common?
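
To illustrate that idea (only a sketch - the helper name and calling convention
are made up, this is not existing PVE::Tools code): a generic probe could invoke
the syscall with arguments that are expected to fail either way and only look at
whether errno is ENOSYS, caching the result per syscall number.

    use strict;
    use warnings;
    use Errno qw(ENOSYS);

    my $syscall_supported = {};    # cache: syscall number => 0/1

    # Probe whether the running kernel implements a given syscall. The probe
    # arguments must be chosen so the call fails harmlessly even if it exists.
    sub can_use_syscall {
        my ($syscall_nr, @probe_args) = @_;

        if (!defined($syscall_supported->{$syscall_nr})) {
            syscall($syscall_nr, @probe_args);    # ignore result, only check $!
            $syscall_supported->{$syscall_nr} = ($! == ENOSYS) ? 0 : 1;
        }
        return $syscall_supported->{$syscall_nr};
    }

    # Usage sketch, with $fsopen_nr being the architecture's syscall number for
    # fsopen(): my $has_fsopen = can_use_syscall($fsopen_nr, 0, 0);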

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] Feature: add vm name or ct hostname to backup file name as requested in #438

2019-11-13 Thread Marco Gabriel - inett GmbH
Hi,

A long time ago, I requested adding the container hostname or vm name in bugzilla 
request #438. As I wanted to practice developing features for pve and how to 
use the git repos, I implemented the feature. I should have asked before, but I 
didn't. So I am not upset if the feature won't make it into upstream. I'm 
going to try sending them by mail in additional patch mails.

I am not sure if I signed a CLA already. If not, where should I send it to?

Best regards,
Marco



Your IT systems house
in Saarbrücken

Marco Gabriel
Managing Director

t: 0681-410993-0
e: mgabr...@inett.de


www.inett.de

Technical questions:
Support E-Mail: supp...@inett.de
Support Hotline: 0681-410993-42


inett GmbH | Mainzer Str. 183 | 66121 Saarbrücken
Management: Marco Gabriel | Amtsgericht Saarbrücken HRB 16588
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH v2 container 1/9] tools: add can_use_new_mount_api helper

2019-11-13 Thread Oguz Bektas
hi,

On Wed, Nov 13, 2019 at 10:33:11AM +0100, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller 
> ---
> New patch
> 
>  src/PVE/LXC/Tools.pm | 18 ++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/src/PVE/LXC/Tools.pm b/src/PVE/LXC/Tools.pm
> index bebd7d8..0256b6a 100644
> --- a/src/PVE/LXC/Tools.pm
> +++ b/src/PVE/LXC/Tools.pm
> @@ -2,6 +2,8 @@
>  
>  package PVE::LXC::Tools;
>  
> +use Errno qw(ENOSYS);
> +
>  use PVE::SafeSyslog;
>  
>  # LXC introduced an `lxc.hook.version` property which allows hooks to be 
> executed in different
> @@ -130,4 +132,20 @@ sub cgroup_do_write($$) {
>  return 1;
>  }
>  
> +# Check whether the kernel supports the new mount api. This is used in the 
> pre-start hook and in
> +# the hotplugging code.
> +my $cached_can_use_new_mount_api = undef;
> +sub can_use_new_mount_api() {

i don't like these names.. isn't can_use_new_mount_api() a bit too
vague? "new" is also relative, it won't be "new" in a few releases

maybe something like can_hotplug_mountpoint() would be better,
describing what it checks in a more verbose way?

> +if (!defined($cached_can_use_new_mount_api)) {
> + if (PVE::Tools::fsopen(0, 0)) {
> + # This should not be possible...
> + die "kernel behaved unexpectedly: fsopen(NULL, 0) did not fail!\n";
> + }
> + # On older kernels the syscall doesn't exist and we get ENOSYS. (For 
> newer kernels this call
> + # will fail with EFAULT instead, since we pass in a NULL pointer as 
> file system name.)
> + $cached_can_use_new_mount_api = ($! != ENOSYS);
> +}
> +return $cached_can_use_new_mount_api;
> +}
> +
>  1;
> -- 
> 2.20.1
> 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-13 Thread Fabian Ebner

On 11/13/19 9:55 AM, Thomas Lamprecht wrote:

On 11/12/19 11:03 AM, Fabian Ebner wrote:

The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a
value of 0 would trigger a hard stop for HA managed VMs. Like this the API
description stays valid for all cases.

Signed-off-by: Fabian Ebner 
---

In vm_shutdown we'd like to pass along the timeout parameter to the HA stack,
but the value 0 triggers a special behavior there, namely a hard stop.
So either the description needs to be changed to mention this behavior or
we just don't allow a timeout of 0 in vm_shutdown. Since a shutdown with
timeout 0 doesn't really make sense anyways, I opted for the latter.
It's the same situation for containers.



timeout == 0 just means instant stop, I did not check this, but as
both CTs and VMs allow to pass 0 it really smells like it was done by
design.. ^^

Also, limiting API value ranges could be seen as API breakage, and that
should be avoided if possible..

What was the behaviour of passing this for a non-HA VM/CT?



If I interpreted the code correctly:

* For 'qm shutdown --timeout=0':
** For non-HA managed VMs it leads to (depending on whether guest agent 
is enabled) a qmp command 'guest_shutdown' or 'system_powerdown'.
Then (since timeout is 0 this happens immediately) depending on --force 
we either kill with SIGTERM or die with "got timeout".
** For HA managed VMs 'qm shutdown --timeout=0' turns into a vm_stop 
issuing a qmp command 'quit' and we actually wait 60 seconds for that 
command.



* For 'pct shutdown --timeout=0':
** For non-HA managed containers we always give an extra 5 seconds of 
timeout, so 5 seconds here.

** For HA managed containers it will result in an immediate kill.

I mean it probably doesn't really matter, there was a discrepancy 
between what happens for HA managed and non-HA managed VM/CT already.

I can send a v2 without the new minima if you like.


  PVE/API2/Qemu.pm | 7 ---
  1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c31dd1d..16a7a04 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2111,7 +2111,7 @@ __PACKAGE__->register_method({
  
  		print "Requesting HA stop for VM $vmid\n";
  
-		my $cmd = ['ha-manager', 'set',  "vm:$vmid", '--state', 'stopped'];

+   my $cmd = ['ha-manager', 'crm-command', 'stop',  "vm:$vmid", '0'];
PVE::Tools::run_command($cmd);
return;
};
@@ -2204,7 +2204,7 @@ __PACKAGE__->register_method({
timeout => {
description => "Wait maximal timeout seconds.",
type => 'integer',
-   minimum => 0,
+   minimum => 1,
optional => 1,
},
forceStop => {
@@ -2267,12 +2267,13 @@ __PACKAGE__->register_method({
  
  	if (PVE::HA::Config::vm_is_ha_managed($vmid) && $rpcenv->{type} ne 'ha') {
  
+	my $timeout = $param->{timeout} // 60;

my $hacmd = sub {
my $upid = shift;
  
  		print "Requesting HA stop for VM $vmid\n";
  
-		my $cmd = ['ha-manager', 'set', "vm:$vmid", '--state', 'stopped'];

+   my $cmd = ['ha-manager', 'crm-command', 'stop', "vm:$vmid", "$timeout"];
PVE::Tools::run_command($cmd);
return;
};





___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH container 1/6] add reboot helpers to be used by containers

2019-11-13 Thread Thomas Lamprecht
On 11/12/19 5:26 PM, Oguz Bektas wrote:
> there _is_ actually reboot triggers for lxc which are used by the
> prestart hook and similar, however i think it's better if we can
> differentiate between reboot requests from inside the container and from
> API. (inadvertently this also allows to reboot container from inside without
> applying pending changes)


why would I want the above? Pending changes wait to be applied
as soon as possible!

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 ct 0/9] mount hotplugging & new mount api

2019-11-13 Thread Wolfgang Bumiller
Changes:
Add a helper to LXC::PVE::Tools to check for availability of the new
mount api (new patch 1), and use that in the prestart hook and mount
functions.
Add a check to the mount hotplug code to not attempt to perform
hotplugging on older kernels.

Wolfgang Bumiller (9):
  tools: add can_use_new_mount_api helper
  implement "staged mountpoints"
  add open_pid_fd, open_lxc_pid, open_ppid helpers
  split open_namespace out of enter_namespace
  add get_container_namespace helper
  add mount stage directory helpers
  prestart-hook: use staged mountpoints on newer kernels
  config: vmconfig_apply_pending_mountpoint helper
  implement mountpoint hotplugging

 src/PVE/LXC.pm| 180 --
 src/PVE/LXC/Config.pm |  91 +--
 src/PVE/LXC/Tools.pm  |  18 
 src/lxc-pve-prestart-hook |  74 +---
 4 files changed, 318 insertions(+), 45 deletions(-)

-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 1/9] tools: add can_use_new_mount_api helper

2019-11-13 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
New patch

 src/PVE/LXC/Tools.pm | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/src/PVE/LXC/Tools.pm b/src/PVE/LXC/Tools.pm
index bebd7d8..0256b6a 100644
--- a/src/PVE/LXC/Tools.pm
+++ b/src/PVE/LXC/Tools.pm
@@ -2,6 +2,8 @@
 
 package PVE::LXC::Tools;
 
+use Errno qw(ENOSYS);
+
 use PVE::SafeSyslog;
 
 # LXC introduced an `lxc.hook.version` property which allows hooks to be 
executed in different
@@ -130,4 +132,20 @@ sub cgroup_do_write($$) {
 return 1;
 }
 
+# Check whether the kernel supports the new mount api. This is used in the pre-start hook and in
+# the hotplugging code.
+my $cached_can_use_new_mount_api = undef;
+sub can_use_new_mount_api() {
+if (!defined($cached_can_use_new_mount_api)) {
+   if (PVE::Tools::fsopen(0, 0)) {
+   # This should not be possible...
+   die "kernel behaved unexpectedly: fsopen(NULL, 0) did not fail!\n";
+   }
+   # On older kernels the syscall doesn't exist and we get ENOSYS. (For newer kernels this call
+   # will fail with EFAULT instead, since we pass in a NULL pointer as file system name.)
+   $cached_can_use_new_mount_api = ($! != ENOSYS);
+}
+return $cached_can_use_new_mount_api;
+}
+
 1;
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 8/9] config: vmconfig_apply_pending_mountpoint helper

2019-11-13 Thread Wolfgang Bumiller
for reuse in hotplug code

Signed-off-by: Wolfgang Bumiller 
---
No changes.

 src/PVE/LXC/Config.pm | 65 ++-
 1 file changed, 39 insertions(+), 26 deletions(-)

diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 39de691..44d7f93 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1242,15 +1242,6 @@ sub vmconfig_apply_pending {
warn $err_msg;
 };
 
-my $rescan_volume = sub {
-   my ($mp) = @_;
-   eval {
-   $mp->{size} = PVE::Storage::volume_size_info($storecfg, 
$mp->{volume}, 5)
-   if !defined($mp->{size});
-   };
-   warn "Could not rescan volume size - $@\n" if $@;
-};
-
 my $pending_delete_hash = 
$class->parse_pending_delete($conf->{pending}->{delete});
 # FIXME: $force deletion is not implemented for CTs
 foreach my $opt (sort keys %$pending_delete_hash) {
@@ -1281,23 +1272,7 @@ sub vmconfig_apply_pending {
next if $selection && !$selection->{$opt};
eval {
if ($opt =~ m/^mp(\d+)$/) {
-   my $mp = $class->parse_ct_mountpoint($conf->{pending}->{$opt});
-   my $old = $conf->{$opt};
-   if ($mp->{type} eq 'volume') {
-   if ($mp->{volume} =~ $PVE::LXC::NEW_DISK_RE) {
-   PVE::LXC::create_disks($storecfg, $vmid, { $opt => 
$conf->{pending}->{$opt} }, $conf, 1);
-   } else {
-   $rescan_volume->($mp);
-   $conf->{pending}->{$opt} = 
$class->print_ct_mountpoint($mp);
-   }
-   }
-   if (defined($old)) {
-   my $mp = $class->parse_ct_mountpoint($old);
-   if ($mp->{type} eq 'volume') {
-   $class->add_unused_volume($conf, $mp->{volume})
-   if !$class->is_volume_in_use($conf, $conf->{$opt}, 
1, 1);
-   }
-   }
+   $class->vmconfig_apply_pending_mountpoint($vmid, $conf, $opt, 
$storecfg, 0);
} elsif ($opt =~ m/^net(\d+)$/) {
my $netid = $1;
my $net = $class->parse_lxc_network($conf->{pending}->{$opt});
@@ -1315,6 +1290,44 @@ sub vmconfig_apply_pending {
 $class->write_config($vmid, $conf);
 }
 
+my $rescan_volume = sub {
+my ($storecfg, $mp) = @_;
+eval {
+   $mp->{size} = PVE::Storage::volume_size_info($storecfg, $mp->{volume}, 
5)
+   if !defined($mp->{size});
+};
+warn "Could not rescan volume size - $@\n" if $@;
+};
+
+sub vmconfig_apply_pending_mountpoint {
+my ($class, $vmid, $conf, $opt, $storecfg, $running) = @_;
+
+my $mp = $class->parse_ct_mountpoint($conf->{pending}->{$opt});
+my $old = $conf->{$opt};
+if ($mp->{type} eq 'volume') {
+   if ($mp->{volume} =~ $PVE::LXC::NEW_DISK_RE) {
+   my $vollist = PVE::LXC::create_disks(
+   $storecfg,
+   $vmid,
+   { $opt => $conf->{pending}->{$opt} },
+   $conf,
+   1,
+   );
+   } else {
+   $rescan_volume->($storecfg, $mp);
+   $conf->{pending}->{$opt} = $class->print_ct_mountpoint($mp);
+   }
+}
+
+if (defined($old)) {
+   my $mp = $class->parse_ct_mountpoint($old);
+   if ($mp->{type} eq 'volume') {
+   $class->add_unused_volume($conf, $mp->{volume})
+   if !$class->is_volume_in_use($conf, $conf->{$opt}, 1, 1);
+   }
+}
+}
+
 sub classify_mountpoint {
 my ($class, $vol) = @_;
 if ($vol =~ m!^/!) {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 9/9] implement mountpoint hotplugging

2019-11-13 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
Changes to v1:
  Use the new can_use_new_mount_api() to prevent mp hotplug attempts on
  older kernels.

 src/PVE/LXC.pm| 44 +++
 src/PVE/LXC/Config.pm | 28 ++-
 2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 77b1a43..c022355 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1625,6 +1625,50 @@ sub __mountpoint_mount {
 die "unsupported storage";
 }
 
+sub mountpoint_hotplug($$$) {
+my ($vmid, $conf, $opt, $mp, $storage_cfg) = @_;
+
+my (undef, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
+
+# We do the rest in a fork with an unshared mount namespace, since we're now going to 'stage'
+# the mountpoint, then grab it, then move into the container's namespace, then mount it.
+
+PVE::Tools::run_fork(sub {
+   # Pin the container pid longer, we also need to get its monitor/parent:
+   my ($ct_pid, $ct_pidfd) = open_lxc_pid($vmid)
+   or die "failed to open pidfd of container $vmid\'s init process\n";
+
+   my ($monitor_pid, $monitor_pidfd) = open_ppid($ct_pid)
+   or die "failed to open pidfd of container $vmid\'s monitor process\n";
+
+   my $ct_mnt_ns = $get_container_namespace->($vmid, $ct_pid, 'mnt');
+   my $monitor_mnt_ns = $get_container_namespace->($vmid, $monitor_pid, 'mnt');
+
+   # Change into the monitor's mount namespace. We "pin" the mount into the monitor's
+   # namespace for it to remain active there since the container will be able to unmount
+   # hotplugged mount points and thereby potentially free up loop devices, which is a security
+   # concern.
+   PVE::Tools::setns(fileno($monitor_mnt_ns), PVE::Tools::CLONE_NEWNS);
+   chdir('/')
+   or die "failed to change root directory within the monitor's mount namespace: $!\n";
+
+   my $dir = get_staging_mount_path($opt);
+   my $mount_fd = mountpoint_stage($mp, $dir, $storage_cfg, undef, $rootuid, $rootgid);
+
+   PVE::Tools::setns(fileno($ct_mnt_ns), PVE::Tools::CLONE_NEWNS);
+   chdir('/')
+   or die "failed to change root directory within the container's mount namespace: $!\n";
+
+   PVE::Tools::move_mount(
+   fileno($mount_fd),
+   "",
+   &AT_FDCWD,
+   $mp->{mp},
+   &MOVE_MOUNT_F_EMPTY_PATH,
+   );
+});
+}
+
 # Create a directory in the mountpoint staging tempfs.
 sub get_staging_mount_path($) {
 my ($opt) = @_;
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index 44d7f93..ecf60ef 100644
--- a/src/PVE/LXC/Config.pm
+++ b/src/PVE/LXC/Config.pm
@@ -1217,6 +1217,14 @@ sub vmconfig_hotplug_pending {
if (!$hotplug_memory_done) { # don't call twice if both opts 
are passed
$hotplug_memory->($conf->{pending}->{memory}, 
$conf->{pending}->{swap});
}
+   } elsif ($opt =~ m/^mp(\d+)$/) {
+   if (!PVE::LXC::Tools::can_use_new_mount_api()) {
+   die "skip\n";
+   }
+
+   $class->vmconfig_apply_pending_mountpoint($vmid, $conf, $opt, 
$storecfg, 1);
+   # vmconfig_apply_pending_mountpoint modifies the value if it 
creates a new disk
+   $value = $conf->{pending}->{$opt};
} else {
die "skip\n"; # skip non-hotpluggable
}
@@ -1306,14 +1314,32 @@ sub vmconfig_apply_pending_mountpoint {
 my $old = $conf->{$opt};
 if ($mp->{type} eq 'volume') {
if ($mp->{volume} =~ $PVE::LXC::NEW_DISK_RE) {
+   my $original_value = $conf->{pending}->{$opt};
my $vollist = PVE::LXC::create_disks(
$storecfg,
$vmid,
-   { $opt => $conf->{pending}->{$opt} },
+   { $opt => $original_value },
$conf,
1,
);
+   if ($running) {
+   # Re-parse mount point:
+   my $mp = $class->parse_ct_mountpoint($conf->{pending}->{$opt});
+   eval {
+   PVE::LXC::mountpoint_hotplug($vmid, $conf, $opt, $mp, 
$storecfg);
+   };
+   my $err = $@;
+   if ($err) {
+   PVE::LXC::destroy_disks($storecfg, $vollist);
+   # The pending-changes code collects errors but keeps on 
looping through further
+   # pending changes, so unroll the change in $conf as well if 
destroy_disks()
+   # didn't die().
+   $conf->{pending}->{$opt} = $original_value;
+   die $err;
+   }
+   }
} else {
+   die "skip\n" if $running; # TODO: "changing" mount points
$rescan_volume->($storecfg, $mp);
$conf->{pending}->{$opt} = $class->print_ct_mountpoint($mp);
}
-- 
2.20.1



[pve-devel] [PATCH v2 container 5/9] add get_container_namespace helper

2019-11-13 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
No changes.

 src/PVE/LXC.pm | 13 +
 1 file changed, 13 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index c07a597..6bea0b7 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1048,6 +1048,19 @@ my $enter_namespace = sub {
 close $fd;
 };
 
+my $get_container_namespace = sub {
+my ($vmid, $pid, $kind) = @_;
+
+my $pidfd;
+if (!defined($pid)) {
+   # Pin the pid while we're grabbing its stuff from /proc
+   ($pid, $pidfd) = open_lxc_pid($vmid)
+   or die "failed to open pidfd of container $vmid\'s init process\n";
+}
+
+return $open_namespace->($vmid, $pid, $kind);
+};
+
 my $do_syncfs = sub {
 my ($vmid, $pid, $socket) = @_;
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 7/9] prestart-hook: use staged mountpoints on newer kernels

2019-11-13 Thread Wolfgang Bumiller
This way we operate on defined paths in the monitor
namespace (/run/pve/mountpoint/{rootfs,mp0,mp1,...}) while
performing the mount, and can use `move_mount()` without
passing the MOVE_MOUNT_T_SYMLINKS flag when putting the
hierarchy in place.

Signed-off-by: Wolfgang Bumiller 
---
Changes to v1:
  Use the new can_use_new_mount_api() helper.

 src/lxc-pve-prestart-hook | 74 +--
 1 file changed, 63 insertions(+), 11 deletions(-)

diff --git a/src/lxc-pve-prestart-hook b/src/lxc-pve-prestart-hook
index c0965ab..d1c0414 100755
--- a/src/lxc-pve-prestart-hook
+++ b/src/lxc-pve-prestart-hook
@@ -5,9 +5,9 @@ package lxc_pve_prestart_hook;
 use strict;
 use warnings;
 
-use POSIX;
+use Fcntl qw(O_DIRECTORY :mode);
 use File::Path;
-use Fcntl ':mode';
+use POSIX;
 
 use PVE::Cluster;
 use PVE::LXC::Config;
@@ -15,7 +15,8 @@ use PVE::LXC::Setup;
 use PVE::LXC::Tools;
 use PVE::LXC;
 use PVE::Storage;
-use PVE::Tools;
+use PVE::Syscall qw(:fsmount);
+use PVE::Tools qw(AT_FDCWD O_PATH);
 
 PVE::LXC::Tools::lxc_hook('pre-start', 'lxc', sub {
 my ($vmid, $vars, undef, undef) = @_;
@@ -51,19 +52,70 @@ PVE::LXC::Tools::lxc_hook('pre-start', 'lxc', sub {
 
 my (undef, $rootuid, $rootgid) = PVE::LXC::parse_id_maps($conf);
 
-my $setup_mountpoint = sub {
-   my ($ms, $mountpoint) = @_;
-
-   #return if $ms eq 'rootfs';
-   my (undef, undef, $dev) = PVE::LXC::mountpoint_mount($mountpoint, 
$rootdir, $storage_cfg, undef, $rootuid, $rootgid);
-   push @$devices, $dev if $dev && $mountpoint->{quota};
-};
-
 # Unmount first when the user mounted the container with "pct mount".
 eval {
PVE::Tools::run_command(['umount', '--recursive', $rootdir], outfunc => 
sub {}, errfunc => sub {});
 };
 
+my $setup_mountpoint;
+if (!PVE::LXC::Tools::can_use_new_mount_api()) {
+   # Legacy mode for old kernels:
+   $setup_mountpoint = sub {
+   my ($opt, $mountpoint) = @_;
+
+   my (undef, undef, $dev) = PVE::LXC::mountpoint_mount(
+   $mountpoint,
+   $rootdir,
+   $storage_cfg,
+   undef,
+   $rootuid,
+   $rootgid,
+   );
+   push @$devices, $dev if $dev && $mountpoint->{quota};
+   };
+} else {
+   # With newer kernels we stage mount points and then use move_mount().
+   my $rootdir_fd = undef;
+   $setup_mountpoint = sub {
+   my ($opt, $mountpoint) = @_;
+
+   my $dir = PVE::LXC::get_staging_mount_path($opt);
+   my (undef, undef, $dev, $mount_fd) = PVE::LXC::mountpoint_stage(
+   $mountpoint,
+   $dir,
+   $storage_cfg,
+   undef,
+   $rootuid,
+   $rootgid,
+   );
+
+   my ($dfd, $ddir);
+   if ($rootdir_fd) {
+   $dfd = fileno($rootdir_fd);
+   $ddir = './' . $mountpoint->{mp};
+   } else {
+   # Assert that 'rootfs' is the first one:
+   die "foreach_mount() error\n" if $opt ne 'rootfs';
+
+   # $rootdir is not controlled by the container, so we can use it 
directly
+   $dfd = AT_FDCWD;
+   $ddir = $rootdir;
+   }
+
+   # NOTE: when we have openat2() available, even better: use that with fd-as-chroot mode
+   # and MOVE_MOUNT_T_EMPTY_PATH...
+   PVE::Tools::move_mount(fileno($mount_fd), '', $dfd, $ddir, &MOVE_MOUNT_F_EMPTY_PATH)
+   or die "failed to move '$opt' into container hierarchy: $!\n";
+
+   # From now on we mount inside our rootfs:
+   if (!$rootdir_fd) {
+   $rootdir_fd = $mount_fd;
+   }
+
+   push @$devices, $dev if $dev && $mountpoint->{quota};
+   };
+}
+
 PVE::LXC::Config->foreach_mountpoint($conf, $setup_mountpoint);
 
 my $lxc_setup = PVE::LXC::Setup->new($conf, $rootdir);
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 6/9] add mount stage directory helpers

2019-11-13 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
No changes.

 src/PVE/LXC.pm | 29 -
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 6bea0b7..77b1a43 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -11,7 +11,7 @@ use File::Path;
 use File::Spec;
 use Cwd qw();
 use Fcntl qw(O_RDONLY O_NOFOLLOW O_DIRECTORY);
-use Errno qw(ELOOP ENOTDIR EROFS ECONNREFUSED ENOSYS);
+use Errno qw(ELOOP ENOTDIR EROFS ECONNREFUSED ENOSYS EEXIST);
 use IO::Socket::UNIX;
 
 use PVE::Exception qw(raise_perm_exc);
@@ -1625,6 +1625,33 @@ sub __mountpoint_mount {
 die "unsupported storage";
 }
 
+# Create a directory in the mountpoint staging tempfs.
+sub get_staging_mount_path($) {
+my ($opt) = @_;
+
+my $target = get_staging_tempfs() . "/$opt";
+if (!mkdir($target) && $! != EEXIST) {
+   die "failed to create directory $target: $!\n";
+}
+
+return $target;
+}
+
+# Mount /run/pve/mountpoints as tmpfs
+sub get_staging_tempfs() {
+my $target = '/run/pve/mountpoints';
+mkdir("/run/pve");
+if (!mkdir($target)) {
+   return $target if $! == EEXIST;
+   die "failed to create directory $target: $!\n";
+}
+
+PVE::Tools::mount("none", $target, 'tmpfs', 0, "size=8k,mode=755")
+   or die "failed to mount $target as tmpfs: $!\n";
+
+return $target;
+}
+
 sub mkfs {
 my ($dev, $rootuid, $rootgid) = @_;
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 3/9] add open_pid_fd, open_lxc_pid, open_ppid helpers

2019-11-13 Thread Wolfgang Bumiller
Getting a pid and acting on it is always a race, so add
safer helpers for this.

Signed-off-by: Wolfgang Bumiller 
---
No changes.

 src/PVE/LXC.pm | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index ea54518..c51e59e 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -388,6 +388,44 @@ sub find_lxc_pid {
 return $pid;
 }
 
+sub open_pid_fd($) {
+my ($pid) = @_;
+sysopen(my $fd, "/proc/$pid", O_RDONLY | O_DIRECTORY)
+   or die "failed to open /proc/$pid pid fd\n";
+return $fd;
+}
+
+sub open_lxc_pid {
+my ($vmid) = @_;
+
+# Find the pid and open:
+my $pid = find_lxc_pid($vmid);
+my $fd = open_pid_fd($pid);
+
+# Verify:
+my $pid2 = find_lxc_pid($vmid);
+
+return () if $pid != $pid2;
+return ($pid, $fd);
+}
+
+sub open_ppid {
+my ($pid) = @_;
+
+# Find the parent pid via proc and open it:
+my $stat = PVE::ProcFSTools::read_proc_pid_stat($pid);
+my $ppid = $stat->{ppid} // die "failed to get parent pid\n";
+
+my $fd = open_pid_fd($ppid);
+
+# Verify:
+$stat = PVE::ProcFSTools::read_proc_pid_stat($pid);
+my $ppid2 = $stat->{ppid} // die "failed to get parent pid for 
verification\n";
+
+return () if $ppid != $ppid2;
+return ($ppid, $fd);
+}
+
 # Note: we cannot use Net:IP, because that only allows strict
 # CIDR networks
 sub parse_ipv4_cidr {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 2/9] implement "staged mountpoints"

2019-11-13 Thread Wolfgang Bumiller
Staging a mount point requires the new kernel mount API and
will mount the volume at a fixed path, then use open_tree()
to "pick it up" into a file descriptor.

For most of our volumes we wouldn't need the temp directory,
but some things cannot be handled with _only_ the new API
(like single-step read-only bind mounts). Additionally, the
'mount' command figures out file systems automatically and
has a bunch of helpers we'd need to reimplement, so instead,
go through our usual mount code and then pick up the result.

This can then be used to implement mount point hotplugging,
as with the open file descriptor we can move into the
container's namespace and issue a `move_mount()` to put the
mount point in place in the running container.

Signed-off-by: Wolfgang Bumiller 
---
Changes to v1:
  Use the new can_use_new_mount_api() helper

 src/PVE/LXC.pm | 44 
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index cdf6d64..ea54518 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -11,7 +11,7 @@ use File::Path;
 use File::Spec;
 use Cwd qw();
 use Fcntl qw(O_RDONLY O_NOFOLLOW O_DIRECTORY);
-use Errno qw(ELOOP ENOTDIR EROFS ECONNREFUSED);
+use Errno qw(ELOOP ENOTDIR EROFS ECONNREFUSED ENOSYS);
 use IO::Socket::UNIX;
 
 use PVE::Exception qw(raise_perm_exc);
@@ -19,14 +19,15 @@ use PVE::Storage;
 use PVE::SafeSyslog;
 use PVE::INotify;
 use PVE::JSONSchema qw(get_standard_option);
-use PVE::Tools qw($IPV6RE $IPV4RE dir_glob_foreach lock_file lock_file_full 
O_PATH);
+use PVE::Tools qw($IPV6RE $IPV4RE dir_glob_foreach lock_file lock_file_full 
O_PATH AT_FDCWD);
 use PVE::CpuSet;
 use PVE::Network;
 use PVE::AccessControl;
 use PVE::ProcFSTools;
-use PVE::Syscall;
+use PVE::Syscall qw(:fsmount);
 use PVE::LXC::Config;
 use PVE::GuestHelpers;
+use PVE::LXC::Tools;
 
 use Time::HiRes qw (gettimeofday);
 
@@ -1397,9 +1398,40 @@ sub __mount_prepare_rootdir {
 return ($rootdir, $mount_path, $mpfd, $parentfd, $last_dir);
 }
 
-# use $rootdir = undef to just return the corresponding mount path
+# use $rootdir = undef to just return the corresponding mount path,
 sub mountpoint_mount {
 my ($mountpoint, $rootdir, $storage_cfg, $snapname, $rootuid, $rootgid) = 
@_;
+return __mountpoint_mount($mountpoint, $rootdir, $storage_cfg, $snapname, 
$rootuid, $rootgid, undef);
+}
+
+sub mountpoint_stage {
+my ($mountpoint, $stage_dir, $storage_cfg, $snapname, $rootuid, $rootgid) 
= @_;
+my ($path, $loop, $dev) =
+   __mountpoint_mount($mountpoint, $stage_dir, $storage_cfg, $snapname, 
$rootuid, $rootgid, 1);
+
+if (!defined($path)) {
+   return undef if $! == ENOSYS;
+   die "failed to mount subvolume: $!\n";
+}
+
+my $err;
+my $fd = PVE::Tools::open_tree(&AT_FDCWD, $stage_dir, &OPEN_TREE_CLOEXEC | &OPEN_TREE_CLONE)
+   or die "open_tree() on mount point failed: $!\n";
+
+return wantarray ? ($path, $loop, $dev, $fd) : $fd;
+}
+
+# Use $stage_mount, $rootdir is treated as a temporary path to "stage" the 
file system. The user
+#   can then open a file descriptor to it which can be used with the 
`move_mount` syscall.
+#   Note that if the kernel does not support the new mount API, this will not 
perform any action
+#   and return `undef` with $! = ENOSYS.
+sub __mountpoint_mount {
+my ($mountpoint, $rootdir, $storage_cfg, $snapname, $rootuid, $rootgid, 
$stage_mount) = @_;
+
+if (defined($stage_mount) && !PVE::LXC::Tools::can_use_new_mount_api()) {
+   $! = ENOSYS;
+   return undef;
+}
 
 my $volid = $mountpoint->{volume};
 my $mount = $mountpoint->{mp};
@@ -1418,6 +1450,10 @@ sub mountpoint_mount {
($rootdir, $mount_path, $mpfd, $parentfd, $last_dir) =
__mount_prepare_rootdir($rootdir, $mount, $rootuid, $rootgid);
 }
+
+if (defined($stage_mount)) {
+   $mount_path = $rootdir;
+}
 
 my ($storage, $volname) = PVE::Storage::parse_volume_id($volid, 1);
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2 container 4/9] split open_namespace out of enter_namespace

2019-11-13 Thread Wolfgang Bumiller
Signed-off-by: Wolfgang Bumiller 
---
No changes.

 src/PVE/LXC.pm | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index c51e59e..c07a597 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -1033,12 +1033,18 @@ sub update_ipconfig {
 
 }
 
+my $open_namespace = sub {
+my ($vmid, $pid, $kind) = @_;
+sysopen my $fd, "/proc/$pid/ns/$kind", O_RDONLY
+   or die "failed to open $kind namespace of container $vmid: $!\n";
+return $fd;
+};
+
 my $enter_namespace = sub {
-my ($vmid, $pid, $which, $type) = @_;
-sysopen my $fd, "/proc/$pid/ns/$which", O_RDONLY
-   or die "failed to open $which namespace of container $vmid: $!\n";
+my ($vmid, $pid, $kind, $type) = @_;
+my $fd = $open_namespace->($vmid, $pid, $kind);
 PVE::Tools::setns(fileno($fd), $type)
-   or die "failed to enter $which namespace of container $vmid: $!\n";
+   or die "failed to enter $kind namespace of container $vmid: $!\n";
 close $fd;
 };
 
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH manager] fix #2462: ACMEAccount: make tos in get_tos optional

2019-11-13 Thread Dominik Csapak

we maybe want to backport this for pve5 also.. (should apply i guess)

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH manager] fix #2462: ACMEAccount: make tos in get_tos optional

2019-11-13 Thread Dominik Csapak
the code returns undef in case there is no 'tos', and the code
calling this api call handles a non-existing tos already, but
fails in that case because of the failing return value verification

Signed-off-by: Dominik Csapak 
---
 PVE/API2/ACMEAccount.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/API2/ACMEAccount.pm b/PVE/API2/ACMEAccount.pm
index 90620dea..29d5dcf3 100644
--- a/PVE/API2/ACMEAccount.pm
+++ b/PVE/API2/ACMEAccount.pm
@@ -322,6 +322,7 @@ __PACKAGE__->register_method ({
 },
 returns => {
type => 'string',
+   optional => 1,
description => 'ACME TermsOfService URL.',
 },
 code => sub {
-- 
2.20.1


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH container 1/6] add reboot helpers to be used by containers

2019-11-13 Thread Stefan Reiter

On 11/12/19 5:26 PM, Oguz Bektas wrote:

code for create_reboot_request and clear_reboot_request is from qemu, the only
difference is that we use /run/lxc/$vmid.reboot path instead of
/run/qemu-server.

there _are_ actually reboot triggers for lxc which are used by the
prestart hook and similar, however i think it's better if we can
differentiate between reboot requests from inside the container and from the
API. (inadvertently this also allows rebooting a container from inside without
applying pending changes)

since we don't have to clean up like in qemu, we can simply run vm_stop
and vm_start for a reboot operation.



That being so, why do we need a restart trigger file at all? You create 
it in vm_reboot and then delete it in vm_start or the error path... But 
whether it exists or not doesn't change the behaviour of any of these?



Signed-off-by: Oguz Bektas 
---
  src/PVE/LXC.pm | 41 +
  1 file changed, 41 insertions(+)

diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index cdf6d64..c77ee01 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -2013,6 +2013,47 @@ sub vm_stop {
  die "container did not stop\n";
  }
  
+sub create_reboot_request {

+my ($vmid) = @_;
+open(my $fh, '>', "/run/lxc/$vmid.reboot")
+   or die "failed to create reboot trigger file: $!\n";
+close($fh);
+}
+
+sub clear_reboot_request {
+my ($vmid) = @_;
+my $path = "/run/lxc/$vmid.reboot";
+my $res = 0;
+
+if (-e $path) {
+   $res = unlink($path);
+   die "could not remove reboot request for $vmid: $!\n" if !$res;
+}
+
+return $res;
+}
+
+sub vm_reboot {
+my ($vmid, $timeout, $skiplock) = @_;
+
+PVE::LXC::Config->lock_config($vmid, sub {
+   eval {
+   return if !check_running($vmid);
+
+   create_reboot_request($vmid);
+   vm_stop($vmid, 0, $timeout, 1); # kill if timeout exceeds
+
+   my $conf = PVE::LXC::Config->load_config($vmid);
+   vm_start($vmid, $conf);
+   };
+   if (my $err = $@) {
+   # avoid that the next normal shutdown will be confused for a reboot
+   clear_reboot_request($vmid);
+   die $err;
+   }
+});
+}
+
  sub run_unshared {
  my ($code) = @_;
  



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH qemu-server] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-13 Thread Thomas Lamprecht
On 11/12/19 11:03 AM, Fabian Ebner wrote:
> The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a
> value of 0 would trigger a hard stop for HA managed VMs. Like this the API
> description stays valid for all cases.
> 
> Signed-off-by: Fabian Ebner 
> ---
> 
> In vm_shutdown we'd like to pass along the timeout parameter to the HA stack,
> but the value 0 triggers a special behavior there, namely a hard stop.
> So either the description needs to be changed to mention this behavior or
> we just don't allow a timeout of 0 in vm_shutdown. Since a shutdown with
> timeout 0 doesn't really make sense anyways, I opted for the latter.
> It's the same situation for containers.
> 

timeout == 0 just means instant stop, I did not check this, but as
both CTs and VMs allow to pass 0 it really smells like it was done by
design.. ^^

Also, limiting API value ranges could be seen as API breakage, and that
should be avoided if possible..

What was the behaviour of passing this for a non-HA VM/CT?

>  PVE/API2/Qemu.pm | 7 ---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index c31dd1d..16a7a04 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -2111,7 +2111,7 @@ __PACKAGE__->register_method({
>  
>   print "Requesting HA stop for VM $vmid\n";
>  
> - my $cmd = ['ha-manager', 'set',  "vm:$vmid", '--state', 
> 'stopped'];
> + my $cmd = ['ha-manager', 'crm-command', 'stop',  "vm:$vmid", 
> '0'];
>   PVE::Tools::run_command($cmd);
>   return;
>   };
> @@ -2204,7 +2204,7 @@ __PACKAGE__->register_method({
>   timeout => {
>   description => "Wait maximal timeout seconds.",
>   type => 'integer',
> - minimum => 0,
> + minimum => 1,
>   optional => 1,
>   },
>   forceStop => {
> @@ -2267,12 +2267,13 @@ __PACKAGE__->register_method({
>  
>   if (PVE::HA::Config::vm_is_ha_managed($vmid) && $rpcenv->{type} ne 
> 'ha') {
>  
> + my $timeout = $param->{timeout} // 60;
>   my $hacmd = sub {
>   my $upid = shift;
>  
>   print "Requesting HA stop for VM $vmid\n";
>  
> - my $cmd = ['ha-manager', 'set', "vm:$vmid", '--state', 
> 'stopped'];
> + my $cmd = ['ha-manager', 'crm-command', 'stop', "vm:$vmid", 
> "$timeout"];
>   PVE::Tools::run_command($cmd);
>   return;
>   };
> 


___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH ct/common] mount point hotplugging & new mount api

2019-11-13 Thread Wolfgang Bumiller
On Tue, Nov 12, 2019 at 03:09:27PM +0100, Oguz Bektas wrote:
> hi,
> 
> built the latest git version of pve-common and pve-container with
> wolfgang's patches.
> 
> with running kernel: 5.0.21-4-pve
> and the latest pve-kernel-5.3
> 
> found a small issue while testing.
> 
> when one has an older kernel and tries to hotplug a mountpoint
> 
> ---
> Parameter verification failed. (400)
> 
> mp1: unable to hotplug mp1: Can't use an undefined value as a symbol
> reference at /usr/share/perl5/PVE/LXC.pm line 1670. 
> ---
> 
> and around this line is:
> 
> ---
> 
> PVE::Tools::move_mount(
> fileno($mount_fd),
> "",
> &AT_FDCWD,
> $mp->{mp},
> &MOVE_MOUNT_F_EMPTY_PATH, # line 1670
> );
> });
> }
> 
> ---
> 
> so i'm guessing since that syscall doesn't exist in older kernels,
> we get an undefined value.
> 
> it would be better to handle this error with something more
> user-friendly and verbose.
> 
> i'm still testing and will write here if i notice something else.

Right, forgot to add the kernel check there. The whole function
shouldn't be executed on older kernels.
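
For context, a minimal sketch of the guard being described, reusing the
can_use_new_mount_api() helper from patch 1/9 of this series (the exact placement
and wording in the eventual v3 may of course differ):

    sub mountpoint_hotplug {
        my ($vmid, $conf, $opt, $mp, $storage_cfg) = @_;

        # Bail out early on kernels without fsopen()/move_mount() instead of
        # failing later with "Can't use an undefined value as a symbol reference".
        die "mount point hotplugging requires a kernel with the new mount API\n"
            if !PVE::LXC::Tools::can_use_new_mount_api();

        # ... staging, setns() into the namespaces and move_mount() as in patch 9/9 ...
    }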

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel