[pve-devel] applied: [PATCH storage] cephfs: make is_mounted check less strict

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 9:42 AM, Dominik Csapak wrote:
> checking '$server:$subdir' is too strict to work in all circumstances,
> e.g., adding/removing a monitor would mean that it is no longer the same,
> and the same applies if one adds or removes the ports from the config
> 
> check only whether the subdir is the same and whether it is a cephfs;
> this way, it still returns true if someone changes the config
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/Storage/CephFSPlugin.pm | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
> 

applied, thanks!



[pve-devel] applied: [PATCH manager] ceph: init: only handle keyring if auth is cephx

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 9:45 AM, Dominik Csapak wrote:
> if auth is 'none' there is no client keyring, so do not generate it and
> do not write it into the config
> 
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/API2/Ceph.pm | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
applied, thanks!



Re: [pve-devel] applied: [PATCH installer] abort installation if memory is less than 1GB

2019-07-03 Thread Oguz Bektas
hi,

On Wed, Jul 03, 2019 at 10:22:06AM +0200, Thomas Lamprecht wrote:
> On 7/2/19 3:17 PM, Oguz Bektas wrote:
> > show a message on screen about memory requirement, then die and abort
> > the installation.
> > 
> > Signed-off-by: Oguz Bektas 
> > ---
> >  proxinstall | 5 +
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/proxinstall b/proxinstall
> > index e6a29b3..f8dd1d6 100755
> > --- a/proxinstall
> > +++ b/proxinstall
> > @@ -3234,6 +3234,11 @@ sub create_intro_view {
> >  
> >  cleanup_view();
> >  
> > +if (int($total_memory/1024) < 1) {
> 
> not that it matters much, but:
> > int($total_memory) < 1024
> is:
> * easier to read
> * most of the time faster (simple compare vs. division + compare)
makes sense, alright.
> 
> > +   display_error("you need at least 1GB memory to install Proxmox\n");
> > +   die "not enough memory";
> 
> I do not want to die here; if one wants to continue, why not? (E.g.,
> the limit is not exactly 1024 but rather somewhere between 850-900
> MB, and the error then is IMO not reasonable either: proxinstall +
> gtk-webkit + base system need ~300 MB memory, and that the page
> cache flushes result in almost OOM-like errors seems wrong.)
> 
> I mean, the check is still OK, as this is our minimum system requirement,
> and it normally does not make sense to install PVE/PMG on anything with
> less memory, except for testing :)
the reason i added the die is that it'll fail at the last step during
unsquashfs if there's not enough memory (gets OOM-killed). this is kind
of annoying, and you end up with an unbootable system on top, so it's a
waste of time. why wouldn't we prevent that?
> 
> With follow-ups minding the above: applied
> 
> > +}
> > +
> >  if ($setup->{product} eq 'pve') {
> > eval {
> > my $cpuinfo = file_get_contents('/proc/cpuinfo');
> > 
> 



[pve-devel] applied: [PATCH installer] align lines in summary screen

2019-07-03 Thread Thomas Lamprecht
On 7/2/19 12:03 PM, Oguz Bektas wrote:
> Signed-off-by: Oguz Bektas 
> ---
>  html-common/ack_template.htm | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/html-common/ack_template.htm b/html-common/ack_template.htm
> index 2f7cf32..aefe855 100644
> --- a/html-common/ack_template.htm
> +++ b/html-common/ack_template.htm
> @@ -31,9 +31,9 @@
>  
>
>
> -Please verify the displayed informations.
> -Afterwards press the Install button. The installer will
> -begin to partition your drive and extract the required files.
> +Please verify the displayed informations.
> +Afterwards press the Install button.
> +The installer will begin to partition your drive and extract the required files.
>
>  
>  
> 

applied, with a slight reword follow-up we talked about off-list, thanks!



[pve-devel] applied: [PATCH installer] abort installation if memory is less than 1GB

2019-07-03 Thread Thomas Lamprecht
On 7/2/19 3:17 PM, Oguz Bektas wrote:
> show a message on screen about memory requirement, then die and abort
> the installation.
> 
> Signed-off-by: Oguz Bektas 
> ---
>  proxinstall | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/proxinstall b/proxinstall
> index e6a29b3..f8dd1d6 100755
> --- a/proxinstall
> +++ b/proxinstall
> @@ -3234,6 +3234,11 @@ sub create_intro_view {
>  
>  cleanup_view();
>  
> +if (int($total_memory/1024) < 1) {

not that it matters much, but:
> int($total_memory) < 1024
is:
* easier to read
* most of the time faster (simple compare vs. division + compare)

> + display_error("you need at least 1GB memory to install Proxmox\n");
> + die "not enough memory";

I do not want to die here; if one wants to continue, why not? (E.g.,
the limit is not exactly 1024 but rather somewhere between 850-900
MB, and the error then is IMO not reasonable either: proxinstall +
gtk-webkit + base system need ~300 MB memory, and that the page
cache flushes result in almost OOM-like errors seems wrong.)

I mean, the check is still OK, as this is our minimum system requirement,
and it normally does not make sense to install PVE/PMG on anything with
less memory, except for testing :)

With follow-ups minding the above: applied

> +}
> +
>  if ($setup->{product} eq 'pve') {
>   eval {
>   my $cpuinfo = file_get_contents('/proc/cpuinfo');
> 
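For illustration, a minimal sketch of the warn-but-don't-die variant discussed above, using the simpler comparison; $total_memory and display_error() are taken from the quoted patch context, and the exact wording of the applied follow-up may well differ:

    # hypothetical follow-up sketch: warn, but let the user continue
    if (int($total_memory) < 1024) {
        display_error("Less than 1 GiB of memory detected - the installation may "
            . "run out of memory and fail, proceeding is not recommended.\n");
    }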




[pve-devel] applied: [PATCH installer 0/2] small improvements for installs on zfs

2019-07-03 Thread Thomas Lamprecht
On 7/2/19 4:44 PM, Stoiko Ivanov wrote:
> while testing the latest changes to the installer a few low-hanging fruit were
> noticed:
> 
> * sometimes installations on zfs failed to boot when the disks contained labels
>   from old zpools (e.g. a previous installation with raidZ2 and afterwards a
>   raid0) since the zpool import in initrd saw two pools called rpool.
> * with the recent changes to the way devtmpfs is handled it's now possible
>   to create the zpool with the /dev/disk/by-id paths (as recommended)
> 
> Since none of the patches are fundamental to the working of the installer I
> don't see a hurry to get them in (but did not want to forget sending them next month ;)
> 

applied, the labelclear was clearly (heh) missing.



Re: [pve-devel] applied: [PATCH installer] abort installation if memory is less than 1GB

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 12:06 PM, Oguz Bektas wrote:
> On Wed, Jul 03, 2019 at 11:49:00AM +0200, Thomas Lamprecht wrote:
>> On 7/3/19 11:50 AM, Oguz Bektas wrote:
>>>> I do not want to die here; if one wants to continue, why not? (E.g.,
>>>> the limit is not exactly 1024 but rather somewhere between 850-900
>>>> MB, and the error then is IMO not reasonable either: proxinstall +
>>>> gtk-webkit + base system need ~300 MB memory, and that the page
>>>> cache flushes result in almost OOM-like errors seems wrong.)
>>>>
>>>> I mean, the check is still OK, as this is our minimum system requirement,
>>>> and it normally does not make sense to install PVE/PMG on anything with
>>>> less memory, except for testing :)
>>> the reason i added the die is that it'll fail at the last step during
>>> unsquashfs if there's not enough memory (gets OOM-killed). this is kind
>>> of annoying, and you end up with an unbootable system on top, so it's a
>>> waste of time. why wouldn't we prevent that?
>>
>> as I already stated: 1024 is not the hard limit, less works, and we may even
>> be able to work with much less if I have some time to look at this in the
>> future. It'll always be good to remind people about less memory than the
>> minimal system requirements state, though. We just warn here; if it still
>> works, good, if not, the user was warned. I do not see the problem?
> it's not a "problem" per se, it's just that i'd like to avoid wasting
> time doing an entire install just to have it get oom-killed in the last
> step which is really frustrating... i guess a warning is fine for now
> until we can improve the memory usage or find a better solution in the
> future.
> 

again, what's the issue? you get the warning, and if you read it you are reminded
that you assigned too little memory to a _test vm_ (as this normally _only_
happens there). _nobody_ forces you to continue the installation there; if
you do not want to waste time, then stop right there and re-start with more
memory. IMO you do not win anything with a die here, it just restricts people who
explicitly want to test something. 99.999% of all users won't run into this
on production systems, it'll always just be on testing systems.



[pve-devel] applied: [PATCH storage] fix missing osd info for osd 0

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 8:43 AM, Dominik Csapak wrote:
> 0 is falsy, so we have to check for definedness;
> also adapt the tests so we test for this
> 
> Signed-off-by: Dominik Csapak 
> ---

applied, thanks!



Re: [pve-devel] applied: [PATCH installer] abort installation if memory is less than 1GB

2019-07-03 Thread Oguz Bektas
On Wed, Jul 03, 2019 at 11:49:00AM +0200, Thomas Lamprecht wrote:
> On 7/3/19 11:50 AM, Oguz Bektas wrote:
> >> I do not want to die here; if one wants to continue, why not? (E.g.,
> >> the limit is not exactly 1024 but rather somewhere between 850-900
> >> MB, and the error then is IMO not reasonable either: proxinstall +
> >> gtk-webkit + base system need ~300 MB memory, and that the page
> >> cache flushes result in almost OOM-like errors seems wrong.)
> >>
> >> I mean, the check is still OK, as this is our minimum system requirement,
> >> and it normally does not make sense to install PVE/PMG on anything with
> >> less memory, except for testing :)
> > the reason i added the die is that it'll fail at the last step during
> > unsquashfs if there's not enough memory (gets OOM-killed). this is kind
> > of annoying, and you end up with an unbootable system on top, so it's a
> > waste of time. why wouldn't we prevent that?
> 
> as I already stated: 1024 is not the hard limit, less works, and we may even
> be able to work with much less if I have some time to look at this in the
> future. It'll always be good to remind people about less memory than the
> minimal system requirements state, though. We just warn here; if it still
> works, good, if not, the user was warned. I do not see the problem?
it's not a "problem" per se, it's just that i'd like to avoid wasting
time doing an entire install just to have it get oom-killed in the last
step which is really frustrating... i guess a warning is fine for now
until we can improve the memory usage or find a better solution in the
future.



[pve-devel] [PATCH storage] cephfs: make is_mounted check less strict

2019-07-03 Thread Dominik Csapak
checking '$server:$subdir' is too strict to work in all circumstances,
e.g., adding/removing a monitor would mean that it is no longer the same,
and the same applies if one adds or removes the ports from the config

check only whether the subdir is the same and whether it is a cephfs;
this way, it still returns true if someone changes the config

Signed-off-by: Dominik Csapak 
---
 PVE/Storage/CephFSPlugin.pm | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
index 53491ed..c18f8c9 100644
--- a/PVE/Storage/CephFSPlugin.pm
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -20,16 +20,14 @@ sub cephfs_is_mounted {
 
 my $cmd_option = PVE::CephConfig::ceph_connect_option($scfg, $storeid);
 my $configfile = $cmd_option->{ceph_conf};
-my $server = $cmd_option->{mon_host} // PVE::CephConfig::get_monaddr_list($configfile);
 
 my $subdir = $scfg->{subdir} // '/';
 my $mountpoint = $scfg->{path};
-my $source = "$server:$subdir";
 
 $mountdata = PVE::ProcFSTools::parse_proc_mounts() if !$mountdata;
 return $mountpoint if grep {
$_->[2] =~ m#^ceph|fuse\.ceph-fuse# &&
-   $_->[0] =~ m#^\Q$source\E|ceph-fuse$# &&
+   $_->[0] =~ m#\Q:$subdir\E$|^ceph-fuse$# &&
$_->[1] eq $mountpoint
 } @$mountdata;
 
-- 
2.20.1
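To illustrate the relaxed check, here is a small standalone Perl sketch (not part of the patch; the /proc/mounts-style entries are made up) showing how the new pattern matches both a kernel and a FUSE CephFS mount regardless of the monitor list in the source field:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $subdir = '/';
    # fabricated /proc/mounts style entries: [source, mountpoint, fstype]
    my @mounts = (
        [ '10.0.0.1,10.0.0.2,10.0.0.3:/', '/mnt/pve/cephfs', 'ceph' ],
        [ 'ceph-fuse',                    '/mnt/pve/cephfs', 'fuse.ceph-fuse' ],
    );

    for my $m (@mounts) {
        my $is_cephfs = $m->[2] =~ m#^ceph|fuse\.ceph-fuse#
            && $m->[0] =~ m#\Q:$subdir\E$|^ceph-fuse$#;
        print "$m->[0] on $m->[1]: ", $is_cephfs ? "matches\n" : "no match\n";
    }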




Re: [pve-devel] applied: [PATCH installer] abort installation if memory is less than 1GB

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 11:50 AM, Oguz Bektas wrote:
>> I do not want to die here; if one wants to continue, why not? (E.g.,
>> the limit is not exactly 1024 but rather somewhere between 850-900
>> MB, and the error then is IMO not reasonable either: proxinstall +
>> gtk-webkit + base system need ~300 MB memory, and that the page
>> cache flushes result in almost OOM-like errors seems wrong.)
>>
>> I mean, the check is still OK, as this is our minimum system requirement,
>> and it normally does not make sense to install PVE/PMG on anything with
>> less memory, except for testing :)
> the reason i added the die is that it'll fail at the last step during
> unsquashfs if there's not enough memory (gets OOM-killed). this is kind
> of annoying, and you end up with an unbootable system on top, so it's a
> waste of time. why wouldn't we prevent that?

as I already stated: 1024 is not the hard limit, less works, and we may even
be able to work with much less if I have some time to look at this in the
future. It'll always be good to remind people about less memory than the
minimal system requirements state, though. We just warn here; if it still
works, good, if not, the user was warned. I do not see the problem?



[pve-devel] [PATCH storage] fix missing osd info for osd 0

2019-07-03 Thread Dominik Csapak
0 is falsy, so we have to check for definedness;
also adapt the tests so we test for this

Signed-off-by: Dominik Csapak 
---
 PVE/Diskmanage.pm | 2 +-
 test/disk_tests/usages/disklist_expected.json | 2 +-
 test/disk_tests/usages/lvs| 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 41158f4..f446269 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -602,7 +602,7 @@ sub get_disks {
$journal_count += $ceph_volume->{journal} // 0;
$db_count += $ceph_volume->{db} // 0;
$wal_count += $ceph_volume->{wal} // 0;
-   if ($ceph_volume->{osdid}) {
+   if (defined($ceph_volume->{osdid})) {
$osdid = $ceph_volume->{osdid};
$bluestore = 1 if $ceph_volume->{bluestore};
}
diff --git a/test/disk_tests/usages/disklist_expected.json b/test/disk_tests/usages/disklist_expected.json
index 4f9f5cc..9829339 100644
--- a/test/disk_tests/usages/disklist_expected.json
+++ b/test/disk_tests/usages/disklist_expected.json
@@ -151,6 +151,6 @@
"rpm" : 0,
"bluestore": 0,
"type" : "hdd",
-   "osdid" : 2
+   "osdid" : 0
 }
 }
diff --git a/test/disk_tests/usages/lvs b/test/disk_tests/usages/lvs
index b3fad43..393dcd3 100644
--- a/test/disk_tests/usages/lvs
+++ b/test/disk_tests/usages/lvs
@@ -1,4 +1,4 @@
 /dev/sdg(0);osd-block-01234;ceph.osd_id=1
 /dev/sdh(0);osd-journal-01234;ceph.osd_id=1
 /dev/sdi(0);osd-db-01234;ceph.osd_id=1
-/dev/sdj(0);osd-data-01234;ceph.osd_id=2
+/dev/sdj(0);osd-data-01234;ceph.osd_id=0
-- 
2.20.1
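The underlying Perl pitfall can be shown in isolation (illustrative sketch, not part of the patch): an OSD id of 0 is perfectly valid data, but it evaluates to false in boolean context, so only a defined() check keeps it:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $ceph_volume = { osdid => 0 };    # an OSD with id 0

    print "truthiness check: ", ($ceph_volume->{osdid} ? "seen" : "missed"), "\n";           # missed
    print "defined() check:  ", (defined($ceph_volume->{osdid}) ? "seen" : "missed"), "\n";  # seen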




[pve-devel] [PATCH manager] ceph: init: only handle keyring if auth is cephx

2019-07-03 Thread Dominik Csapak
if auth is 'none' there is no client keyring, so do not generate it and
do not write it into the config

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph.pm | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 1ce74378..1036e24c 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -331,7 +331,9 @@ __PACKAGE__->register_method ({
#'osd pool default pgp num' => $pg_num,
}
 
-   $cfg->{client}->{keyring} = '/etc/pve/priv/$cluster.$name.keyring';
+   if ($auth eq 'cephx') {
+	    $cfg->{client}->{keyring} = '/etc/pve/priv/$cluster.$name.keyring';
+   }
 
if ($param->{pg_bits}) {
$cfg->{global}->{'osd pg bits'} = $param->{pg_bits};
@@ -349,7 +351,9 @@ __PACKAGE__->register_method ({
 
cfs_write_file('ceph.conf', $cfg);
 
-   PVE::Ceph::Tools::get_or_create_admin_keyring();
+   if ($auth eq 'cephx') {
+   PVE::Ceph::Tools::get_or_create_admin_keyring();
+   }
PVE::Ceph::Tools::setup_pve_symlinks();
});
 
-- 
2.20.1




[pve-devel] [PATCH manager] api: ceph: automatically create manager after the first monitor

2019-07-03 Thread Tim Marx
Signed-off-by: Tim Marx 
---
 PVE/API2/Ceph/MON.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index e8963264..4090612d 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -16,6 +16,7 @@ use PVE::RESTHandler;
 use PVE::RPCEnvironment;
 use PVE::Tools qw(run_command file_set_contents);
 use PVE::CephConfig;
+use PVE::API2::Ceph::MGR;
 
 use base qw(PVE::RESTHandler);
 
@@ -282,6 +283,14 @@ __PACKAGE__->register_method ({
PVE::Ceph::Services::broadcast_ceph_services();
});
die $@ if $@;
+   # automatically create manager after the first monitor is created
+   if (scalar(keys %$monhash) eq 0) {
+
+   PVE::API2::Ceph::MGR->createmgr({
+   node => $param->{node},
+   id => $param->{node}
+   })
+   }
};
 
    return $rpcenv->fork_worker('cephcreatemon', $monsection, $authuser, $worker);
-- 
2.20.1



[pve-devel] [PATCH ceph] change ceph mgr plugin dependencies

2019-07-03 Thread Dominik Csapak
only suggest instead of recommend; remove the scipy, numpy and sklearn
dependencies (they are not used)

Signed-off-by: Dominik Csapak 
---
 ...mgr-dependencies-to-more-sane-values.patch | 46 +++
 patches/series|  1 +
 2 files changed, 47 insertions(+)
 create mode 100644 patches/0011-change-ceph-mgr-dependencies-to-more-sane-values.patch

diff --git a/patches/0011-change-ceph-mgr-dependencies-to-more-sane-values.patch b/patches/0011-change-ceph-mgr-dependencies-to-more-sane-values.patch
new file mode 100644
index 0..4974357cc
--- /dev/null
+++ b/patches/0011-change-ceph-mgr-dependencies-to-more-sane-values.patch
@@ -0,0 +1,46 @@
+From  Mon Sep 17 00:00:00 2001
+From: Dominik Csapak 
+Date: Wed, 3 Jul 2019 13:44:31 +0200
+Subject: [PATCH] change ceph mgr dependencies to more sane values
+
+Only suggest the mgr plugins and remove numpy,scipy and sklearn from
+the diskprediction_local dependencies, as they are not using them
+
+Signed-off-by: Dominik Csapak 
+---
+ debian/control | 15 ++-
+ 1 file changed, 6 insertions(+), 9 deletions(-)
+
+diff --git a/debian/control b/debian/control
+index e7a01c6ff8..6fd8060c57 100644
+--- a/debian/control
++++ b/debian/control
+@@ -188,12 +188,12 @@ Depends: ceph-base (= ${binary:Version}),
+  ${misc:Depends},
+  ${python:Depends},
+  ${shlibs:Depends},
+-Recommends: ceph-mgr-dashboard,
+-ceph-mgr-diskprediction-local,
+-ceph-mgr-diskprediction-cloud,
+-ceph-mgr-rook,
+-ceph-mgr-ssh
+-Suggests: python-influxdb
++Suggests: ceph-mgr-dashboard,
++  ceph-mgr-diskprediction-local,
++  ceph-mgr-diskprediction-cloud,
++  ceph-mgr-rook,
++  ceph-mgr-ssh,
++  python-influxdb
+ Replaces: ceph (<< 0.93-417),
+ Breaks: ceph (<< 0.93-417),
+ Description: manager for the ceph distributed storage system
+@@ -230,9 +230,6 @@ Description: dashboard plugin for ceph-mgr
+ Package: ceph-mgr-diskprediction-local
+ Architecture: all
+ Depends: ceph-mgr (= ${binary:Version}),
+- python-numpy,
+- python-scipy,
+- python-sklearn,
+  ${misc:Depends},
+  ${python:Depends},
+  ${shlibs:Depends},
diff --git a/patches/series b/patches/series
index 91dfc6138..3a4225433 100644
--- a/patches/series
+++ b/patches/series
@@ -5,3 +5,4 @@
 0008-ceph-volume-lvm.zap-fix-cleanup-for-db-partitions.patch
 0009-remove-legacy-pve-ceph-osd-activation-script-in-post.patch
 0010-remove-legacy-init.d-ceph-script.patch
+0011-change-ceph-mgr-dependencies-to-more-sane-values.patch
-- 
2.20.1




[pve-devel] applied-series: [PATCH firewall 1/3] Check if corosync.conf exists before calling parser

2019-07-03 Thread Fabian Grünbichler
thanks, applied all three with the following follow-up(s):

diff --git a/src/PVE/Service/pve_firewall.pm b/src/PVE/Service/pve_firewall.pm
index 39ceb39..d78bcb1 100755
--- a/src/PVE/Service/pve_firewall.pm
+++ b/src/PVE/Service/pve_firewall.pm
@@ -272,14 +272,14 @@ __PACKAGE__->register_method ({
print "\naccepting corosync traffic from/to:\n";
 
	PVE::Corosync::for_all_corosync_addresses($corosync_conf, undef, sub {
-   my ($node_name, $node_ip, $node_ipversion, $key) = @_;
+   my ($curr_node_name, $curr_node_ip, undef, $key) = @_;
 
-   if (!$corosync_node_found) {
-   $corosync_node_found = 1;
-   }
+   return if $curr_node_name eq $nodename;
+
+   $corosync_node_found = 1;
 
$key =~ m/(?:ring|link)(\d+)_addr/;
-   print " - $node_name: $node_ip (link: $1)\n";
+   print " - $curr_node_name: $curr_node_ip (link: $1)\n";
});
 
if (!$corosync_node_found) {

On Wed, Jul 03, 2019 at 02:27:33PM +0200, Stefan Reiter wrote:
> Calling cfs_read_file with no corosync.conf (i.e. on a standalone node)
> returns {} instead of undef. The previous patches assume undef for this
> scenario. To avoid confusing checks all over the place, simply leave the
> config as undef if no file exists.
> 
> Signed-off-by: Stefan Reiter 
> ---
>  src/PVE/Firewall.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
> index 16d7301..96c45e9 100644
> --- a/src/PVE/Firewall.pm
> +++ b/src/PVE/Firewall.pm
> @@ -3519,7 +3519,8 @@ sub compile {
>   $hostfw_conf = load_hostfw_conf($cluster_conf, undef) if !$hostfw_conf;
>  
>   # cfs_update is handled by daemon or API
> - $corosync_conf = PVE::Cluster::cfs_read_file("corosync.conf") if 
> !$corosync_conf;
> + $corosync_conf = PVE::Cluster::cfs_read_file("corosync.conf")
> + if !defined($corosync_conf) && PVE::Corosync::check_conf_exists(1);
>  
>   $vmdata = read_local_vm_config();
>   $vmfw_configs = read_vm_firewall_configs($cluster_conf, $vmdata, undef);
> -- 
> 2.20.1
> 
> 


[pve-devel] [PATCH manager 4/4] 5to6: invert check for noout for nautilus

2019-07-03 Thread Fabian Grünbichler
mainly because it looks strange to get a warning after the upgrade is
finished and noout has been removed again

Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve5to6.pm | 22 +++---
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index 23280b97..6c20ad9c 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -349,8 +349,8 @@ sub check_ceph {
 log_info("getting Ceph status/health information..");
 my $ceph_status = eval { PVE::API2::Ceph->status({ node => $nodename }); };
 my $osd_flags = eval { PVE::API2::Ceph->get_flags({ node => $nodename }); };
-my $noout;
-$noout = $osd_flags =~ m/noout/ if $osd_flags;
+my $noout_wanted = 1;
+my $noout = $osd_flags =~ m/noout/ if $osd_flags;
 
 if (!$ceph_status || !$ceph_status->{health}) {
log_fail("unable to determine Ceph status!");
@@ -378,11 +378,6 @@ sub check_ceph {
} else {
log_fail("missing 'recovery_deletes' and/or 'purged_snapdirs' 
flag, scrub of all PGs required before upgrading to Nautilus!");
}
-   if ($noout) {
-	log_pass("noout flag set to prevent rebalancing during cluster-wide upgrades.");
-	}  else {
-	log_warn("noout flag not set - recommended to prevent rebalancing during upgrades.");
-   }
}
 };
 
@@ -418,11 +413,24 @@ sub check_ceph {
log_warn("unable to determine overall Ceph daemon versions!");
} elsif (keys %$overall_versions == 1) {
	log_pass("single running overall version detected for all Ceph daemon types.");
+   if ((keys %$overall_versions)[0] =~ /^ceph version 14\./) {
+   $noout_wanted = 0;
+   }
} else {
	log_warn("overall version mismatch detected, check 'ceph versions' output for details!");
}
 }
 
+if ($noout) {
+   if ($noout_wanted) {
+	log_pass("noout flag set to prevent rebalancing during cluster-wide upgrades.");
+   } else {
+   log_warn("noout flag set, Ceph cluster upgrade seems finished.");
+   }
+} elsif ($noout_wanted) {
+   log_warn("noout flag not set - recommended to prevent rebalancing during upgrades.");
+}
+
 my $local_ceph_ver = PVE::Ceph::Tools::get_local_version(1);
 if (defined($local_ceph_ver)) {
if ($local_ceph_ver == 14) {
-- 
2.20.1




[pve-devel] [PATCH manager 2/4] 5to6: improve some log messages

2019-07-03 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve5to6.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index e9373288..215f7430 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -467,12 +467,13 @@ sub check_misc {
log_pass("no running guest detected.")
 }
 
-log_info("Checking if we the local nodes address is resolvable and configured..");
+log_info("Checking if the local node's hostname is resolvable..");
 my $host = PVE::INotify::nodename();
 my $local_ip = eval { PVE::Network::get_ip_from_hostname($host) };
 if ($@) {
log_warn("Failed to resolve hostname '$host' to IP - $@");
 } else {
+   log_info("Checking if resolved IP is configured on local node..");
	my $cidr = Net::IP::ip_is_ipv6($local_ip) ? "$local_ip/128" : "$local_ip/32";
my $configured_ips = PVE::Network::get_local_ip_from_cidr($cidr);
my $ip_count = scalar(@$configured_ips);
@@ -482,7 +483,7 @@ sub check_misc {
} elsif ($ip_count > 1) {
	log_warn("Resolved node IP '$local_ip' active on multiple ($ip_count) interfaces!");
	} else {
-	log_pass("Could resolved local nodename '$host' to active IP '$local_ip'");
+	log_pass("Resolved node IP '$local_ip' configured and active on single interface.");
}
 }
 
-- 
2.20.1




[pve-devel] [PATCH manager 3/4] 5to6: reuse $nodename

2019-07-03 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
 PVE/CLI/pve5to6.pm | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index 215f7430..23280b97 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -468,10 +468,9 @@ sub check_misc {
 }
 
 log_info("Checking if the local node's hostname is resolvable..");
-my $host = PVE::INotify::nodename();
-my $local_ip = eval { PVE::Network::get_ip_from_hostname($host) };
+my $local_ip = eval { PVE::Network::get_ip_from_hostname($nodename) };
 if ($@) {
-   log_warn("Failed to resolve hostname '$host' to IP - $@");
+   log_warn("Failed to resolve hostname '$nodename' to IP - $@");
 } else {
log_info("Checking if resolved IP is configured on local node..");
	my $cidr = Net::IP::ip_is_ipv6($local_ip) ? "$local_ip/128" : "$local_ip/32";
@@ -479,7 +478,7 @@ sub check_misc {
my $ip_count = scalar(@$configured_ips);
 
if ($ip_count <= 0) {
-	log_fail("Resolved node IP '$local_ip' not configured or active for '$host'");
+	log_fail("Resolved node IP '$local_ip' not configured or active for '$nodename'");
	} elsif ($ip_count > 1) {
	log_warn("Resolved node IP '$local_ip' active on multiple ($ip_count) interfaces!");
} else {
-- 
2.20.1




[pve-devel] [PATCH manager 1/4] 5to6: add check for configured Sheepdog storages

2019-07-03 Thread Fabian Grünbichler
Signed-off-by: Fabian Grünbichler 
---
this is mainly relevant for stable-5, since master already drops them from the storage.cfg

 PVE/CLI/pve5to6.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
index c167ebca..e9373288 100644
--- a/PVE/CLI/pve5to6.pm
+++ b/PVE/CLI/pve5to6.pm
@@ -234,7 +234,9 @@ sub check_storage_health {
 foreach my $storeid (keys %$info) {
my $d = $info->{$storeid};
if ($d->{enabled}) {
-   if ($d->{active}) {
+   if ($d->{type} eq 'sheepdog') {
+	    log_fail("storage '$storeid' of type 'sheepdog' is enabled - Sheepdog is no longer supported in PVE 6.x!")
+   } elsif ($d->{active}) {
log_pass("storage '$storeid' enabled and active.");
} else {
log_warn("storage '$storeid' enabled but not active!");
-- 
2.20.1




Re: [pve-devel] [PATCH manager 1/4] 5to6: add check for configured Sheepdog storages

2019-07-03 Thread Fabian Grünbichler
On Wed, Jul 03, 2019 at 03:41:54PM +0200, Thomas Lamprecht wrote:
> On 7/3/19 3:28 PM, Fabian Grünbichler wrote:
> > Signed-off-by: Fabian Grünbichler 
> > ---
> > this is mainly relevant for stable-5, since master already drops them from 
> > the storage.cfg
> > 
> >  PVE/CLI/pve5to6.pm | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
> > index c167ebca..e9373288 100644
> > --- a/PVE/CLI/pve5to6.pm
> > +++ b/PVE/CLI/pve5to6.pm
> > @@ -234,7 +234,9 @@ sub check_storage_health {
> >  foreach my $storeid (keys %$info) {
> > my $d = $info->{$storeid};
> > if ($d->{enabled}) {
> > -   if ($d->{active}) {
> > +   if ($d->{type} eq 'sheepdog') {
> > +   log_fail("storage '$storeid' of type 'sheepdog' is enabled - 
> > Sheepdog is no longer supported in PVE 6.x!")
> 
> it was not really supported with 5 either, so:
> 
> "- experimental sheepdog support dropped in PVE 6"?

also okay for me :)

> > +   } elsif ($d->{active}) {
> > log_pass("storage '$storeid' enabled and active.");
> > } else {
> > log_warn("storage '$storeid' enabled but not active!");
> > 
> 
> 



[pve-devel] applied: [PATCH manager] api: ceph: automatically create manager after the first monitor

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 2:40 PM, Tim Marx wrote:
> Signed-off-by: Tim Marx 
> ---
>  PVE/API2/Ceph/MON.pm | 9 +
>  1 file changed, 9 insertions(+)
> 
> diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
> index e8963264..4090612d 100644
> --- a/PVE/API2/Ceph/MON.pm
> +++ b/PVE/API2/Ceph/MON.pm
> @@ -16,6 +16,7 @@ use PVE::RESTHandler;
>  use PVE::RPCEnvironment;
>  use PVE::Tools qw(run_command file_set_contents);
>  use PVE::CephConfig;
> +use PVE::API2::Ceph::MGR;
>  
>  use base qw(PVE::RESTHandler);
>  
> @@ -282,6 +283,14 @@ __PACKAGE__->register_method ({
>   PVE::Ceph::Services::broadcast_ceph_services();
>   });
>   die $@ if $@;
> + # automatically create manager after the first monitor is created
> + if (scalar(keys %$monhash) eq 0) {
> +
> + PVE::API2::Ceph::MGR->createmgr({
> + node => $param->{node},
> + id => $param->{node}
> + })
> + }
>   };
>  
>   return $rpcenv->fork_worker('cephcreatemon', $monsection, $authuser, 
> $worker);
> 

applied, thanks!



[pve-devel] [PATCH firewall 3/3] Formatting fixes (trailing whitespace and indentation)

2019-07-03 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---

There were so many in this file it actually bothered me :)


 src/PVE/Service/pve_firewall.pm | 42 -
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/src/PVE/Service/pve_firewall.pm b/src/PVE/Service/pve_firewall.pm
index 3c1254b..39ceb39 100755
--- a/src/PVE/Service/pve_firewall.pm
+++ b/src/PVE/Service/pve_firewall.pm
@@ -30,14 +30,14 @@ my $nodename = PVE::INotify::nodename();
 sub init {
 
 PVE::Cluster::cfs_update();
-
+
 PVE::Firewall::init();
 }
 
 my $restart_request = 0;
 my $next_update = 0;
 
-my $cycle = 0; 
+my $cycle = 0;
 my $updatetime = 10;
 
 my $initial_memory_usage;
@@ -49,7 +49,7 @@ sub shutdown {
 
 # wait for children
 1 while (waitpid(-1, POSIX::WNOHANG()) > 0);
-   
+
 syslog('info' , "clear firewall rules");
 
 eval { PVE::Firewall::remove_pvefw_chains(); };
@@ -79,7 +79,7 @@ sub run {
PVE::Firewall::update();
};
my $err = $@;
-   
+
if ($err) {
syslog('err', "status update error: $err");
}
@@ -93,7 +93,7 @@ sub run {
$cycle++;
 
my $mem = PVE::ProcFSTools::read_memory_usage();
-   
+
if (!defined($initial_memory_usage) || ($cycle < 10)) {
$initial_memory_usage = $mem->{resident};
} else {
@@ -106,10 +106,10 @@ sub run {
}
 
my $wcount = 0;
-   while ((time() < $next_update) && 
+   while ((time() < $next_update) &&
   ($wcount < $updatetime) && # protect against time wrap
   !$restart_request) { $wcount++; sleep (1); };
-   
+
$self->restart_daemon() if $restart_request;
 }
 }
@@ -126,10 +126,10 @@ __PACKAGE__->register_method ({
 method => 'GET',
 description => "Get firewall status.",
 parameters => {
-   additionalProperties => 0,
+   additionalProperties => 0,
properties => {},
 },
-returns => { 
+returns => {
type => 'object',
additionalProperties => 0,
properties => {
@@ -165,7 +165,7 @@ __PACKAGE__->register_method ({
$res->{enable} = $cluster_conf->{options}->{enable} ? 1 : 0;
 
if ($status eq 'running') {
-   
+
my ($ruleset, $ipset_ruleset, $rulesetv6, $ebtables_ruleset) = 
PVE::Firewall::compile($cluster_conf, undef, undef);
 
PVE::Firewall::set_verbose(0); # do not show iptables details
@@ -189,7 +189,7 @@ __PACKAGE__->register_method ({
 method => 'GET',
 description => "Compile and print firewall rules. This is useful for 
testing.",
 parameters => {
-   additionalProperties => 0,
+   additionalProperties => 0,
properties => {},
 },
 returns => { type => 'null' },
@@ -240,7 +240,7 @@ __PACKAGE__->register_method ({
 method => 'GET',
 description => "Print information about local network.",
 parameters => {
-   additionalProperties => 0,
+   additionalProperties => 0,
properties => {},
 },
 returns => { type => 'null' },
@@ -256,7 +256,7 @@ __PACKAGE__->register_method ({
print "local IP address: $ip\n";
 
my $cluster_conf = PVE::Firewall::load_clusterfw_conf();
-   
+
my $localnet = PVE::Firewall::local_network() || '127.0.0.0/8';
print "network auto detect: $localnet\n";
if ($cluster_conf->{aliases}->{local_network}) {
@@ -296,7 +296,7 @@ __PACKAGE__->register_method ({
 method => 'GET',
 description => "Simulate firewall rules. This does not simulate kernel 
'routing' table. Instead, this simply assumes that routing from source zone to 
destination zone is possible.",
 parameters => {
-   additionalProperties => 0,
+   additionalProperties => 0,
properties => {
verbose => {
description => "Verbose output.",
@@ -362,7 +362,7 @@ __PACKAGE__->register_method ({
my ($ruleset, $ipset_ruleset, $rulesetv6, $ebtables_ruleset) = 
PVE::Firewall::compile();
 
PVE::FirewallSimulator::debug();
-   
+
my $host_ip = PVE::Cluster::remote_node_ip($nodename);
 
PVE::FirewallSimulator::reset_trace();
@@ -380,11 +380,11 @@ __PACKAGE__->register_method ({
 
if (!defined($test->{to})) {
$test->{to} = 'host';
-   PVE::FirewallSimulator::add_trace("Set Zone: to => 
'$test->{to}'\n"); 
-   } 
+   PVE::FirewallSimulator::add_trace("Set Zone: to => 
'$test->{to}'\n");
+   }
if (!defined($test->{from})) {
$test->{from} = 'outside',
-   PVE::FirewallSimulator::add_trace("Set Zone: from => 
'$test->{from}'\n"); 
+   PVE::FirewallSimulator::add_trace("Set Zone: from => 
'$test->{from}'\n");
}
 
my $vmdata = PVE::Firewall::read_local_vm_config();
@@ -397,9 +397,9 @@ __PACKAGE__->register_method ({
 
$test->{action} = 'QUERY';
 
-   my $res = 

[pve-devel] [PATCH firewall 1/3] Check if corosync.conf exists before calling parser

2019-07-03 Thread Stefan Reiter
Calling cfs_read_file with no corosync.conf (i.e. on a standalone node)
returns {} instead of undef. The previous patches assume undef for this
scenario. To avoid confusing checks all over the place, simply leave the
config as undef if no file exists.

Signed-off-by: Stefan Reiter 
---
 src/PVE/Firewall.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 16d7301..96c45e9 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -3519,7 +3519,8 @@ sub compile {
$hostfw_conf = load_hostfw_conf($cluster_conf, undef) if !$hostfw_conf;
 
# cfs_update is handled by daemon or API
-	$corosync_conf = PVE::Cluster::cfs_read_file("corosync.conf") if !$corosync_conf;
+   $corosync_conf = PVE::Cluster::cfs_read_file("corosync.conf")
+   if !defined($corosync_conf) && PVE::Corosync::check_conf_exists(1);
 
$vmdata = read_local_vm_config();
$vmfw_configs = read_vm_firewall_configs($cluster_conf, $vmdata, undef);
-- 
2.20.1
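The gotcha described in the commit message is easy to demonstrate in isolation (illustrative sketch; the hash literal merely stands in for what cfs_read_file returns on a standalone node): an empty hash reference is still a true value in Perl, so a plain `if !$corosync_conf` would never detect the missing file:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # stand-in for cfs_read_file("corosync.conf") on a standalone node
    my $corosync_conf = {};

    print "boolean check: ", ($corosync_conf ? "truthy" : "falsy"), "\n";   # truthy
    print "key count:     ", scalar(keys %$corosync_conf), "\n";            # 0

    # hence the patch only calls the parser when the file actually exists,
    # leaving $corosync_conf undef otherwise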




[pve-devel] [PATCH firewall 2/3] Display corosync rule info on localnet call

2019-07-03 Thread Stefan Reiter
If no corosync.conf exists (i.e. a standalone node), the output is left
the same.

Signed-off-by: Stefan Reiter 
---

Is there a project standard regarding list output formatting?
I personally think it looks good and readable, but consistency with other
CLI tools would of course be preferable.


 src/PVE/Service/pve_firewall.pm | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/src/PVE/Service/pve_firewall.pm b/src/PVE/Service/pve_firewall.pm
index d8e42ec..3c1254b 100755
--- a/src/PVE/Service/pve_firewall.pm
+++ b/src/PVE/Service/pve_firewall.pm
@@ -10,6 +10,7 @@ use PVE::Tools qw(dir_glob_foreach file_read_firstline);
 use PVE::ProcFSTools;
 use PVE::INotify;
 use PVE::Cluster qw(cfs_read_file);
+use PVE::Corosync;
 use PVE::RPCEnvironment;
 use PVE::CLIHandler;
 use PVE::Firewall;
@@ -264,6 +265,28 @@ __PACKAGE__->register_method ({
print "using detected local_network: $localnet\n";
}
 
+   if (PVE::Corosync::check_conf_exists(1)) {
+   my $corosync_conf = PVE::Cluster::cfs_read_file("corosync.conf");
+   my $corosync_node_found = 0;
+
+   print "\naccepting corosync traffic from/to:\n";
+
+	    PVE::Corosync::for_all_corosync_addresses($corosync_conf, undef, sub {
+   my ($node_name, $node_ip, $node_ipversion, $key) = @_;
+
+   if (!$corosync_node_found) {
+   $corosync_node_found = 1;
+   }
+
+   $key =~ m/(?:ring|link)(\d+)_addr/;
+   print " - $node_name: $node_ip (link: $1)\n";
+   });
+
+   if (!$corosync_node_found) {
+   print " - no nodes found\n";
+   }
+   }
+
return undef;
 }});
 
-- 
2.20.1
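To give an idea of the result, the output of `pve-firewall localnet` on a clustered node could then look roughly like this (node names and addresses are made up, and the surrounding lines are taken from the existing prints in the diff context):

    local IP address: 192.0.2.11
    network auto detect: 192.0.2.0/24
    using detected local_network: 192.0.2.0/24

    accepting corosync traffic from/to:
     - node2: 192.0.2.12 (link: 0)
     - node3: 192.0.2.13 (link: 0)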




Re: [pve-devel] [PATCH manager 1/4] 5to6: add check for configured Sheepdog storages

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 3:28 PM, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler 
> ---
> this is mainly relevant for stable-5, since master already drops them from 
> the storage.cfg
> 
>  PVE/CLI/pve5to6.pm | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
> index c167ebca..e9373288 100644
> --- a/PVE/CLI/pve5to6.pm
> +++ b/PVE/CLI/pve5to6.pm
> @@ -234,7 +234,9 @@ sub check_storage_health {
>  foreach my $storeid (keys %$info) {
>   my $d = $info->{$storeid};
>   if ($d->{enabled}) {
> - if ($d->{active}) {
> + if ($d->{type} eq 'sheepdog') {
> + log_fail("storage '$storeid' of type 'sheepdog' is enabled - 
> Sheepdog is no longer supported in PVE 6.x!")

it was not really supported with 5 either, so:

"- experimental sheepdog support dropped in PVE 6"?

> + } elsif ($d->{active}) {
>   log_pass("storage '$storeid' enabled and active.");
>   } else {
>   log_warn("storage '$storeid' enabled but not active!");
> 





[pve-devel] [PATCH manager 4/4] ceph: mon create: add known monitor ips to mon_host if it is empty

2019-07-03 Thread Dominik Csapak
this fixes an issue where only one monitor is listed in mon_host and that
one is offline, which prevents a client connection

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph/MON.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
index df73450a..b59d2e59 100644
--- a/PVE/API2/Ceph/MON.pm
+++ b/PVE/API2/Ceph/MON.pm
@@ -265,6 +265,12 @@ __PACKAGE__->register_method ({
 
# update ceph.conf
my $monhost = $cfg->{global}->{mon_host} // "";
+   # add all known monitor ips to mon_host if it does not exist
+   if (!defined($cfg->{global}->{mon_host})) {
+   for my $mon (sort keys %$monhash) {
+   $monhost .= " " . $monhash->{$mon}->{addr};
+   }
+   }
$monhost .= " $ip";
$cfg->{global}->{mon_host} = $monhost;
if (!defined($cfg->{global}->{public_network})) {
-- 
2.20.1
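A small standalone sketch of the value the new code builds when mon_host was previously unset (the monitor hash and addresses are made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $monhash = {
        node1 => { addr => '10.0.0.1' },
        node2 => { addr => '10.0.0.2' },
    };
    my $ip = '10.0.0.3';    # address of the monitor currently being created

    my $monhost = "";
    for my $mon (sort keys %$monhash) {
        $monhost .= " " . $monhash->{$mon}->{addr};
    }
    $monhost .= " $ip";
    print "mon_host =$monhost\n";    # mon_host = 10.0.0.1 10.0.0.2 10.0.0.3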




[pve-devel] [PATCH manager 2/4] ceph: osd: use get-or-create to create a bootstrap-osd key on demand

2019-07-03 Thread Dominik Csapak
if for some reason the cluster does not have this key, generate it

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph/OSD.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 064f6b03..85197107 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -313,7 +313,14 @@ __PACKAGE__->register_method ({
	my $ceph_bootstrap_osd_keyring = PVE::Ceph::Tools::get_config('ceph_bootstrap_osd_keyring');

	if (! -f $ceph_bootstrap_osd_keyring && $ceph_conf->{global}->{auth_client_required} eq 'cephx') {
-	my $bindata = $rados->mon_command({ prefix => 'auth get', entity => 'client.bootstrap-osd', format => 'plain' });
+   my $bindata = $rados->mon_command({
+   prefix => 'auth get-or-create',
+   entity => 'client.bootstrap-osd',
+   caps => [
+   'mon' => 'allow profile bootstrap-osd'
+   ],
+   format => 'plain',
+   });
file_set_contents($ceph_bootstrap_osd_keyring, $bindata);
};
 
-- 
2.20.1




[pve-devel] [PATCH manager 1/4] ceph: osd create: check for auth before getting bootstrap key

2019-07-03 Thread Dominik Csapak
we do not need it if auth is 'none'

Signed-off-by: Dominik Csapak 
---
 PVE/API2/Ceph/OSD.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index 42dee361..064f6b03 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -309,9 +309,10 @@ __PACKAGE__->register_method ({
my $fsid = $monstat->{monmap}->{fsid};
 $fsid = $1 if $fsid =~ m/^([0-9a-f\-]+)$/;
 
+   my $ceph_conf = cfs_read_file('ceph.conf');
	my $ceph_bootstrap_osd_keyring = PVE::Ceph::Tools::get_config('ceph_bootstrap_osd_keyring');

-	if (! -f $ceph_bootstrap_osd_keyring) {
+	if (! -f $ceph_bootstrap_osd_keyring && $ceph_conf->{global}->{auth_client_required} eq 'cephx') {
	my $bindata = $rados->mon_command({ prefix => 'auth get', entity => 'client.bootstrap-osd', format => 'plain' });
file_set_contents($ceph_bootstrap_osd_keyring, $bindata);
};
-- 
2.20.1




[pve-devel] [PATCH manager 3/4] ceph: services: improve addr selection

2019-07-03 Thread Dominik Csapak
we map '$type addr' to '$type_addr' anyway in the ceph.conf parser,
so this is not necessary

also use 'public_addr' if it is set

Signed-off-by: Dominik Csapak 
---
 PVE/Ceph/Services.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Ceph/Services.pm b/PVE/Ceph/Services.pm
index bcdb39ee..45eb6c3f 100644
--- a/PVE/Ceph/Services.pm
+++ b/PVE/Ceph/Services.pm
@@ -98,7 +98,7 @@ sub get_services_info {
if ($section =~ m/^$type\.(\S+)$/) {
my $id = $1;
my $service = $result->{$id};
-   my $addr = $d->{"$type addr"} // $d->{"${type}_addr"} // $d->{host};
+   my $addr = $d->{"${type}_addr"} // $d->{public_addr} // $d->{host};
$service->{name} //= $id;
$service->{addr} //= $addr;
$service->{state} //= 'unknown';
-- 
2.20.1




Re: [pve-devel] [PATCH manager 2/4] 5to6: improve some log messages

2019-07-03 Thread Thomas Lamprecht
On 7/3/19 3:28 PM, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler 
> ---
>  PVE/CLI/pve5to6.pm | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/CLI/pve5to6.pm b/PVE/CLI/pve5to6.pm
> index e9373288..215f7430 100644
> --- a/PVE/CLI/pve5to6.pm
> +++ b/PVE/CLI/pve5to6.pm
> @@ -467,12 +467,13 @@ sub check_misc {
>   log_pass("no running guest detected.")
>  }
>  
> -log_info("Checking if we the local nodes address is resolvable and 
> configured..");
> +log_info("Checking if the local node's hostname is resolvable..");
>  my $host = PVE::INotify::nodename();
>  my $local_ip = eval { PVE::Network::get_ip_from_hostname($host) };
>  if ($@) {
>   log_warn("Failed to resolve hostname '$host' to IP - $@");
>  } else {
> + log_info("Checking if resolved IP is configured on local node..");
>   my $cidr = Net::IP::ip_is_ipv6($local_ip) ? "$local_ip/128" : 
> "$local_ip/32";
>   my $configured_ips = PVE::Network::get_local_ip_from_cidr($cidr);
>   my $ip_count = scalar(@$configured_ips);
> @@ -482,7 +483,7 @@ sub check_misc {
>   } elsif ($ip_count > 1) {
>   log_warn("Resolved node IP '$local_ip' active on multiple 
> ($ip_count) interfaces!");
>   } else {
> - log_pass("Could resolved local nodename '$host' to active IP 
> '$local_ip'");
> + log_pass("Resolved node IP '$local_ip' configured and active on 
> single interface.");

why remove nodename?

>   }
>  }
>  
> 





[pve-devel] [PATCH docs] document #2247: add info about SSH fingerprints on cluster leave

2019-07-03 Thread Stefan Reiter
Signed-off-by: Stefan Reiter 
---
 pvecm.adoc | 5 +
 1 file changed, 5 insertions(+)

diff --git a/pvecm.adoc b/pvecm.adoc
index 7525bb5..d8f2341 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -309,6 +309,11 @@ cluster again, you have to
 
 * then join it, as explained in the previous section.
 
+NOTE: After removal of the node, its SSH fingerprint will still reside in the
+'known_hosts' of the other nodes. If you receive an SSH error after rejoining
+a node with the same IP or hostname, run `pvecm updatecerts` once to update
+the fingerprint on all nodes.
+
 [[pvecm_separate_node_without_reinstall]]
 Separate A Node Without Reinstalling
 
-- 
2.20.1
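For example, after re-joining a node under the same IP or hostname and hitting the SSH error described in the note, running the command it mentions once is enough to refresh the stored fingerprints (illustrative usage):

    pvecm updatecerts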

