[pve-devel] ZFS encryption

2018-04-03 Thread Andreas Steinel
Hi everyone,

are you (Proxmox staff) actively testing encrypted ZFS or are you
waiting for the upstream "activation"?

-- 
With kind regards / Mit freundlichen Grüßen

Andreas Steinel
M.Sc. Visual Computing
M.Sc. Informatik
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [RFC kernel 0/2] pve-kernel helper scripts for patch-queue management

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 01:30 PM, Fabian Grünbichler wrote:

this patch series introduces helper scripts for
- importing the exported patchqueue into a patchqueue branch inside the
   submodule
- exporting the (updated) patchqueue from the patchqueue branch inside the
   submodule
- importing a new upstream tag into the submodule, optionally rebasing the
   patchqueue

potential for future extensions:
- cherry-pick upstream commit(s) from linux-stable(-queue) or arbitrary
   trees/repos into patchqueue
- ... ? ;)

applicable to pve-kernel-4.15 and master


After reading through the code without testing: looks mostly OK, only 
small nits.

Will give it another look and test it tomorrow, I'm off for today ;)


sample run on top of current pve-kernel-4.15 for importing Ubuntu-4.15.0-14.15,
including rebasing the queue (and dropping two patches which have been applied
upstream):

--
$ debian/scripts/import-upstream-tag submodules/ubuntu-bionic patches/kernel Ubuntu-4.15.0-14.15 yes
checking for tag 'Ubuntu-4.15.0-14.15'
tag not found, fetching and retrying
 From git://git.proxmox.com/git/mirror_ubuntu-bionic-kernel
  * [new tag]   Ubuntu-4.15.0-14.15 -> Ubuntu-4.15.0-14.15
tag found

automatic patchqueue rebase enabled
previous HEAD: 6dc5db97022239a3ce21df3f6a84dea5cdff1999

creating patchqueue branch 'auto_pq/Ubuntu-4.15.0-14.15'
Switched to a new branch 'auto_pq/Ubuntu-4.15.0-14.15'
importing patches from 'patches/kernel'
Applying: Make mkcompile_h accept an alternate timestamp string
Applying: bridge: keep MAC of first assigned port
Applying: pci: Enable overrides for missing ACS capabilities (4.15)
Applying: kvm: disable default dynamic halt polling growth
Applying: ocfs2: make metadata estimation accurate and clear
Applying: ocfs2: try to reuse extent block in dealloc without meta_alloc
Applying: mm/shmem: do not wait for lock_page() in shmem_unused_huge_shrink()
Applying: mm/thp: Do not wait for lock_page() in deferred_split_scan()

rebasing patchqueue on top of 'Ubuntu-4.15.0-14.15'
First, rewinding head to replay your work on top of it...
Applying: Make mkcompile_h accept an alternate timestamp string
Applying: bridge: keep MAC of first assigned port
Applying: pci: Enable overrides for missing ACS capabilities (4.15)
Applying: kvm: disable default dynamic halt polling growth
Applying: ocfs2: make metadata estimation accurate and clear
Applying: ocfs2: try to reuse extent block in dealloc without meta_alloc

clearing old exported patchqueue
exporting patchqueue using 'git format-patch [...] Ubuntu-4.15.0-14.15..'
cleaning up PQ branch 'auto_pq/Ubuntu-4.15.0-14.15'
HEAD is now at 6dc5db970222 UBUNTU: Ubuntu-4.15.0-13.14
Deleted branch auto_pq/Ubuntu-4.15.0-14.15 (was 9047121a601c).

checking out 'Ubuntu-4.15.0-14.15' in submodule

committing results
[pve-kernel-4.15 22f0ef8] update sources to Ubuntu-4.15.0-14.15
  1 file changed, 1 insertion(+), 1 deletion(-)
[pve-kernel-4.15 9a410ef] rebase patches on top of Ubuntu-4.15.0-14.15
  3 files changed, 3 insertions(+), 152 deletions(-)
  delete mode 100644 patches/kernel/0007-mm-shmem-do-not-wait-for-lock_page-in-shmem_unused_h.patch
  delete mode 100644 patches/kernel/0007-mm-thp-Do-not-wait-for-lock_page-in-deferred_split_s.patch

$ git log --stat --format=medium origin/pve-kernel-4.15..
commit 9a410ef6f15cb089b50212c6d73e0718447735ac
Author: Fabian Grünbichler 
Date:   Tue Apr 3 13:10:59 2018 +0200

 rebase patches on top of Ubuntu-4.15.0-14.15
 
 (generated with debian/scripts/import-upstream-tag)
 
 Signed-off-by: Fabian Grünbichler 


  ...overrides-for-missing-ACS-capabilities-4..patch |   6 +-
  ...-not-wait-for-lock_page-in-shmem_unused_h.patch | 103 -
  ...ot-wait-for-lock_page-in-deferred_split_s.patch |  46 -
  3 files changed, 3 insertions(+), 152 deletions(-)

commit 22f0ef84aa01191fe751ef00a6de1f3eb7ebade6
Author: Fabian Grünbichler 
Date:   Tue Apr 3 13:10:59 2018 +0200

 update sources to Ubuntu-4.15.0-14.15
 
 (generated with debian/scripts/import-upstream-tag)
 
 Signed-off-by: Fabian Grünbichler 


  submodules/ubuntu-bionic | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

commit c7ef647bc07ca719b1f0b2c1a05c43de56cbbebc
Author: Fabian Grünbichler 
Date:   Tue Apr 3 11:16:30 2018 +0200

 debian/scripts: add import-upstream-tag
 
 Signed-off-by: Fabian Grünbichler 


  debian/scripts/import-upstream-tag | 115 +
  1 file changed, 115 insertions(+)

commit 93fff928143b775f2ffa47840cca26e053e5cfed
Author: Fabian Grünbichler 
Date:   Tue Apr 3 11:16:06 2018 +0200

 debian/scripts: add patchqueue scripts
 
 Signed-off-by: Fabian Grünbichler 


  

Re: [pve-devel] [RFC kernel 1/2] debian/scripts: add patchqueue scripts

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 01:30 PM, Fabian Grünbichler wrote:

$ import-patchqueue repo dir [branch]

imports a (previously exported) patchqueue from 'dir' into a new branch
(default 'pq') in 'repo'

$ export-patchqueue repo dir ref

exports a patchqueue from 'ref' to current HEAD of 'repo' into 'dir'

'repo' can be any checked out non-bare repository, including worktrees or 
submodules.

Signed-off-by: Fabian Grünbichler 
---
note: these are separate scripts to help with manual rebasing and/or 
cherry-picking, and are called from import-upstream-tag from the next patch

  debian/scripts/export-patchqueue | 30 ++
  debian/scripts/import-patchqueue | 28 
  2 files changed, 58 insertions(+)
  create mode 100755 debian/scripts/export-patchqueue
  create mode 100755 debian/scripts/import-patchqueue

diff --git a/debian/scripts/export-patchqueue b/debian/scripts/export-patchqueue
new file mode 100755
index 000..976d128
--- /dev/null
+++ b/debian/scripts/export-patchqueue
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+set -e
+
+top=$(pwd)
+
+if [ "$#" -ne 3 ]; then
+echo "three parameters required."

What/which three parameters?
A real "USAGE" output would be much nicer, even if it's quick to read 
the source here.
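Something along these lines would already be enough, just as a rough
sketch (naming up to you):

usage() {
    echo "USAGE: $0 <submodule> <patchdir> <base-ref>" >&2
}

if [ "$#" -ne 3 ]; then
    usage
    exit 1
fi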



+exit 1
+fi
+
+# parameters
+kernel_submodule=$1
+kernel_patchdir=$2
+base_ref=$3
+
+cd "${kernel_submodule}"
+echo "clearing old exported patchqueue"
+rm -f "${top}/${kernel_patchdir}"/*.patch
+echo "exporting patchqueue using 'git format-patch [...] ${base_ref}.."
+git format-patch \
+--quiet \
+--no-numbered \
+--no-cover \
While git can expand this, I'd still use the full option name, for the
sake of clarity and robustness against future git updates.

--no-cover-letter


+--zero-commit \
+--output-dir \
+"${top}/${kernel_patchdir}" \
+"${base_ref}.."
+
+cd "${top}"
diff --git a/debian/scripts/import-patchqueue b/debian/scripts/import-patchqueue
new file mode 100755
index 000..b096fd5
--- /dev/null
+++ b/debian/scripts/import-patchqueue
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+set -e
+
+top=$(pwd)
+
+if [ "$#" -lt 2 ]; then
+echo "at least two parameters required."


Same ;) maybe with an `|| "$#" -gt 3` check for too many parameters, with:
USAGE: $0 <repo> <dir> [<branch>]

for example.
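I.e., roughly (untested):

if [ "$#" -lt 2 ] || [ "$#" -gt 3 ]; then
    echo "USAGE: $0 <repo> <dir> [<branch>]" >&2
    exit 1
fi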

looks good, besides my small nits :)
And those could be easily followed up, if wanted.


+exit 1
+fi
+
+
+# parameters
+kernel_submodule=$1
+kernel_patchdir=$2
+if [[ -z "$3" ]]; then
+pq_branch='pq'
+else
+pq_branch=$3
+fi
+
+cd "${kernel_submodule}"
+echo "creating patchqeueue branch '${pq_branch}'"
+git checkout -b "${pq_branch}"
+echo "importing patches from '${kernel_patchdir}'"
+git am "${top}/${kernel_patchdir}"/*.patch
+
+cd "${top}"





[pve-devel] [RFC manager 0/1] introduce pvesrd, replacing systemd pvesr.{timer, service}

2018-04-03 Thread Thomas Lamprecht
This is a POC of replacing the systemd timer which ran pvesr.service (an
exec of `pvesr run`) once a minute. The timer was not actually used to
trigger pvesr at the correct time, as one might think, but rather as a
daemon replacement: our replication logic handles the "when must which
job run" decision completely itself.

Add a really small daemon, which forks off a process once a minute which
then runs the jobs. The end result should be pretty much identical to the
systemd timer variant, besides reducing the journal spam: we no longer
log 4 lines to the journal each minute, but only when something actually
happens, i.e., a job runs.

There is still potential for improvements, some obvious are:
* produce a pvesrd manpage in the doc-generator, I have a placeholder
  one just for the sake of avoiding build/lintian errors.
* integrate this simple-daemon-with-workers mix into PVE::Daemon, as I
  currently duplicate the logic to create and reap child processes a bit
  for now in the RFC stage
* some possible improvements are described in the commit message of the
  patch itself.
* I once had an issue where pvesrd.service was enabled, but not started,
  when upgrading from an older system to test the transition to a
  pvesr.timer-less system with replication jobs configured. I have not
  really investigated this yet; it should get started by postinst
  though, just FYI.

I did this as a side project, it isn't too important, but feedback would
be really great. Memory usage of this daemon is around 60 MB here
(perl...), and KSM should make the real impact much lower. The timer
approach wasn't smaller either, it just had to be loaded into memory
each minute for a short period, which is not really better, IMO.

cheers,

Thomas Lamprecht (1):
  WIP: replace systemd timer with pvesrd daemon

 PVE/Service/Makefile  |   2 +-
 PVE/Service/pvesrd.pm | 102 ++
 bin/Makefile  |   6 ++-
 bin/init.d/Makefile   |   3 +-
 bin/init.d/pvesr.service  |   7 
 bin/init.d/pvesr.timer|  12 --
 bin/init.d/pvesrd.service |  16 
 bin/pvesrd|  20 +
 debian/postinst   |   3 +-
 9 files changed, 147 insertions(+), 24 deletions(-)
 create mode 100644 PVE/Service/pvesrd.pm
 delete mode 100644 bin/init.d/pvesr.service
 delete mode 100644 bin/init.d/pvesr.timer
 create mode 100644 bin/init.d/pvesrd.service
 create mode 100755 bin/pvesrd

-- 
2.14.2




[pve-devel] [RFC manager 1/1] WIP: replace systemd timer with pvesrd daemon

2018-04-03 Thread Thomas Lamprecht
The whole thing is already prepared for this; the systemd timer was
just a fixed periodic timer with a frequency of one minute. And we
only introduced it because the assumption was made that this approach
would use less memory, AFAIK.

But logging 4+ lines just to note that the timer was started, even if
it does nothing, 24/7, is not too cheap and a bit annoying.

So as a first step, add a simple daemon which forks off a child for
running jobs once a minute.
This could still be made a bit more intelligent, i.e., check if we
have jobs to run before forking, as forking is not the cheapest
syscall. Further, we could adapt the sleep interval to the next time
we actually need to run a job (and send a SIGUSR to the daemon if
a job's interval changes such that the interval gets narrower).

We try to sync running to minute-change boundaries at start; this
emulates the systemd.timer behaviour we had until now. Also, users can
configure jobs with minute precision, so they probably expect that
those also start really close to a minute-change event.
This could be adapted to resync while running, to factor in time drift.
But as long as enough CPU cycles are available we run in correct
monotonic intervals, so this isn't a must, IMO.

Another improvement could be more fine-grained locking, i.e., not on a
per-all-local-job-runs basis, but on a per-job (per-guest?) basis,
which would reduce temporary starvation of small high-frequency jobs
by big, less periodic jobs.
We argued that it's the user's fault if such situations arise, but they
can evolve over time without anyone noticing, especially in more
complex setups.

Signed-off-by: Thomas Lamprecht 
---
 PVE/Service/Makefile  |   2 +-
 PVE/Service/pvesrd.pm | 102 ++
 bin/Makefile  |   6 ++-
 bin/init.d/Makefile   |   3 +-
 bin/init.d/pvesr.service  |   7 
 bin/init.d/pvesr.timer|  12 --
 bin/init.d/pvesrd.service |  16 
 bin/pvesrd|  20 +
 debian/postinst   |   3 +-
 9 files changed, 147 insertions(+), 24 deletions(-)
 create mode 100644 PVE/Service/pvesrd.pm
 delete mode 100644 bin/init.d/pvesr.service
 delete mode 100644 bin/init.d/pvesr.timer
 create mode 100644 bin/init.d/pvesrd.service
 create mode 100755 bin/pvesrd

diff --git a/PVE/Service/Makefile b/PVE/Service/Makefile
index fc1cdb14..64f6be0d 100644
--- a/PVE/Service/Makefile
+++ b/PVE/Service/Makefile
@@ -1,6 +1,6 @@
 include ../../defines.mk
 
-SOURCES=pvestatd.pm pveproxy.pm pvedaemon.pm spiceproxy.pm
+SOURCES=pvestatd.pm pveproxy.pm pvedaemon.pm spiceproxy.pm pvesrd.pm
 
 all:
 
diff --git a/PVE/Service/pvesrd.pm b/PVE/Service/pvesrd.pm
new file mode 100644
index ..47030575
--- /dev/null
+++ b/PVE/Service/pvesrd.pm
@@ -0,0 +1,102 @@
+package PVE::Service::pvesrd;
+
+use strict;
+use warnings;
+
+use POSIX qw(WNOHANG);
+use PVE::SafeSyslog;
+use PVE::API2::Replication;
+
+use PVE::Daemon;
+use base qw(PVE::Daemon);
+
+my $cmdline = [$0, @ARGV];
+my %daemon_options = (stop_wait_time => 180, max_workers => 0);
+my $daemon = __PACKAGE__->new('pvesrd', $cmdline, %daemon_options);
+
+my $finish_jobs = sub {
+my ($self) = @_;
+foreach my $cpid (keys %{$self->{jobs}}) {
+   my $waitpid = waitpid($cpid, WNOHANG);
+   if (defined($waitpid) && ($waitpid == $cpid)) {
+   delete ($self->{jobs}->{$cpid});
+   }
+}
+};
+
+sub run {
+my ($self) = @_;
+
+my $jobs= {};
+$self->{jobs} = $jobs;
+
+my $old_sig_chld = $SIG{CHLD};
+local $SIG{CHLD} = sub {
+   local ($@, $!, $?); # do not overwrite error vars
+   $finish_jobs->($self);
+   $old_sig_chld->(@_) if $old_sig_chld;
+};
+
+my $logfunc = sub { syslog('info', $_[0]) };
+
+my $run_jobs = sub {
+   my $child = fork();
+   if (!defined($child)) {
+   die "fork failed: $!\n";
+   } elsif ($child == 0) {
+   $self->after_fork_cleanup();
+   PVE::API2::Replication::run_jobs(undef, $logfunc, 0, 1);
+   POSIX::_exit(0);
+   }
+
+   $jobs->{$child} = 1;
+};
+
+# try to run near minute boundaries, makes more sense to the user as he
+# configures jobs with minute precision
+my ($current_seconds) = localtime;
+sleep(60 - $current_seconds) if (60 - $current_seconds >= 5);
+
+for (;;) {
+   last if $self->{shutdown_request};
+
+   $run_jobs->();
+
+   my $sleep_time = 60;
+   my $slept = 0; # SIGCHLD interrupts sleep, so we need to keep track
+   while ($slept < $sleep_time) {
+   last if $self->{shutdown_request};
+   $slept += sleep($sleep_time - $slept);
+   }
+}
+
+# jobs have a lock timeout of 60s, wait a bit more for graceful termination
+my $timeout = 0;
+while (keys %$jobs > 0 && $timeout < 75) {
+   kill 'TERM', keys %$jobs;
+   $timeout += sleep(5);
+}
+# ensure the rest gets stopped
+kill 

Re: [pve-devel] [RFC manager 4/5] ui: add cluster join window POC

2018-04-03 Thread Dominik Csapak

On 04/03/2018 02:43 PM, Thomas Lamprecht wrote:

On 04/03/2018 01:52 PM, Dominik Csapak wrote:

On 04/03/2018 10:49 AM, Thomas Lamprecht wrote:

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

even with the autoflush patch, i could not get this to work properly:

* i clicked join
* for some seconds nothing happened
* some things appeared (the other node, etc)
* all api calls returned an error with no further message

am i missing something still?


I'd guess so, it worked quite well here on an almost vanilla setup,
but hey, wouldn't be the first time :)

Do you have a task log on the node, in /var/log/pve/tasks/... ?
Anything in the journal?

You could also send me your test setup's credentials in private and I
could take a look.


i think i have to express myself better:

the cluster join itself worked without problems (i have a task entry,
etc.) but the user experience was weird,
as in: i had no feedback about what was currently happening until i got api
errors and had to guess that i had to reload




When looking at your test system I saw that while you did deploy 
pve-cluster from git,
including all needed commits, you did not do that with 
proxmox-widget-toolkit,

you're missing:
https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commitdiff;h=fde8e8bbb938a62a1b6265e588bec86c5c8036f0 


and also:
https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commitdiff;h=8d8dbfc5b9919e0e3871fd73cf5ea478b50e8628 



cannot work without those... From my cover-letter:

[...]
A widget-toolkit from git, with the taskDone patch included, is needed 
for

the 4/5 to work.


The taskviewer was never displayed as your window.edit version wasn't 
aware of the new showTaskViewer setting.
Also the join task done callback was never called, thus you did not get 
any UX help from my logic :)


oops, my bad :)



Re: [pve-devel] [RFC manager 4/5] ui: add cluster join window POC

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 01:52 PM, Dominik Csapak wrote:

On 04/03/2018 10:49 AM, Thomas Lamprecht wrote:

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

even with the autoflush patch, i could not get this to work properly:

* i clicked join
* for some seconds nothing happened
* some things appeared (the other node, etc)
* all api calls returned an error with no further message

am i missing something still?


I'd guess so, it worked quite well here on an almost vanilla setup,
but hey, wouldn't be the first time :)

Do you have a task log on the node, in /var/log/pve/tasks/... ?
Anything in the journal?

You could also send me your test setup's credentials in private and I
could take a look.


i think i have to express myself better:

the cluster join itself worked without problems (i have a task entry,
etc.) but the user experience was weird,
as in: i had no feedback about what was currently happening until i got api
errors and had to guess that i had to reload




When looking at your test system I saw that while you did deploy 
pve-cluster from git,
including all needed commits, you did not do that with 
proxmox-widget-toolkit,

you're missing:
https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commitdiff;h=fde8e8bbb938a62a1b6265e588bec86c5c8036f0
and also:
https://git.proxmox.com/?p=proxmox-widget-toolkit.git;a=commitdiff;h=8d8dbfc5b9919e0e3871fd73cf5ea478b50e8628

cannot work without those... From my cover-letter:

[...]
A widget-toolkit from git, with the taskDone patch included, is needed for
the 4/5 to work.


The taskviewer was never displayed as your window.edit version wasn't 
aware of the new showTaskViewer setting.
Also the join task done callback was never called, thus you did not get 
any UX help from my logic :)




Re: [pve-devel] [PATCH librados2-perl] Split method pve_rados_connect

2018-04-03 Thread Dietmar Maurer
comments inline

> On March 30, 2018 at 12:25 PM Alwin Antreich  wrote:
> 
> 
> To be able to connect through librados2 without a config file, the
> method pve_rados_connect is split up into pve_rados_connect and
> pve_rados_conf_read_file.
> 
> Signed-off-by: Alwin Antreich 
> ---
>  PVE/RADOS.pm |  9 -
>  RADOS.xs | 26 +-
>  2 files changed, 29 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
> index aa6a102..ad1c2db 100644
> --- a/PVE/RADOS.pm
> +++ b/PVE/RADOS.pm
> @@ -1,6 +1,6 @@
>  package PVE::RADOS;
>  
> -use 5.014002;
> +use 5.014002; # FIXME: update version??

why this FIXME?

>  use strict;
>  use warnings;
>  use Carp;
> @@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
>  require Exporter;
>  
>  my $rados_default_timeout = 5;
> +my $ceph_default_conf = '/etc/ceph/ceph.conf';
>  
>  
>  our @ISA = qw(Exporter);
> @@ -164,6 +165,12 @@ sub new {
>   $conn = pve_rados_create() ||
>   die "unable to create RADOS object\n";
>  
> + my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;
> +
> + if (-e $ceph_conf) {
> + pve_rados_conf_read_file($conn, $ceph_conf);
> + }
> +

What if $params{ceph_conf} is set, but the file does not exist? IMHO this
should raise an error instead of falling back to the default.
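Roughly something like this, reusing the names from your patch (untested
sketch):

	my $ceph_conf = delete $params{ceph_conf};
	if (defined($ceph_conf)) {
	    # an explicitly requested config file must exist
	    die "ceph config '$ceph_conf' does not exist\n" if ! -e $ceph_conf;
	    pve_rados_conf_read_file($conn, $ceph_conf);
	} elsif (-e $ceph_default_conf) {
	    # fall back to the default only if it is actually there
	    pve_rados_conf_read_file($conn, $ceph_default_conf);
	}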

>   pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
>  
>   foreach my $k (keys %params) {
> diff --git a/RADOS.xs b/RADOS.xs
> index a9f6bc3..ad3cf96 100644
> --- a/RADOS.xs
> +++ b/RADOS.xs
> @@ -47,19 +47,35 @@ CODE:
>  }
>  
>  void
> -pve_rados_connect(cluster) 
> +pve_rados_conf_read_file(cluster, path)
>  rados_t cluster
> -PROTOTYPE: $
> +SV *path
> +PROTOTYPE: $$
>  CODE:
>  {
> -DPRINTF("pve_rados_connect\n");
> +char *p = NULL;
>  
> -int res = rados_conf_read_file(cluster, NULL);
> +if (SvOK(path)) {
> + p = SvPV_nolen(path);
> +}
> +
> +DPRINTF("pve_rados_conf_read_file %s\n", p);
> +
> +int res = rados_conf_read_file(cluster, p);


I thought we only want to call this if p != NULL ?

>  if (res < 0) {
>  die("rados_conf_read_file failed - %s\n", strerror(-res));
>  }
> +}
> +
> +void
> +pve_rados_connect(cluster)
> +rados_t cluster
> +PROTOTYPE: $
> +CODE:
> +{
> +DPRINTF("pve_rados_connect\n");
>  
> -res = rados_connect(cluster);
> +int res = rados_connect(cluster);
>  if (res < 0) {
>  die("rados_connect failed - %s\n", strerror(-res));
>  }
> -- 
> 2.11.0
> 
> 


Re: [pve-devel] [RFC manager 4/5] ui: add cluster join window POC

2018-04-03 Thread Dominik Csapak

On 04/03/2018 10:49 AM, Thomas Lamprecht wrote:

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

even with the autoflush patch, i could not get this to work properly:

* i clicked join
* for some seconds nothing happened
* some things appeared (the other node, etc)
* all api calls returned an error with no further message

am i missing something still?


I'd guess so, it worked quite well here on an almost vanilla setup,
but hey, wouldn't be the first time :)

Do you have a task log on the node, in /var/log/pve/tasks/... ?
Anything in the journal?

You could also send me your test setup's credentials in private and I
could take a look.


i think i have to express myself better:

the cluster join itself worked without problems (i have a task entry,
etc.) but the user experience was weird,
as in: i had no feedback about what was currently happening until i got api
errors and had to guess that i had to reload




[pve-devel] [RFC kernel 0/2] pve-kernel helper scripts for patch-queue management

2018-04-03 Thread Fabian Grünbichler
this patch series introduces helper scripts for
- importing the exported patchqueue into a patchqueue branch inside the
  submodule
- exporting the (updated) patchqueue from the patchqueue branch inside the
  submodule
- importing a new upstream tag into the submodule, optionally rebasing the
  patchqueue

potential for future extensions:
- cherry-pick upstream commit(s) from linux-stable(-queue) or arbitrary
  trees/repos into patchqueue
- ... ? ;)

applicable to pve-kernel-4.15 and master

sample run on top of current pve-kernel-4.15 for importing Ubuntu-4.15.0-14.15,
including rebasing the queue (and dropping two patches which have been applied
upstream):

--
$ debian/scripts/import-upstream-tag submodules/ubuntu-bionic patches/kernel Ubuntu-4.15.0-14.15 yes
checking for tag 'Ubuntu-4.15.0-14.15'
tag not found, fetching and retrying
From git://git.proxmox.com/git/mirror_ubuntu-bionic-kernel
 * [new tag]   Ubuntu-4.15.0-14.15 -> Ubuntu-4.15.0-14.15
tag found

automatic patchqueue rebase enabled
previous HEAD: 6dc5db97022239a3ce21df3f6a84dea5cdff1999

creating patchqueue branch 'auto_pq/Ubuntu-4.15.0-14.15'
Switched to a new branch 'auto_pq/Ubuntu-4.15.0-14.15'
importing patches from 'patches/kernel'
Applying: Make mkcompile_h accept an alternate timestamp string
Applying: bridge: keep MAC of first assigned port
Applying: pci: Enable overrides for missing ACS capabilities (4.15)
Applying: kvm: disable default dynamic halt polling growth
Applying: ocfs2: make metadata estimation accurate and clear
Applying: ocfs2: try to reuse extent block in dealloc without meta_alloc
Applying: mm/shmem: do not wait for lock_page() in shmem_unused_huge_shrink()
Applying: mm/thp: Do not wait for lock_page() in deferred_split_scan()

rebasing patchqueue on top of 'Ubuntu-4.15.0-14.15'
First, rewinding head to replay your work on top of it...
Applying: Make mkcompile_h accept an alternate timestamp string
Applying: bridge: keep MAC of first assigned port
Applying: pci: Enable overrides for missing ACS capabilities (4.15)
Applying: kvm: disable default dynamic halt polling growth
Applying: ocfs2: make metadata estimation accurate and clear
Applying: ocfs2: try to reuse extent block in dealloc without meta_alloc

clearing old exported patchqueue
exporting patchqueue using 'git format-patch [...] Ubuntu-4.15.0-14.15..'
cleaning up PQ branch 'auto_pq/Ubuntu-4.15.0-14.15'
HEAD is now at 6dc5db970222 UBUNTU: Ubuntu-4.15.0-13.14
Deleted branch auto_pq/Ubuntu-4.15.0-14.15 (was 9047121a601c).

checking out 'Ubuntu-4.15.0-14.15' in submodule

committing results
[pve-kernel-4.15 22f0ef8] update sources to Ubuntu-4.15.0-14.15
 1 file changed, 1 insertion(+), 1 deletion(-)
[pve-kernel-4.15 9a410ef] rebase patches on top of Ubuntu-4.15.0-14.15
 3 files changed, 3 insertions(+), 152 deletions(-)
 delete mode 100644 patches/kernel/0007-mm-shmem-do-not-wait-for-lock_page-in-shmem_unused_h.patch
 delete mode 100644 patches/kernel/0007-mm-thp-Do-not-wait-for-lock_page-in-deferred_split_s.patch

$ git log --stat --format=medium origin/pve-kernel-4.15..
commit 9a410ef6f15cb089b50212c6d73e0718447735ac
Author: Fabian Grünbichler 
Date:   Tue Apr 3 13:10:59 2018 +0200

rebase patches on top of Ubuntu-4.15.0-14.15

(generated with debian/scripts/import-upstream-tag)

Signed-off-by: Fabian Grünbichler 

 ...overrides-for-missing-ACS-capabilities-4..patch |   6 +-
 ...-not-wait-for-lock_page-in-shmem_unused_h.patch | 103 -
 ...ot-wait-for-lock_page-in-deferred_split_s.patch |  46 -
 3 files changed, 3 insertions(+), 152 deletions(-)

commit 22f0ef84aa01191fe751ef00a6de1f3eb7ebade6
Author: Fabian Grünbichler 
Date:   Tue Apr 3 13:10:59 2018 +0200

update sources to Ubuntu-4.15.0-14.15

(generated with debian/scripts/import-upstream-tag)

Signed-off-by: Fabian Grünbichler 

 submodules/ubuntu-bionic | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

commit c7ef647bc07ca719b1f0b2c1a05c43de56cbbebc
Author: Fabian Grünbichler 
Date:   Tue Apr 3 11:16:30 2018 +0200

debian/scripts: add import-upstream-tag

Signed-off-by: Fabian Grünbichler 

 debian/scripts/import-upstream-tag | 115 +
 1 file changed, 115 insertions(+)

commit 93fff928143b775f2ffa47840cca26e053e5cfed
Author: Fabian Grünbichler 
Date:   Tue Apr 3 11:16:06 2018 +0200

debian/scripts: add patchqueue scripts

Signed-off-by: Fabian Grünbichler 

 debian/scripts/export-patchqueue | 30 ++
 debian/scripts/import-patchqueue | 28 
 2 files changed, 58 insertions(+)
--

Fabian Grünbichler (2):
  debian/scripts: add patchqueue scripts
  

[pve-devel] [RFC kernel 2/2] debian/scripts: add import-upstream-tag

2018-04-03 Thread Fabian Grünbichler
$ import-upstream-tag path/to/kernel/submodule path/to/kernel/patches tag [rebase]

fetches 'tag' from the default remote; optionally imports, rebases, and
exports the patchqueue; then checks out 'tag' and commits the resulting changes.

Signed-off-by: Fabian Grünbichler 
---
 debian/scripts/import-upstream-tag | 115 +
 1 file changed, 115 insertions(+)
 create mode 100755 debian/scripts/import-upstream-tag

diff --git a/debian/scripts/import-upstream-tag b/debian/scripts/import-upstream-tag
new file mode 100755
index 000..59daa5b
--- /dev/null
+++ b/debian/scripts/import-upstream-tag
@@ -0,0 +1,115 @@
+#!/bin/bash
+
+set -e
+
+top=$(pwd)
+
+# parameters
+kernel_submodule=
+kernel_patchdir=
+new_tag=
+rebase=
+
+# generated based on new_tag
+pq_branch=
+# previously checked out in submodule
+old_ref=
+
+function cleanup_pq_branch {
+if [[ -n $pq_branch ]]; then
+   echo "cleaning up PQ branch '$pq_branch'"
+   cd "${top}/${kernel_submodule}"
+   git checkout --quiet $old_ref
+   git reset --hard
+   git branch -D "$pq_branch"
+fi
+}
+
+function error_exit {
+echo "$1"
+set +e
+
+cleanup_pq_branch
+
+cd "${top}"
+
+exit 1
+}
+
+if [ "$#" -lt 3 ]; then
+error_exit "at least three parameters required."
+fi
+
+kernel_submodule=$1
+if [ ! -d "${kernel_submodule}" ]; then
+error_exit "'${kernel_submodule}' must be a directory!"
+fi
+
+kernel_patchdir=$2
+if [ ! -d "${kernel_patchdir}" ]; then
+error_exit "'${kernel_patchdir}' must be a directory!"
+fi
+
+new_tag=$3
+rebase=$4
+
+if [[ -n $(git status --untracked-files=no --porcelain) ]]; then
+error_exit "working directory unclean, aborting"
+fi
+
+
+cd "${kernel_submodule}"
+## check for tag and fetch if needed
+echo "checking for tag '${new_tag}'"
+if [[ -z $(git tag -l "${new_tag}") ]]; then
+echo "tag not found, fetching and retrying"
+git fetch --tags
+fi
+if [[ -z $(git tag -l "${new_tag}") ]]; then
+error_exit "tag not found, aborting"
+fi
+echo "tag found"
+cd "${top}"
+
+if [[ -n "$rebase" ]]; then
+echo ""
+echo "automatic patchqueue rebase enabled"
+cd "${kernel_submodule}"
+## preparing patch queue branch
+old_ref=$(git rev-parse HEAD)
+pq_branch="auto_pq/${new_tag}"
+cd "${top}"
+
+echo "previous HEAD: ${old_ref}"
+
+echo ""
+"${top}/debian/scripts/import-patchqueue" "${kernel_submodule}" 
"${kernel_patchdir}" "${pq_branch}" || error_exit "failed to import patchqueue"
+
+cd "${kernel_submodule}"
+## rebase patches
+echo ""
+echo "rebasing patchqueue on top of '${new_tag}'"
+git rebase "${new_tag}"
+cd "${top}"
+
+## regenerate exported patch queue
+echo ""
+"${top}/debian/scripts/export-patchqueue" "${kernel_submodule}" 
"${kernel_patchdir}" "${new_tag}" || error_exit "failed to export patchqueue"
+
+cleanup_pq_branch
+cd "${top}"
+pq_branch=
+fi
+
+cd "${kernel_submodule}"
+echo ""
+echo "checking out '${new_tag}' in submodule"
+git checkout --quiet "${new_tag}"
+cd "${top}"
+
+echo ""
+echo "committing results"
+git commit --verbose -s -m "update sources to ${new_tag}" -m "(generated with debian/scripts/import-upstream-tag)" "${kernel_submodule}"
+if [[ -n "$rebase" ]]; then
+git commit --verbose -s -m "rebase patches on top of ${new_tag}" -m "(generated with debian/scripts/import-upstream-tag)" "${kernel_patchdir}"
+fi
-- 
2.14.2




[pve-devel] [RFC kernel 1/2] debian/scripts: add patchqueue scripts

2018-04-03 Thread Fabian Grünbichler
$ import-patchqueue repo dir [branch]

imports a (previously exported) patchqueue from 'dir' into a new branch
(default 'pq') in 'repo'

$ export-patchqueue repo dir ref

exports a patchqueue from 'ref' to current HEAD of 'repo' into 'dir'

'repo' can be any checked out non-bare repository, including worktrees or 
submodules.

Signed-off-by: Fabian Grünbichler 
---
note: these are separate scripts to help with manual rebasing and/or 
cherry-picking, and are called from import-upstream-tag from the next patch
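For illustration, with the repository layout from the cover letter's sample
run, the two scripts would be invoked like this (the third argument of
import-patchqueue is the optional branch name):

$ debian/scripts/import-patchqueue submodules/ubuntu-bionic patches/kernel
$ debian/scripts/export-patchqueue submodules/ubuntu-bionic patches/kernel Ubuntu-4.15.0-14.15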

 debian/scripts/export-patchqueue | 30 ++
 debian/scripts/import-patchqueue | 28 
 2 files changed, 58 insertions(+)
 create mode 100755 debian/scripts/export-patchqueue
 create mode 100755 debian/scripts/import-patchqueue

diff --git a/debian/scripts/export-patchqueue b/debian/scripts/export-patchqueue
new file mode 100755
index 000..976d128
--- /dev/null
+++ b/debian/scripts/export-patchqueue
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+set -e
+
+top=$(pwd)
+
+if [ "$#" -ne 3 ]; then
+echo "three parameters required."
+exit 1
+fi
+
+# parameters
+kernel_submodule=$1
+kernel_patchdir=$2
+base_ref=$3
+
+cd "${kernel_submodule}"
+echo "clearing old exported patchqueue"
+rm -f "${top}/${kernel_patchdir}"/*.patch
+echo "exporting patchqueue using 'git format-patch [...] ${base_ref}.."
+git format-patch \
+--quiet \
+--no-numbered \
+--no-cover \
+--zero-commit \
+--output-dir \
+"${top}/${kernel_patchdir}" \
+"${base_ref}.."
+
+cd "${top}"
diff --git a/debian/scripts/import-patchqueue b/debian/scripts/import-patchqueue
new file mode 100755
index 000..b096fd5
--- /dev/null
+++ b/debian/scripts/import-patchqueue
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+set -e
+
+top=$(pwd)
+
+if [ "$#" -lt 2 ]; then
+echo "at least two parameters required."
+exit 1
+fi
+
+
+# parameters
+kernel_submodule=$1
+kernel_patchdir=$2
+if [[ -z "$3" ]]; then
+pq_branch='pq'
+else
+pq_branch=$3
+fi
+
+cd "${kernel_submodule}"
+echo "creating patchqeueue branch '${pq_branch}'"
+git checkout -b "${pq_branch}"
+echo "importing patches from '${kernel_patchdir}'"
+git am "${top}/${kernel_patchdir}"/*.patch
+
+cd "${top}"
-- 
2.14.2




Re: [pve-devel] [RFC manager 4/5] ui: add cluster join window POC

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

even with the autoflush patch, i could not get this to work properly:

* i clicked join
* for some seconds nothing happened
* some things appeared (the other node, etc)
* all api calls returned an error with no further message

am i missing something still?


I'd guess so, it worked quite well here on an almost vanilla setup,
but hey, wouldn't be the first time :)

Do you have a task log on the node, in /var/log/pve/tasks/... ?
Anything in the journal?

You could also send me your test setup's credentials in private and I
could take a look.




Re: [pve-devel] [PATCH manager 3/5] dc/Cluster: allow to get join information

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

i guess it would be good to make this window non-resizable
because the fields do not really resize well



OK.
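Probably just a matter of setting the respective window config on
PVE.ClusterInfoWindow, e.g. (untested):

    resizable: false,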


otherwise looks good

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/dc/Cluster.js | 20 +++
  www/manager6/dc/ClusterEdit.js | 76 ++
  2 files changed, 96 insertions(+)

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index 97f7496d..ca43c8f9 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -101,6 +101,18 @@ Ext.define('PVE.ClusterAdministration', {
  }
  }
  });
+    },
+
+    onClusterInfo: function() {
+    var vm = this.getViewModel();
+    var win = Ext.create('PVE.ClusterInfoWindow', {
+    joinInfo: {
+    ipAddress: vm.get('preferred_node.addr'),
+    fingerprint: vm.get('preferred_node.fp'),
+    totem: vm.get('totem')
+    }
+    });
+    win.show();
  }
  },
  tbar: [
@@ -111,6 +123,14 @@ Ext.define('PVE.ClusterAdministration', {
  bind: {
  disabled: '{isInCluster}'
  }
+    },
+    {
+    text: gettext('Join Information'),
+    reference: 'addButton',
+    handler: 'onClusterInfo',
+    bind: {
+    disabled: '{!isInCluster}'
+    }
  }
  ],
  layout: 'hbox',
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index 0c44ec44..249801c3 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -29,3 +29,79 @@ Ext.define('PVE.ClusterCreateWindow', {
  // TODO: for advanced options: ring1_addr
  ]
  });
+
+Ext.define('PVE.ClusterInfoWindow', {
+    extend: 'Ext.window.Window',
+    xtype: 'pveClusterInfoWindow',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    width: 800,
+    modal: true,
+    title: gettext('Cluster Join Information'),
+
+    joinInfo: {
+    ipAddress: undefined,
+    fingerprint: undefined,
+    totem: {}
+    },
+
+    items: [
+    {
+    xtype: 'component',
+    border: false,
+    padding: '10 10 10 10',
+    html: gettext("Copy the Join Information here and use it on 
the node you want to add.")

+    },
+    {
+    xtype: 'container',
+    layout: 'form',
+    border: false,
+    padding: '0 10 10 10',
+    items: [
+    {
+    xtype: 'textfield',
+    fieldLabel: gettext('IP Address'),
+    cbind: { value: '{joinInfo.ipAddress}' },
+    editable: false
+    },
+    {
+    xtype: 'textfield',
+    fieldLabel: gettext('Fingerprint'),
+    cbind: { value: '{joinInfo.fingerprint}' },
+    editable: false
+    },
+    {
+    xtype: 'textarea',
+    inputId: 'pveSerializedClusterInfo',
+    fieldLabel: gettext('Join Information'),
+    grow: true,
+    cbind: { joinInfo: '{joinInfo}' },
+    editable: false,
+    listeners: {
+    afterrender: function(field) {
+    if (!field.joinInfo) {
+    return;
+    }
+    var jsons = Ext.JSON.encode(field.joinInfo);
+    var base64s = Ext.util.Base64.encode(jsons);
+    field.setValue(base64s);
+    }
+    }
+    }
+    ]
+    }
+    ],
+    dockedItems: [{
+    dock: 'bottom',
+    xtype: 'toolbar',
+    items: [{
+    xtype: 'button',
+    handler: function(b) {
+    var el = document.getElementById('pveSerializedClusterInfo');
+    el.select();
+    document.execCommand("copy");
+    },
+    text: gettext('Copy Information')
+    }]
+    }]
+});




Re: [pve-devel] [PATCH manager 2/5] dc/Cluster: allow cluster create over WebUI

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

The window always allows me to click on create;
it would be better to only allow this if all necessary fields are filled.
otherwise: great :)


Ah yes, I just forgot to add the respective config to the cluster name
component. The API still catches it, but yes, it makes total sense to catch
it already in the UI and provide the user better UX. :)
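I.e., something like this on the cluster name field (untested sketch;
allowBlank is the stock ExtJS validation config):

    {
	xtype: 'textfield',
	fieldLabel: gettext('Cluster Name'),
	name: 'clustername',
	allowBlank: false
    },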



On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/Makefile  |  1 +
  www/manager6/dc/Cluster.js | 23 +++
  www/manager6/dc/ClusterEdit.js | 31 +++
  3 files changed, 55 insertions(+)
  create mode 100644 www/manager6/dc/ClusterEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 1f143061..d71803ee 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -193,6 +193,7 @@ JSSRC= \
  dc/Config.js    \
  dc/NodeView.js    \
  dc/Cluster.js    \
+    dc/ClusterEdit.js    \
  Workspace.js
    lint: ${JSSRC}
diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index e0c5663a..97f7496d 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -89,7 +89,30 @@ Ext.define('PVE.ClusterAdministration', {
  fp: nodeinfo.pve_fp
  });
  },
+
+    onCreate: function() {
+    var view = this.getView();
+    view.store.stopUpdate();
+    var win = Ext.create('PVE.ClusterCreateWindow', {
+    autoShow: true,
+    listeners: {
+    destroy: function() {
+    view.store.startUpdate();
+    }
+    }
+    });
+    }
  },
+    tbar: [
+    {
+    text: gettext('Create Cluster'),
+    reference: 'createButton',
+    handler: 'onCreate',
+    bind: {
+    disabled: '{isInCluster}'
+    }
+    }
+    ],
  layout: 'hbox',
  bodyPadding: 5,
  items: [
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
new file mode 100644
index ..0c44ec44
--- /dev/null
+++ b/www/manager6/dc/ClusterEdit.js
@@ -0,0 +1,31 @@
+/*jslint confusion: true*/
+Ext.define('PVE.ClusterCreateWindow', {
+    extend: 'Proxmox.window.Edit',
+    xtype: 'pveClusterCreateWindow',
+
+    title: gettext('Create Cluster'),
+    width: 600,
+
+    method: 'POST',
+    url: '/cluster/config',
+
+    isCreate: true,
+    subject: gettext('Cluster'),
+    showTaskViewer: true,
+
+    items: [
+    {
+    xtype: 'textfield',
+    fieldLabel: gettext('Cluster Name'),
+    name: 'clustername'
+    },
+    {
+    xtype: 'proxmoxtextfield',
+    fieldLabel: gettext('Ring 0 Address'),
+    emptyText: gettext("Optional, defaults to IP resolved by 
node's hostname"),

+    name: 'ring0_addr',
+    skipEmptyText: true
+    }
+    // TODO: for advanced options: ring1_addr
+    ]
+});





Re: [pve-devel] [PATCH manager 1/5] dc: add simple cluster panel

2018-04-03 Thread Thomas Lamprecht

On 04/03/2018 10:16 AM, Dominik Csapak wrote:

comments inline

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Show configured cluster nodes with their addresses, votes, IDs.
Also show cluster name, config_version, and node count.

Prepares for creating and joining a cluster over the WebUI.

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/Makefile  |   1 +
  www/manager6/dc/Cluster.js | 202 +
  www/manager6/dc/Config.js  |  13 ++-
  3 files changed, 212 insertions(+), 4 deletions(-)
  create mode 100644 www/manager6/dc/Cluster.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index ac9f6481..1f143061 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -192,6 +192,7 @@ JSSRC= \
  dc/SecurityGroups.js    \
  dc/Config.js    \
  dc/NodeView.js    \
+    dc/Cluster.js    \
  Workspace.js
    lint: ${JSSRC}
diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
new file mode 100644
index ..e0c5663a
--- /dev/null
+++ b/www/manager6/dc/Cluster.js
@@ -0,0 +1,202 @@
+/*jslint confusion: true*/
+Ext.define('pve-cluster-nodes', {
+    extend: 'Ext.data.Model',
+    fields: [
+    'node', { type: 'integer', name: 'nodeid' }, 'ring0_addr', 'ring1_addr',
+    { type: 'integer', name: 'quorum_votes' }
+    ],
+    proxy: {
+    type: 'proxmox',
+    url: "/api2/json/cluster/config/nodes"
+    },
+    idProperty: 'nodeid'
+});
+
+Ext.define('pve-cluster-info', {
+    extend: 'Ext.data.Model',
+    proxy: {
+    type: 'proxmox',
+    url: "/api2/json/cluster/config/join"
+    }
+});
+
+Ext.define('PVE.ClusterAdministration', {
+    extend: 'Ext.panel.Panel',
+    xtype: 'pveClusterAdministration',
+
+    title: gettext('Cluster Administration'),
+
+    border: false,
+    defaults: { border: false },
+
+    viewModel: {
+    parent: null,
+    data: {
+    totem: {},
+    nodelist: [],
+    preferred_node: {
+    name: '',
+    fp: '',
+    addr: ''
+    },
+    isInCluster: false,
+    nodecount: 0
+    }
+    },
+
+    items: [
+    {
+    xtype: 'panel',
+    title: gettext('Cluster Information'),
+    controller: {
+    xclass: 'Ext.app.ViewController',
+
+    init: function(view) {
+    view.store = Ext.create('Proxmox.data.UpdateStore', {
+    autoStart: true,
+    interval: 15 * 1000,
+    storeid: 'pve-cluster-info',
+    model: 'pve-cluster-info'
+    });
+    view.store.on('load', this.onLoad, this);
+    },
+
+    onLoad: function(store, records, success) {
+    var vm = this.getViewModel();
+    if (!success || !records || !records[0].data) {
+    vm.set('totem', {});
+    vm.set('isInCluster', false);
+    vm.set('nodelist', []);
+    vm.set('preferred_node', {
+    name: '',
+    addr: '',
+    fp: ''
+    });
+    return;
+    }
+    var data = records[0].data;
+    vm.set('totem', data.totem);
+    vm.set('isInCluster', !!data.totem.cluster_name);
+    vm.set('nodelist', data.nodelist);
+
+    var nodeinfo = Ext.Array.findBy(data.nodelist, function (el) {
+    return el.name === data.preferred_node;
+    });
+
+    vm.set('preferred_node', {
+    name: data.preferred_node,
+    addr: nodeinfo.pve_addr,
+    fp: nodeinfo.pve_fp
+    });
+    },
+    },
+    layout: 'hbox',
+    bodyPadding: 5,
+    items: [
+    {
+    xtype: 'displayfield',
+    fieldLabel: gettext('Cluster Name'),
+    bind: {
+    value: '{totem.cluster_name}',
+    hidden: '{!isInCluster}'
+    },
+    flex: 1
+    },
+    {
+    xtype: 'displayfield',
+    fieldLabel: gettext('Config Version'),
+    bind: {
+    value: '{totem.config_version}',
+    hidden: '{!isInCluster}'
+    },
+    flex: 1
+    },
+    {
+    xtype: 'displayfield',
+    fieldLabel: gettext('Number of Nodes'),
+    labelWidth: 120,
+    bind: {
+    value: '{nodecount}',
+    hidden: '{!isInCluster}'
+    },
+    flex: 1
+    },
+    {
+    xtype: 'displayfield',
+    value: gettext('No cluster configured'),


i would prefer to reuse the gettext from the dashboard
(AFAIR it is  'Standalone node - no cluster defined')
just for consistency



OK.


+    bind: {
+    hidden: '{isInCluster}'
+    },
+    flex: 1
+    }
+    ]
+    },
+    {
+    xtype: 'grid',
+    title: gettext('Cluster Nodes'),
+    controller: {
+    xclass: 

Re: [pve-devel] [PATCH librados2-perl] Split method pve_rados_connect

2018-04-03 Thread Thomas Lamprecht


On 03/30/2018 12:25 PM, Alwin Antreich wrote:

To be able to connect through librados2 without a config file, the
method pve_rados_connect is split up into pve_rados_connect and
pve_rados_conf_read_file.

Signed-off-by: Alwin Antreich 
---
  PVE/RADOS.pm |  9 -
  RADOS.xs | 26 +-
  2 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/PVE/RADOS.pm b/PVE/RADOS.pm
index aa6a102..ad1c2db 100644
--- a/PVE/RADOS.pm
+++ b/PVE/RADOS.pm
@@ -1,6 +1,6 @@
  package PVE::RADOS;
  
-use 5.014002;

+use 5.014002; # FIXME: update version??
  use strict;
  use warnings;
  use Carp;
@@ -13,6 +13,7 @@ use PVE::RPCEnvironment;
  require Exporter;
  
  my $rados_default_timeout = 5;

+my $ceph_default_conf = '/etc/ceph/ceph.conf';
  
  
  our @ISA = qw(Exporter);

@@ -164,6 +165,12 @@ sub new {
$conn = pve_rados_create() ||
die "unable to create RADOS object\n";
  
+	my $ceph_conf = delete $params{ceph_conf} || $ceph_default_conf;

+
+   if (-e $ceph_conf) {
+   pve_rados_conf_read_file($conn, $ceph_conf);
+   }
+
pve_rados_conf_set($conn, 'client_mount_timeout', $timeout);
  
  	foreach my $k (keys %params) {

diff --git a/RADOS.xs b/RADOS.xs
index a9f6bc3..ad3cf96 100644
--- a/RADOS.xs
+++ b/RADOS.xs
@@ -47,19 +47,35 @@ CODE:


This whole hunk does not apply here...
A quick look gave me one whitespace problem (see below), but that alone 
did not fix it for me...

Are you sure you sent all commits between this and origin/master?

git log origin/master..



  }
  
  void

-pve_rados_connect(cluster)
+pve_rados_conf_read_file(cluster, path)
  rados_t cluster
-PROTOTYPE: $
+SV *path
+PROTOTYPE: $$
  CODE:
  {
-DPRINTF("pve_rados_connect\n");
+char *p = NULL;
  
-int res = rados_conf_read_file(cluster, NULL);

+if (SvOK(path)) {
+   p = SvPV_nolen(path);
+}
+
+DPRINTF("pve_rados_conf_read_file %s\n", p);
+
+int res = rados_conf_read_file(cluster, p);
  if (res < 0) {
  die("rados_conf_read_file failed - %s\n", strerror(-res));
  }
+}
+
+void
+pve_rados_connect(cluster)
+rados_t cluster
+PROTOTYPE: $
+CODE:
+{
+DPRINTF("pve_rados_connect\n");
  


The empty line above contains a trailing whitespace in origin/master, 
which your patch does not contain.



-res = rados_connect(cluster);
+int res = rados_connect(cluster);
  if (res < 0) {
  die("rados_connect failed - %s\n", strerror(-res));
  }





Re: [pve-devel] [PATCH manager 3/5] dc/Cluster: allow to get join information

2018-04-03 Thread Dominik Csapak

i guess it would be good to make this window non-resizable
because the fields do not really resize well

otherwise looks good

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/dc/Cluster.js | 20 +++
  www/manager6/dc/ClusterEdit.js | 76 ++
  2 files changed, 96 insertions(+)

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index 97f7496d..ca43c8f9 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -101,6 +101,18 @@ Ext.define('PVE.ClusterAdministration', {
}
}
});
+   },
+
+   onClusterInfo: function() {
+   var vm = this.getViewModel();
+   var win = Ext.create('PVE.ClusterInfoWindow', {
+   joinInfo: {
+   ipAddress: vm.get('preferred_node.addr'),
+   fingerprint: vm.get('preferred_node.fp'),
+   totem: vm.get('totem')
+   }
+   });
+   win.show();
}
},
tbar: [
@@ -111,6 +123,14 @@ Ext.define('PVE.ClusterAdministration', {
bind: {
disabled: '{isInCluster}'
}
+   },
+   {
+   text: gettext('Join Information'),
+   reference: 'addButton',
+   handler: 'onClusterInfo',
+   bind: {
+   disabled: '{!isInCluster}'
+   }
}
],
layout: 'hbox',
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index 0c44ec44..249801c3 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -29,3 +29,79 @@ Ext.define('PVE.ClusterCreateWindow', {
// TODO: for advanced options: ring1_addr
  ]
  });
+
+Ext.define('PVE.ClusterInfoWindow', {
+extend: 'Ext.window.Window',
+xtype: 'pveClusterInfoWindow',
+mixins: ['Proxmox.Mixin.CBind'],
+
+width: 800,
+modal: true,
+title: gettext('Cluster Join Information'),
+
+joinInfo: {
+   ipAddress: undefined,
+   fingerprint: undefined,
+   totem: {}
+},
+
+items: [
+   {
+   xtype: 'component',
+   border: false,
+   padding: '10 10 10 10',
+   html: gettext("Copy the Join Information here and use it on the node you 
want to add.")
+   },
+   {
+   xtype: 'container',
+   layout: 'form',
+   border: false,
+   padding: '0 10 10 10',
+   items: [
+   {
+   xtype: 'textfield',
+   fieldLabel: gettext('IP Address'),
+   cbind: { value: '{joinInfo.ipAddress}' },
+   editable: false
+   },
+   {
+   xtype: 'textfield',
+   fieldLabel: gettext('Fingerprint'),
+   cbind: { value: '{joinInfo.fingerprint}' },
+   editable: false
+   },
+   {
+   xtype: 'textarea',
+   inputId: 'pveSerializedClusterInfo',
+   fieldLabel: gettext('Join Information'),
+   grow: true,
+   cbind: { joinInfo: '{joinInfo}' },
+   editable: false,
+   listeners: {
+   afterrender: function(field) {
+   if (!field.joinInfo) {
+   return;
+   }
+   var jsons = Ext.JSON.encode(field.joinInfo);
+   var base64s = Ext.util.Base64.encode(jsons);
+   field.setValue(base64s);
+   }
+   }
+   }
+   ]
+   }
+],
+dockedItems: [{
+   dock: 'bottom',
+   xtype: 'toolbar',
+   items: [{
+   xtype: 'button',
+   handler: function(b) {
+   var el = document.getElementById('pveSerializedClusterInfo');
+   el.select();
+   document.execCommand("copy");
+   },
+   text: gettext('Copy Information')
+   }]
+}]
+});






Re: [pve-devel] [PATCH manager 2/5] dc/Cluster: allow cluster create over WebUI

2018-04-03 Thread Dominik Csapak

The window always allows me to click on create;
it would be better to only allow this if all necessary fields are filled.
otherwise: great :)

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/Makefile  |  1 +
  www/manager6/dc/Cluster.js | 23 +++
  www/manager6/dc/ClusterEdit.js | 31 +++
  3 files changed, 55 insertions(+)
  create mode 100644 www/manager6/dc/ClusterEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 1f143061..d71803ee 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -193,6 +193,7 @@ JSSRC=  
\
dc/Config.js\
dc/NodeView.js  \
dc/Cluster.js   \
+   dc/ClusterEdit.js   \
Workspace.js
  
  lint: ${JSSRC}

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index e0c5663a..97f7496d 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -89,7 +89,30 @@ Ext.define('PVE.ClusterAdministration', {
fp: nodeinfo.pve_fp
});
},
+
+   onCreate: function() {
+   var view = this.getView();
+   view.store.stopUpdate();
+   var win = Ext.create('PVE.ClusterCreateWindow', {
+   autoShow: true,
+   listeners: {
+   destroy: function() {
+   view.store.startUpdate();
+   }
+   }
+   });
+   }
},
+   tbar: [
+   {
+   text: gettext('Create Cluster'),
+   reference: 'createButton',
+   handler: 'onCreate',
+   bind: {
+   disabled: '{isInCluster}'
+   }
+   }
+   ],
layout: 'hbox',
bodyPadding: 5,
items: [
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
new file mode 100644
index ..0c44ec44
--- /dev/null
+++ b/www/manager6/dc/ClusterEdit.js
@@ -0,0 +1,31 @@
+/*jslint confusion: true*/
+Ext.define('PVE.ClusterCreateWindow', {
+extend: 'Proxmox.window.Edit',
+xtype: 'pveClusterCreateWindow',
+
+title: gettext('Create Cluster'),
+width: 600,
+
+method: 'POST',
+url: '/cluster/config',
+
+isCreate: true,
+subject: gettext('Cluster'),
+showTaskViewer: true,
+
+items: [
+   {
+   xtype: 'textfield',
+   fieldLabel: gettext('Cluster Name'),
+   name: 'clustername'
+   },
+   {
+   xtype: 'proxmoxtextfield',
+   fieldLabel: gettext('Ring 0 Address'),
+   emptyText: gettext("Optional, defaults to IP resolved by node's 
hostname"),
+   name: 'ring0_addr',
+   skipEmptyText: true
+   }
+   // TODO: for advanced options: ring1_addr
+]
+});






Re: [pve-devel] [RFC manager 4/5] ui: add cluster join window POC

2018-04-03 Thread Dominik Csapak

even with the autoflush patch, i could not get this to work properly:

* i clicked join
* for some seconds nothing happened
* some things appeared (the other node, etc)
* all api calls returned an error with no further message

am i missing something still?

the rest is really good (the copy/paste feature is great :) )

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/dc/Cluster.js |  21 +
  www/manager6/dc/ClusterEdit.js | 190 +
  2 files changed, 211 insertions(+)

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
index ca43c8f9..11e24c66 100644
--- a/www/manager6/dc/Cluster.js
+++ b/www/manager6/dc/Cluster.js
@@ -113,6 +113,19 @@ Ext.define('PVE.ClusterAdministration', {
}
});
win.show();
+   },
+
+   onJoin: function() {
+   var view = this.getView();
+   view.store.stopUpdate();
+   var win = Ext.create('PVE.ClusterJoinNodeWindow', {
+   autoShow: true,
+   listeners: {
+   destroy: function() {
+   view.store.startUpdate();
+   }
+   }
+   });
}
},
tbar: [
@@ -131,6 +144,14 @@ Ext.define('PVE.ClusterAdministration', {
bind: {
disabled: '{!isInCluster}'
}
+   },
+   {
+   text: gettext('Join Cluster'),
+   reference: 'joinButton',
+   handler: 'onJoin',
+   bind: {
+   disabled: '{isInCluster}'
+   }
}
],
layout: 'hbox',
diff --git a/www/manager6/dc/ClusterEdit.js b/www/manager6/dc/ClusterEdit.js
index 249801c3..25ca6607 100644
--- a/www/manager6/dc/ClusterEdit.js
+++ b/www/manager6/dc/ClusterEdit.js
@@ -105,3 +105,193 @@ Ext.define('PVE.ClusterInfoWindow', {
}]
  }]
  });
+
+Ext.define('PVE.ClusterJoinNodeWindow', {
+extend: 'Proxmox.window.Edit',
+xtype: 'pveClusterJoinNodeWindow',
+
+title: gettext('Cluster Join'),
+width: 800,
+
+method: 'POST',
+url: '/cluster/config/join',
+
+defaultFocus: 'textarea[name=serializedinfo]',
+isCreate: true,
+submitText: gettext('Join'),
+showTaskViewer: true,
+
+onlineHelp: 'chapter_pvecm',
+
+viewModel: {
+   parent: null,
+   data: {
+   info: {
+   fp: '',
+   ip: '',
+   ring1Possible: false,
+   ring1Needed: false
+   }
+   }
+},
+
+controller: {
+   xclass: 'Ext.app.ViewController',
+   control: {
+   'proxmoxcheckbox[name=assistedInput]': {
+   change: 'onInputTypeChange'
+   },
+   'textarea[name=serializedinfo]': {
+   change: 'recomputeSerializedInfo',
+   enable: 'resetField'
+   },
+   'proxmoxtextfield[name=ring1_addr]': {
+   enable: 'ring1Needed'
+   },
+   'textfield': {
+   disable: 'resetField'
+   }
+   },
+   resetField: function(field) {
+   field.reset();
+   },
+   ring1Needed: function(f) {
+   var vm = this.getViewModel();
+   f.allowBlank = !vm.get('info.ring1Needed');
+   },
+   onInputTypeChange: function(field, assistedInput) {
+   var vm = this.getViewModel();
+   if (!assistedInput) {
+   vm.set('info.ring1Possible', true);
+   }
+   },
+   recomputeSerializedInfo: function(field, value) {
+   var vm = this.getViewModel();
+   var jsons = Ext.util.Base64.decode(value);
+   var joinInfo = Ext.JSON.decode(jsons, true);
+
+   var info = {
+   fp: '',
+   ring1Needed: false,
+   ring1Possible: false,
+   ip: ''
+   };
+
+   var totem = {};
+   if (!(joinInfo && joinInfo.totem)) {
+   field.valid = false;
+   } else {
+   info = {
+   ip: joinInfo.ipAddress,
+   fp: joinInfo.fingerprint,
+   ring1Possible: !!joinInfo.totem['interface']['1'],
+   ring1Needed: !!joinInfo.totem['interface']['1']
+   };
+   totem = joinInfo.totem;
+   field.valid = true;
+   }
+
+   vm.set('info', info);
+   }
+},
+
+taskDone: function(success) {
+   if (success) {
+   var txt = gettext('Cluster join task finished, node certificate may have changed, reload GUI!');
+   // ensure user cannot do harm
+   
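For context on what recomputeSerializedInfo above parses: the textarea takes a
Base64-encoded JSON blob with ipAddress, fingerprint and totem keys. The
following is only an illustrative Perl sketch of such a blob; the real
generation happens server-side in the /cluster/config/join handler, and the
exact field set there may differ. All concrete values below are invented
placeholders.

#!/usr/bin/perl
# Illustrative only: build a join-info blob shaped like the one the
# ClusterJoinNodeWindow textarea expects. Field names are taken from
# the JS handler above; all values are placeholders.
use strict;
use warnings;
use JSON;          # encode_json
use MIME::Base64;  # encode_base64

my $join_info = {
    ipAddress   => '192.0.2.10',        # placeholder node address
    fingerprint => 'AA:BB:CC:DD:...',   # placeholder cert fingerprint
    totem       => {
        cluster_name => 'demo',
        'interface'  => { '0' => {} },  # no '1' key -> ring1Possible stays false
    },
};

# single-line Base64 ('' suppresses line breaks), textarea-friendly
print encode_base64(encode_json($join_info), ''), "\n";

On the JS side this round-trips through Ext.util.Base64.decode and
Ext.JSON.decode exactly as in the patch.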

Re: [pve-devel] [PATCH manager 1/5] dc: add simple cluster panel

2018-04-03 Thread Dominik Csapak

comments inline

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

Show configured cluster nodes with their addresses, votes, IDs.
Also show cluster name, config_version, and node count.

Prepares for creating and joining a cluster over the WebUI.

Signed-off-by: Thomas Lamprecht 
---
  www/manager6/Makefile  |   1 +
  www/manager6/dc/Cluster.js | 202 +
  www/manager6/dc/Config.js  |  13 ++-
  3 files changed, 212 insertions(+), 4 deletions(-)
  create mode 100644 www/manager6/dc/Cluster.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index ac9f6481..1f143061 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -192,6 +192,7 @@ JSSRC=  
\
dc/SecurityGroups.js\
dc/Config.js\
dc/NodeView.js  \
+   dc/Cluster.js   \
Workspace.js
  
  lint: ${JSSRC}

diff --git a/www/manager6/dc/Cluster.js b/www/manager6/dc/Cluster.js
new file mode 100644
index ..e0c5663a
--- /dev/null
+++ b/www/manager6/dc/Cluster.js
@@ -0,0 +1,202 @@
+/*jslint confusion: true*/
+Ext.define('pve-cluster-nodes', {
+extend: 'Ext.data.Model',
+fields: [
+   'node', { type: 'integer', name: 'nodeid' }, 'ring0_addr', 'ring1_addr',
+   { type: 'integer', name: 'quorum_votes' }
+],
+proxy: {
+type: 'proxmox',
+   url: "/api2/json/cluster/config/nodes"
+},
+idProperty: 'nodeid'
+});
+
+Ext.define('pve-cluster-info', {
+extend: 'Ext.data.Model',
+proxy: {
+type: 'proxmox',
+   url: "/api2/json/cluster/config/join"
+}
+});
+
+Ext.define('PVE.ClusterAdministration', {
+extend: 'Ext.panel.Panel',
+xtype: 'pveClusterAdministration',
+
+title: gettext('Cluster Administration'),
+
+border: false,
+defaults: { border: false },
+
+viewModel: {
+   parent: null,
+   data: {
+   totem: {},
+   nodelist: [],
+   preferred_node: {
+   name: '',
+   fp: '',
+   addr: ''
+   },
+   isInCluster: false,
+   nodecount: 0
+   }
+},
+
+items: [
+   {
+   xtype: 'panel',
+   title: gettext('Cluster Information'),
+   controller: {
+   xclass: 'Ext.app.ViewController',
+
+   init: function(view) {
+   view.store = Ext.create('Proxmox.data.UpdateStore', {
+   autoStart: true,
+   interval: 15 * 1000,
+   storeid: 'pve-cluster-info',
+   model: 'pve-cluster-info'
+   });
+   view.store.on('load', this.onLoad, this);
+   },
+
+   onLoad: function(store, records, success) {
+   var vm = this.getViewModel();
+   if (!success || !records || !records[0].data) {
+   vm.set('totem', {});
+   vm.set('isInCluster', false);
+   vm.set('nodelist', []);
+   vm.set('preferred_node', {
+   name: '',
+   addr: '',
+   fp: ''
+   });
+   return;
+   }
+   var data = records[0].data;
+   vm.set('totem', data.totem);
+   vm.set('isInCluster', !!data.totem.cluster_name);
+   vm.set('nodelist', data.nodelist);
+
+   var nodeinfo = Ext.Array.findBy(data.nodelist, function(el) {
+   return el.name === data.preferred_node;
+   });
+
+   vm.set('preferred_node', {
+   name: data.preferred_node,
+   addr: nodeinfo.pve_addr,
+   fp: nodeinfo.pve_fp
+   });
+   },
+   },
+   layout: 'hbox',
+   bodyPadding: 5,
+   items: [
+   {
+   xtype: 'displayfield',
+   fieldLabel: gettext('Cluster Name'),
+   bind: {
+   value: '{totem.cluster_name}',
+   hidden: '{!isInCluster}'
+   },
+   flex: 1
+   },
+   {
+   xtype: 'displayfield',
+   fieldLabel: gettext('Config Version'),
+   bind: {
+   value: '{totem.config_version}',
+   hidden: '{!isInCluster}'
+   },
+   flex: 1
+   },
+   {
+   xtype: 'displayfield',
+   fieldLabel: gettext('Number of Nodes'),
+   labelWidth: 
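For reference, a sketch of the response shape the onLoad handler above
assumes; the keys (totem, nodelist, preferred_node, pve_addr, pve_fp) are
inferred from the JS, while all concrete values are invented examples:

# Assumed /cluster/config/join response shape, keys inferred from the
# onLoad handler above; values are invented examples.
my $join_response = {
    totem => {
        cluster_name   => 'demo',
        config_version => 3,
    },
    preferred_node => 'node1',
    nodelist       => [
        {
            name => 'node1', nodeid => 1, quorum_votes => 1,
            ring0_addr => '192.0.2.11',
            pve_addr   => '192.0.2.11', pve_fp => 'AA:BB:...',
        },
        {
            name => 'node2', nodeid => 2, quorum_votes => 1,
            ring0_addr => '192.0.2.12',
            pve_addr   => '192.0.2.12', pve_fp => 'CC:DD:...',
        },
    ],
};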

Re: [pve-devel] [PATCH manager 0/5] cluster create/join UI

2018-04-03 Thread Dominik Csapak

Looks great, a few things are still rough,
please refer to my mails to the relevant patches

On 03/27/2018 03:45 PM, Thomas Lamprecht wrote:

First 3 patches (1/5, 2/5, 3/5) should be quite polished and could be, if
deemed OK, already applied.
The last two, 4/5 and 5/5, are more a WIP/POC but they should work also
quite well now.
I'm not to happy with the UI of 4/5, but my wizard-based one isn't working
yet, and some feed back on this would be appreciated now.
A widget-toolkit from git, with the taskDone patch included, is needed for
the 4/5 to work.

cheers,
Thomas

Thomas Lamprecht (5):
   dc: add simple cluster panel
   dc/Cluster: allow cluster create over WebUI
   dc/Cluster: allow to get join information
   ui: add cluster join window POC
   ui: silence auth failures during cluster join

  www/manager6/Makefile  |   2 +
  www/manager6/Workspace.js  |   2 +-
  www/manager6/dc/Cluster.js | 266 +++
  www/manager6/dc/ClusterEdit.js | 309 +
  www/manager6/dc/Config.js  |  13 +-
  5 files changed, 587 insertions(+), 5 deletions(-)
  create mode 100644 www/manager6/dc/Cluster.js
  create mode 100644 www/manager6/dc/ClusterEdit.js




___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH v2] add APT hook to prevent proxmox-ve removal

2018-04-03 Thread Thomas Lamprecht

applied, thanks!

Am 04/03/2018 um 09:18 AM schrieb Fabian Grünbichler:

since this happens quite regularly when users accidentally install
conflicting packages.

sample output:
$ apt remove pve-qemu-kvm
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
   libxml-libxml-perl proxmox-widget-toolkit pve-edk2-firmware pve-i18n pve-xtermjs
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
   proxmox-ve pve-container pve-ha-manager pve-ha-manager-dbgsym pve-manager pve-qemu-kvm qemu-server spiceterm
0 upgraded, 0 newly installed, 8 to remove and 0 not upgraded.
After this operation, 37.6 MB disk space will be freed.
Do you want to continue? [Y/n] y
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook)   touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) and repeat your apt-get/apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook)   - your APT repository settings
W: (pve-apt-hook)   - that you are using 'apt-get dist-upgrade' or 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook

Signed-off-by: Fabian Grünbichler 
---
Changes since v1:
- remove $check_file if it exists, to restore default behaviour if proxmox-ve
   ever gets re-installed (thanks Thomas for the suggestion!)

  debian/apthook/10pveapthook |  4 +++
  debian/apthook/pve-apt-hook | 73 +
  debian/proxmox-ve.install   |  2 ++
  3 files changed, 79 insertions(+)
  create mode 100644 debian/apthook/10pveapthook
  create mode 100755 debian/apthook/pve-apt-hook

diff --git a/debian/apthook/10pveapthook b/debian/apthook/10pveapthook
new file mode 100644
index 000..b7ae649
--- /dev/null
+++ b/debian/apthook/10pveapthook
@@ -0,0 +1,4 @@
+DPkg::Pre-Install-Pkgs { "/usr/share/proxmox-ve/pve-apt-hook"; };
+DPkg::Tools::Options::/usr/share/proxmox-ve/pve-apt-hook "";
+DPkg::Tools::Options::/usr/share/proxmox-ve/pve-apt-hook::Version "2";
+DPkg::Tools::Options::/usr/share/proxmox-ve/pve-apt-hook::InfoFD "20";
diff --git a/debian/apthook/pve-apt-hook b/debian/apthook/pve-apt-hook
new file mode 100755
index 000..f925090
--- /dev/null
+++ b/debian/apthook/pve-apt-hook
@@ -0,0 +1,73 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use File::Basename;
+
+my $fd = $ENV{APT_HOOK_INFO_FD};
+my $check_file = '/please-remove-proxmox-ve';
+my $check_package = 'proxmox-ve';
+my $hook_name = basename($0);
+
+my $log = sub {
+  my ($line) = @_;
+  print "W: ($hook_name) $line";
+};
+
+if (!defined $fd || $fd == 0) {
+  $log->("APT_HOOK_INFO_FD not correctly defined, skipping apt-pve-hook 
checks\n");
+  exit 0;
+}
+
+open(my $fh, "<&=${fd}") or die "E: could not open APT_HOOK_INFO_FD (${fd}) - $!\n";
+
+my $cleanup = sub {
+  my ($rc) = @_;
+
+  close($fh);
+  exit $rc;
+};
+
+chomp (my $ver = <$fh>);
+if ($ver ne "VERSION 2") {
+  $log->("apt-pve-hook misconfigured, expecting hook protocol version 2\n");
+  $cleanup->(0);
+}
+
+my $blank;
+while (my $line = <$fh>) {
+  chomp $line;
+
+  if (!defined($blank)) {
+    $blank = 1 if !$line;
+    next;
+  }
+
+  my ($pkg, $old, $dir, $new, $action) = (split / /, $line, 5);
+  if (!defined($action)) {
+    $log->("apt-pve-hook encountered unexpected line: $line\n");
+    next;
+  }
+
+  if ($pkg eq 'proxmox-ve' && $action eq '**REMOVE**') {
+    if (-e $check_file) {
+      $log->("'$check_file' exists, proceeding with removal of package '${check_package}'\n");
+      unlink $check_file;
+    } else {
+      $log->("!! WARNING !!\n");
+      $log->("You are attempting to remove the meta-package '${check_package}'!\n");
+      $log->("\n");
+      $log->("If you really want to permanently remove '${check_package}' from your system, run the following command\n");
+      $log->("\ttouch '${check_file}'\n");
+      $log->("and repeat your apt-get/apt invocation.\n");
+      $log->("\n");
+      $log->("If you are unsure why '$check_package' would be removed, please verify\n");
+      $log->("\t- your APT repository settings\n");
+      $log->("\t- that you are using 'apt-get dist-upgrade' or 'apt full-upgrade' to upgrade your system\n");
+      $cleanup->(1);
+    }
+  }
+}
+
+$cleanup->(0);
diff --git a/debian/proxmox-ve.install b/debian/proxmox-ve.install
index 6ac09f5..13d16c4 100644
--- a/debian/proxmox-ve.install
+++ b/debian/proxmox-ve.install
@@ -1 +1,3 @@
  debian/proxmox-ve-release-5.x.gpg etc/apt/trusted.gpg.d/
+debian/apthook/10pveapthook etc/apt/apt.conf.d/
+debian/apthook/pve-apt-hook usr/share/proxmox-ve/

[pve-devel] applied: [PATCH cluster] pvecm join: also default to resolved IP with use_ssh param

2018-04-03 Thread Fabian Grünbichler

___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH v2] add APT hook to prevent proxmox-ve removal

2018-04-03 Thread Fabian Grünbichler
since this happens quite regularly when users accidentally install
conflicting packages.

sample output:
$ apt remove pve-qemu-kvm
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libxml-libxml-perl proxmox-widget-toolkit pve-edk2-firmware pve-i18n pve-xtermjs
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  proxmox-ve pve-container pve-ha-manager pve-ha-manager-dbgsym pve-manager pve-qemu-kvm qemu-server spiceterm
0 upgraded, 0 newly installed, 8 to remove and 0 not upgraded.
After this operation, 37.6 MB disk space will be freed.
Do you want to continue? [Y/n] y
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook)   touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) and repeat your apt-get/apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook)   - your APT repository settings
W: (pve-apt-hook)   - that you are using 'apt-get dist-upgrade' or 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook

Signed-off-by: Fabian Grünbichler 
---
Changes since v1:
- remove $check_file if it exists, to restore default behaviour if proxmox-ve
  ever gets re-installed (thanks Thomas for the suggestion!)

 debian/apthook/10pveapthook |  4 +++
 debian/apthook/pve-apt-hook | 73 +
 debian/proxmox-ve.install   |  2 ++
 3 files changed, 79 insertions(+)
 create mode 100644 debian/apthook/10pveapthook
 create mode 100755 debian/apthook/pve-apt-hook

diff --git a/debian/apthook/10pveapthook b/debian/apthook/10pveapthook
new file mode 100644
index 000..b7ae649
--- /dev/null
+++ b/debian/apthook/10pveapthook
@@ -0,0 +1,4 @@
+DPkg::Pre-Install-Pkgs { "/usr/share/proxmox-ve/pve-apt-hook"; };
+DPkg::Tools::Options::/usr/share/proxmox-ve/pve-apt-hook "";
+DPkg::Tools::Options::/usr/share/proxmox-ve/pve-apt-hook::Version "2";
+DPkg::Tools::Options::/usr/share/proxmox-ve/pve-apt-hook::InfoFD "20";
diff --git a/debian/apthook/pve-apt-hook b/debian/apthook/pve-apt-hook
new file mode 100755
index 000..f925090
--- /dev/null
+++ b/debian/apthook/pve-apt-hook
@@ -0,0 +1,73 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use File::Basename;
+
+my $fd = $ENV{APT_HOOK_INFO_FD};
+my $check_file = '/please-remove-proxmox-ve';
+my $check_package = 'proxmox-ve';
+my $hook_name = basename($0);
+
+my $log = sub {
+  my ($line) = @_;
+  print "W: ($hook_name) $line";
+};
+
+if (!defined $fd || $fd == 0) {
+  $log->("APT_HOOK_INFO_FD not correctly defined, skipping apt-pve-hook checks\n");
+  exit 0;
+}
+
+open(my $fh, "<&=${fd}") or die "E: could not open APT_HOOK_INFO_FD (${fd}) - $!\n";
+
+my $cleanup = sub {
+  my ($rc) = @_;
+
+  close($fh);
+  exit $rc;
+};
+
+chomp (my $ver = <$fh>);
+if ($ver ne "VERSION 2") {
+  $log->("apt-pve-hook misconfigured, expecting hook protocol version 2\n");
+  $cleanup->(0);
+}
+
+my $blank;
+while (my $line = <$fh>) {
+  chomp $line;
+
+  if (!defined($blank)) {
+    $blank = 1 if !$line;
+    next;
+  }
+
+  my ($pkg, $old, $dir, $new, $action) = (split / /, $line, 5);
+  if (!defined($action)) {
+    $log->("apt-pve-hook encountered unexpected line: $line\n");
+    next;
+  }
+
+  if ($pkg eq 'proxmox-ve' && $action eq '**REMOVE**') {
+    if (-e $check_file) {
+      $log->("'$check_file' exists, proceeding with removal of package '${check_package}'\n");
+      unlink $check_file;
+    } else {
+      $log->("!! WARNING !!\n");
+      $log->("You are attempting to remove the meta-package '${check_package}'!\n");
+      $log->("\n");
+      $log->("If you really want to permanently remove '${check_package}' from your system, run the following command\n");
+      $log->("\ttouch '${check_file}'\n");
+      $log->("and repeat your apt-get/apt invocation.\n");
+      $log->("\n");
+      $log->("If you are unsure why '$check_package' would be removed, please verify\n");
+      $log->("\t- your APT repository settings\n");
+      $log->("\t- that you are using 'apt-get dist-upgrade' or 'apt full-upgrade' to upgrade your system\n");
+      $cleanup->(1);
+    }
+  }
+}
+
+$cleanup->(0);
diff --git a/debian/proxmox-ve.install b/debian/proxmox-ve.install
index 6ac09f5..13d16c4 100644
--- a/debian/proxmox-ve.install
+++ b/debian/proxmox-ve.install
@@ -1 +1,3 @@
 debian/proxmox-ve-release-5.x.gpg etc/apt/trusted.gpg.d/
+debian/apthook/10pveapthook etc/apt/apt.conf.d/
+debian/apthook/pve-apt-hook usr/share/proxmox-ve/
-- 
2.14.2
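To exercise the hook outside of APT, one can feed it the protocol input by
hand. The sketch below is a hypothetical test harness: the "VERSION 2"
header, the blank line terminating the configuration section, and the
5-field package line are inferred from the parser above, while the concrete
field values ("5.1-41 < -") are invented and may not match real APT output.

#!/usr/bin/perl
# Hypothetical harness: feed pve-apt-hook synthetic protocol input on
# an inherited file descriptor, roughly the way APT would. Input
# framing is inferred from the parser above; field values are made up.
use strict;
use warnings;
use POSIX qw(dup2);

pipe(my $reader, my $writer) or die "pipe failed - $!\n";

my $pid = fork() // die "fork failed - $!\n";
if ($pid == 0) {
    close($writer);
    # the 10pveapthook snippet above configures InfoFD 20
    dup2(fileno($reader), 20) or die "dup2 failed - $!\n";
    $ENV{APT_HOOK_INFO_FD} = 20;
    exec('/usr/share/proxmox-ve/pve-apt-hook') or die "exec failed - $!\n";
}

close($reader);
print {$writer} "VERSION 2\n";   # protocol header the hook checks
print {$writer} "\n";            # blank line ends the config section
print {$writer} "proxmox-ve 5.1-41 < - **REMOVE**\n";  # synthetic package line
close($writer);

waitpid($pid, 0);
printf "hook exited with status %d\n", $? >> 8;   # expect 1 without the check file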



Re: [pve-devel] [PATCH cluster] pvecm join: also default to resolved IP with use_ssh param

2018-04-03 Thread Thomas Lamprecht

Any objections? Else I'd apply this later today myself...
It syncs the behaviour from SSH join with API join and create,
so it isn't new untested code.

Am 03/29/2018 um 11:06 AM schrieb Thomas Lamprecht:

We already switched to this behaviour in pvecm create and pvecm join
(with API) but did not change it for the case when a user requested
to use the old method to join with --use_ssh.

Signed-off-by: Thomas Lamprecht 
---
  data/PVE/CLI/pvecm.pm | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/data/PVE/CLI/pvecm.pm b/data/PVE/CLI/pvecm.pm
index 7e16586..23a15a9 100755
--- a/data/PVE/CLI/pvecm.pm
+++ b/data/PVE/CLI/pvecm.pm
@@ -107,6 +107,7 @@ __PACKAGE__->register_method ({
my $nodename = PVE::INotify::nodename();
  
  	my $host = $param->{hostname};

+   my $local_ip_address = remote_node_ip($nodename);
  
  	PVE::Cluster::assert_joinable($param->{ring0_addr}, $param->{ring1_addr}, $param->{force});
  
@@ -150,7 +151,7 @@ __PACKAGE__->register_method ({
  
  	push @$cmd, '--nodeid', $param->{nodeid} if $param->{nodeid};

push @$cmd, '--votes', $param->{votes} if defined($param->{votes});
-	push @$cmd, '--ring0_addr', $param->{ring0_addr} if defined($param->{ring0_addr});
+	push @$cmd, '--ring0_addr', $param->{ring0_addr} // $local_ip_address;
 	push @$cmd, '--ring1_addr', $param->{ring1_addr} if defined($param->{ring1_addr});
  
  	if (system (@$cmd) != 0) {
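The gist of the change, as a standalone sketch (command name and values are
illustrative, not the real pvecm internals):

# Illustrative only: the defined-or default replacing the conditional
# push, so --ring0_addr is always passed, as API join/create already do.
my $local_ip_address = '192.0.2.11';   # stand-in for remote_node_ip($nodename)
my $param = {};                        # user supplied no ring0_addr

my @cmd = ('some-join-command');       # placeholder command name

# old behaviour: option silently omitted when not given
# push @cmd, '--ring0_addr', $param->{ring0_addr} if defined($param->{ring0_addr});

# new behaviour: fall back to the resolved local IP
push @cmd, '--ring0_addr', $param->{ring0_addr} // $local_ip_address;

print join(' ', @cmd), "\n";   # some-join-command --ring0_addr 192.0.2.11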



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH cluster] API/Cluster: autoflush STDOUT for join and create

2018-04-03 Thread Thomas Lamprecht

Any comments?

Am 03/27/2018 um 08:08 AM schrieb Thomas Lamprecht:

We're in a forked worker here, so STDOUT isn't connected to a
(pseudo)TTY directly, and Perl flushes only when its internal buffer
is full.

Ensure each line gets flushed out to the API client in use to give
immediate feedback about the operation.

For example, without this our WebUI's Task Viewer won't show anything
for quite a bit of time; you may even get logged out before the flush
from the Perl side happens, which is simply bad UX.

Signed-off-by: Thomas Lamprecht 
---
  data/PVE/API2/ClusterConfig.pm | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/data/PVE/API2/ClusterConfig.pm b/data/PVE/API2/ClusterConfig.pm
index ea253b5..ad7e8c6 100644
--- a/data/PVE/API2/ClusterConfig.pm
+++ b/data/PVE/API2/ClusterConfig.pm
@@ -121,6 +121,7 @@ __PACKAGE__->register_method ({
my $authuser = $rpcenv->get_user();
  
  	my $code = sub {

+   STDOUT->autoflush();
PVE::Cluster::setup_sshd_config(1);
PVE::Cluster::setup_rootsshconfig();
PVE::Cluster::setup_ssh_keys();
@@ -512,6 +513,7 @@ __PACKAGE__->register_method ({
my $authuser = $rpcenv->get_user();
  
  	my $worker = sub {

+   STDOUT->autoflush();
+	PVE::Tools::lock_file($local_cluster_lock, 10, \&PVE::Cluster::join, $param);
die $@ if $@;
};



___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel