Greetings Master Michael,

I have been monitoring, and continue to admire, this body of code.

Visual Bash can be used to generate more functions and I look forward to actually using sections of the Zoom and Zuess tree for many projects - even beyond zLinux.

Congratulations and...

Kindest Regards,

Flint

On Sun, 22 Oct 2017, Michael MacIsaac wrote:

Date: Sun, 22 Oct 2017 08:28:22 -0400
From: Michael MacIsaac <mike99...@gmail.com>
Reply-To: Linux on 390 Port <LINUX-390@VM.MARIST.EDU>
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Announcing zoom and zuess 3.0

---- Oops, sorry, re-posting without the extra newlines ----

Hello linux-390 and IBMVM lists,

zoom (System z object-oriented management) and zuess (System z user-enabled
self-service) are open-source packages that provide a "Private Cloud" on IBM
mainframe hardware, the z/VM hypervisor, and the GNU/Linux operating system
- arguably the most solid and mature virtualization trio on earth. They
provide three interfaces for systems management of Linux on the mainframe:
 1) Command line (zoom)
 2) Web UI       (zuess)
 3) RESTful API  (zuess)

Together, they create a "self-service portal", where end users can build
new Linux systems*, rebuild*, and destroy* them, power them on and off, add
and remove CPUs and memory non-disruptively, and report on these systems.
There is also a complete "cookbook" with a user guide and command
reference. (* some code has to be written by the user)

Also supported are:
 -) An authentication/authorization mechanism for operations on data or
systems
 -) Ability to run Linux or z/VM commands and copy files with
"passwordless" SSH
 -) Inline Web editing of metadata fields "description" and "owner"
 -) Quotas for CPUs and memory used per Linux group
 -) A second-level, arbitrary grouping mechanism below Linux groups
 -) A locking mechanism for systems being operated on
 -) User preferences
 -) z/VM DASD, FCP and OSA device reporting
 -) Monitoring, Software as a Service and live guest relocation are works
in progress
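
The "passwordless" SSH item above presumably relies on standard key-based
authentication. A sketch of the usual steps (the guest name "guest01", the
key type, and the paths are illustrative assumptions, not taken from zoom):

```shell
# Typical key-based SSH setup a zoom server might use; listed here as
# data so the sequence can be shown without a live guest to connect to.
steps=(
  'ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519'  # one-time key pair on the zoom server
  'ssh-copy-id root@guest01'                          # install the public key on each managed guest
  'ssh root@guest01 uname -r'                         # commands now run without a password prompt
)
printf '%s\n' "${steps[@]}"
```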

The main Web page is https://sourceforge.net/projects/system-zoom/
The PDF:
https://sourceforge.net/projects/system-zoom/files/zoom.pdf/download
The download page is https://sourceforge.net/projects/system-zoom/files/

You should see six files available for download:
 README.txt             Basic information - this file
 zoom-3-XX.s390x.rpm    The "back-end" RPM  - CLI to be installed on a
zLinux system
 zuess-3-XX.s390x.rpm   The "front-end" RPM - Web UI to be installed on
same zLinux system
 zoom.pdf               The documentation - if you just want to read about
it
 zoom.tgz               Tar file with zoom code - if you just want to see
the CLI code
 zuess.tgz              Tar file with zuess code - if you just want to see
the UI code

The version was bumped to 3 because of the new "zoom tree" data structure.
To create a zoom cluster of multiple z/VM systems, one zoom server on one
z/VM system calls the "zaddserver" command to join another zoom server
on a different z/VM LPAR. The first server becomes the "primary", and
the second server becomes the "secondary". The primary server can then add
any number of additional servers on different z/VM LPARs which all become
"tertiary". The primary and secondary servers each maintain an identical
copy of the complete zoom tree. Each tertiary server only maintains a tree
with clients on that z/VM LPAR. Think of this as an active-active
configuration. Should the primary server fail, the secondary will be a hot
standby. It should be possible to create a Virtual IP address (VIP) that
would load balance between the two servers.  For data reliability and
operation integrity, when clients (managed guests) are being operated on,
they are locked. So if two administrators try to reboot the same client,
the first one will succeed, and the second one will get a “system is
locked” message. Locks are maintained on the primary server, but if it is
down, they are maintained on the secondary.
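
To make the clustering flow concrete, a dry-run sketch (only the
"zaddserver" command name appears above; the host names, the argument
form, and the stand-in function are my assumptions):

```shell
# Dry-run stand-in for the real zaddserver CLI, so the call sequence can
# be shown without live z/VM LPARs; the real command's syntax may differ.
zaddserver() { echo "would add zoom server: $1"; }

# Run on the primary zoom server:
zaddserver zoomsrv2.example.com   # first server added becomes the "secondary"
zaddserver zoomsrv3.example.com   # later additions become "tertiary" servers
```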

The zuess Web User Interface was designed with the goal of giving users in
an organization a self-service portal to z/VM and zLinux resources. It
requires that a Web server be running on the zoom server (only Apache has
been tested).

With version 3.0, access to the Web UI is split into two categories:
(1) Read-only: the home page and any other page that does not perform
operations live in a directory (usually /srv/www/cgi-bin/) that is open to
all users. No audit trail is needed, so no user/group information is
available.

(2) Read-write: any page that performs any operation or makes any change
to the tree is in a directory that is password protected (usually
/srv/www/cgi-bin/zuess/). Once the user supplies valid credentials, the
user name is maintained along with their primary and secondary groups. An
audit trail is maintained in the zoom log file (usually /var/log/zoom.log).
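
The two-directory split could be expressed in an Apache configuration
along these lines - a sketch only; the directory paths come from the text
above, but the auth directives and htpasswd location are assumptions:

```apache
# Read-only pages: open to everyone, no authentication, no audit trail.
<Directory "/srv/www/cgi-bin">
    Options +ExecCGI
    Require all granted
</Directory>

# Read-write pages: password protected; the authenticated user name
# (REMOTE_USER) can then feed the audit trail in /var/log/zoom.log.
<Directory "/srv/www/cgi-bin/zuess">
    Options +ExecCGI
    AuthType Basic
    AuthName "zuess self-service portal"
    AuthUserFile /etc/apache2/zuess.htpasswd
    Require valid-user
</Directory>
```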

OK, that's probably enough description.  Phil Tully and I presented on this
at the last SHARE and MVMUA.  Both were pretty well received.
Unfortunately, no more presentations are planned at this time.

If anyone does get it set up, please let me know.  What would be great is a
contribution of some Linux/VM code to do build (clone to new VM), rebuild
(clone to existing) and destroy. All of those operations are outlined in
the file /usr/local/src/userexits.stubs. It would have to be copied to
userexits.local. What we do in-house is put a message on the zoom server's
console; IBM OpsMgr traps it and calls the appropriate REXX EXEC. I'd
imagine the sample code could use PROP instead.  A free six-pack of the
beer of your choice to the first person to do that!
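
For anyone tempted by the six-pack, here is a skeleton of what a site's
userexits.local might look like (the function names and console-message
format are pure guesses; only the build/rebuild/destroy operations and the
stubs-to-local copy come from the text above):

```shell
# Hypothetical userexits.local skeleton. Each exit just emits a console
# message that an automation layer (e.g. OpsMgr or PROP) could trap and
# hand to a REXX EXEC; the message format is an invented example.
build_exit()   { echo "ZOOM BUILD $1";   }  # clone to a new virtual machine
rebuild_exit() { echo "ZOOM REBUILD $1"; }  # clone onto an existing machine
destroy_exit() { echo "ZOOM DESTROY $1"; }  # remove the virtual machine

build_exit LINUX01
```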

A great goal would be a series of three one-hour labs where (1)
Linux/zoom/zuess are installed, (2) a zoom tree is created and the Web UI
set up, and (3) all the teams in the class cluster the zoom servers
together.  Wouldn't that be cool to come back from SHARE or VM Workshop and
say "Yeah, I set up Private Cloud on the mainframe in three hours with open
source tools"?  Simultaneously, that exercise would help drive out the bugs
which are certainly there.

C'mon community, stop letting "distributed" beat us at Cloud!

   -Mike MacIsaac


----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Kindest Regards,



☮ Paul Flint
(802) 479-2360 Home
(802) 595-9365 Cell

/************************************
Based upon email reliability concerns,
please send an acknowledgement in response to this note.

Paul Flint
17 Averill Street
Barre, VT
05641
