Hello community,

Here is the log from the commit of package sesdev for openSUSE:Leap:15.2,
checked in at 2020-05-28 20:10:59.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2/sesdev (Old)
 and      /work/SRC/openSUSE:Leap:15.2/.sesdev.new.3606 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "sesdev"

Thu May 28 20:10:59 2020 rev:7 rq:809546 version:1.3.0+1590413709.g4ad4e03

Changes:
--------
--- /work/SRC/openSUSE:Leap:15.2/sesdev/sesdev.changes  2020-05-11 08:38:55.982970528 +0200
+++ /work/SRC/openSUSE:Leap:15.2/.sesdev.new.3606/sesdev.changes        2020-05-28 20:11:02.439156963 +0200
@@ -1,0 +2,23 @@
+Mon May 25 13:35:16 UTC 2020 - Nathan Cutler <[email protected]>
+
+- Update to 1.3.0+1590413709.g4ad4e03:
+  + upstream 1.3.0 release (2020-05-25)
+    * octopus/ses7: added "--stop-before-ceph-orch-apply" function (PR #301)
+    * Implement RGW deployment in octopus, ses7 (PR #314)
+    * ceph_salt_deployment: do not force user to change dashboard pw (PR #315)
+    * makecheck: possibly prophylactically downgrade libudev1 (PR #317)
+    * contrib/standalone.sh: --no-stop-on-failure option (PR #318)
+    * ceph_salt_deployment: make use of 'cephadm' role (PR #319)
+    * octopus/ses7: removed "--deploy-mons", "--deploy-mgrs", "--deploy-osds",
+      "--deploy-mdss" (replaced by "--stop-before-ceph-orch-apply") (PR #301)
+    * seslib: drop Containers module from SES7 deployment (PR #303)
+    * provision.sh: remove curl RPM from the environment (PR #311)
+    * Fixed "sesdev create caasp4" default deployment by disabling 
multi-master (PR #302)
+    * ceph_salt_deployment: do not deploy MDS if no mds roles present (PR #313)
+    * caasp: do not install salt (PR #320)
+    * supportconfig: handle both scc and nts tarball prefixes (PR #323)
+    * seslib: convert certain public methods into private (PR #309)
+    * caasp4: rename "storage" role to "nfs" and drop it from default 4-node
+      deployment (PR #310)
+
+-------------------------------------------------------------------

Old:
----
  sesdev-1.2.0+1588616857.gaa3df4c.tar.gz

New:
----
  sesdev-1.3.0+1590413709.g4ad4e03.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ sesdev.spec ++++++
--- /var/tmp/diff_new_pack.yrZcu9/_old  2020-05-28 20:11:02.967158535 +0200
+++ /var/tmp/diff_new_pack.yrZcu9/_new  2020-05-28 20:11:02.971158546 +0200
@@ -16,7 +16,7 @@
 #
 
 Name:           sesdev
-Version:        1.2.0+1588616857.gaa3df4c
+Version:        1.3.0+1590413709.g4ad4e03
 Release:        1%{?dist}
 Summary:        CLI tool to deploy and manage SES clusters
 License:        MIT

++++++ sesdev-1.2.0+1588616857.gaa3df4c.tar.gz -> sesdev-1.3.0+1590413709.g4ad4e03.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/CHANGELOG.md new/sesdev-1.3.0+1590413709.g4ad4e03/CHANGELOG.md
--- old/sesdev-1.2.0+1588616857.gaa3df4c/CHANGELOG.md   2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/CHANGELOG.md   2020-05-25 15:35:09.522914969 +0200
@@ -3,10 +3,39 @@
 All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
-and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+and this project aspires to adhere to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 ## [Unreleased]
 
+## [1.3.0] - 2020-05-25
+
+### Added
+- octopus/ses7: added "--stop-before-ceph-orch-apply" function (PR #301)
+- Implement RGW deployment in octopus, ses7 (PR #314)
+- ceph_salt_deployment: do not force user to change dashboard pw (PR #315)
+- makecheck: possibly prophylactically downgrade libudev1 (PR #317)
+- contrib/standalone.sh: --no-stop-on-failure option (PR #318)
+- ceph_salt_deployment: make use of 'cephadm' role (PR #319)
+
+### Removed
+- octopus/ses7: removed "--deploy-mons", "--deploy-mgrs", "--deploy-osds",
+  "--deploy-mdss" (replaced by "--stop-before-ceph-orch-apply") (PR #301)
+- seslib: drop Containers module from SES7 deployment (PR #303)
+- provision.sh: remove curl RPM from the environment (PR #311)
+
+### Fixed
+- Fixed "sesdev create caasp4" default deployment by disabling multi-master
+  (PR #302)
+- ceph_salt_deployment: do not deploy MDS if no mds roles present (PR #313)
+- caasp: do not install salt (PR #320)
+- supportconfig: handle both scc and nts tarball prefixes (PR #323)
+- qa: work around cephadm MGR co-location issue (PR #324)
+
+### Changed
+- seslib: convert certain public methods into private (PR #309)
+- caasp4: rename "storage" role to "nfs" and drop it from default 4-node
+  deployment (PR #310)
+
 ## [1.2.0] - 2020-05-04
 
 ### Added
@@ -355,7 +384,8 @@
 - Minimal README with a few usage instructions.
 - The CHANGELOG file.
 
-[unreleased]: https://github.com/SUSE/sesdev/compare/v1.2.0...HEAD
+[unreleased]: https://github.com/SUSE/sesdev/compare/v1.3.0...HEAD
+[1.3.0]: https://github.com/SUSE/sesdev/releases/tag/v1.3.0
 [1.2.0]: https://github.com/SUSE/sesdev/releases/tag/v1.2.0
 [1.1.12]: https://github.com/SUSE/sesdev/releases/tag/v1.1.12
 [1.1.11]: https://github.com/SUSE/sesdev/releases/tag/v1.1.11
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/README.md new/sesdev-1.3.0+1590413709.g4ad4e03/README.md
--- old/sesdev-1.2.0+1588616857.gaa3df4c/README.md      2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/README.md      2020-05-25 15:35:09.522914969 +0200
@@ -329,24 +329,34 @@
 * `admin` - signifying that the node should get ceph.conf and keyring [1]
 * `bootstrap` - The node where `cephadm bootstrap` will be run
 * `client` - Various Ceph client utilities
-* `nfs` - NFS (Ganesha) gateway
-* `grafana` - Grafana metrics visualization (requires Prometheus) [2]
+* `nfs` - NFS (Ganesha) gateway [2]
+* `grafana` - Grafana metrics visualization (requires Prometheus) [3]
 * `igw` - iSCSI target gateway
 * `mds` - CephFS MDS
 * `mgr` - Ceph Manager instance
 * `mon` - Ceph Monitor instance
-* `prometheus` - Prometheus monitoring [2]
+* `prometheus` - Prometheus monitoring [3]
 * `rgw` - Ceph Object Gateway
-* `storage` - OSD storage daemon
+* `storage` - OSD storage daemon [4]
 * `suma` - SUSE Manager (octopus only)
 
-[1] CAVEAT: sesdev applies the "admin" role to all nodes, regardless of whether
+[1] CAVEAT: sesdev applies the `admin` role to all nodes, regardless of whether
 or not the user specified it explicitly on the command line or in `config.yaml`.
 
-[2] CAVEAT: Do not specify "prometheus"/"grafana" roles for ses5 deployments.
+[2] The `nfs` role may also be used when deploying a CaaSP cluster. In that
+case we get a node acting as an NFS server as well as a pod running in the k8s
+cluster and acting as an NFS client, providing a persistent store for other
+(containerized) applications.
+
+[3] CAVEAT: Do not specify `prometheus`/`grafana` roles for ses5 deployments.
 The DeepSea version shipped with SES5 always deploys Prometheus and Grafana
-instances on the master node, but does not recognize "prometheus"/"grafana"
-roles in policy.cfg.
+instances on the master node, but does not recognize `prometheus`/`grafana`
+roles in `policy.cfg`.
+
+[4] Please note that we do not need the `storage` role when we plan to deploy
+Rook/Ceph over CaaSP. By default, Rook creates OSD pods which take over any
+spare block devices in worker nodes, i.e., all block devices but the first
+(OS disk) of any given worker.
 
 The following example will generate a cluster with four nodes: the master (Salt
 Master) node that is also running a MON daemon; a storage (OSD) node that
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/contrib/nukenetz.sh new/sesdev-1.3.0+1590413709.g4ad4e03/contrib/nukenetz.sh
--- old/sesdev-1.2.0+1588616857.gaa3df4c/contrib/nukenetz.sh    2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/contrib/nukenetz.sh    2020-05-25 15:35:09.522914969 +0200
@@ -12,7 +12,11 @@
     INTERACTIVE=""
 fi
 
-NETZ="$(sudo virsh net-list | egrep -v '\-|Persistent|^$' | cut -d' ' -f2)"
+NETZ="$(sudo virsh net-list | egrep -v '\-\-\-|Persistent|vagrant-libvirt|^$' | cut -d' ' -f2)"
+if [ -z "$NETZ" ] ; then
+    echo "No netz to nuke"
+    exit 0
+fi
 YES="non_empty_value"
 if [ "$INTERACTIVE" ] ; then
     echo "Will nuke the following virtual networks:"
@@ -22,8 +26,8 @@
     echo -en "Are you sure? (y/N) "
     read YES
     ynlc="${YES,,}"
-    ynlcfc="${yrlc:0:1}"
-    if [ -z "$YES" ] || [ "$yrlcfc" = "n" ] ; then
+    ynlcfc="${ynlc:0:1}"
+    if [ -z "$YES" ] || [ "$ynlcfc" = "n" ] ; then
         YES=""
     else
         YES="non_empty_value"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/contrib/standalone.sh new/sesdev-1.3.0+1590413709.g4ad4e03/contrib/standalone.sh
--- old/sesdev-1.2.0+1588616857.gaa3df4c/contrib/standalone.sh  2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/contrib/standalone.sh  2020-05-25 15:35:09.522914969 +0200
@@ -54,6 +54,8 @@
     echo "    --makecheck              Run makecheck (install-deps.sh, 
actually)"
     echo "                             tests"
     echo "    --nautilus               Run nautilus deployment tests"
+    echo "    --no-stop-on-failure     Continue script execution if there is a"
+    echo "                             failure (default: stop immediately)."
     echo "    --octopus                Run octopus deployment tests"
     echo "    --pacific                Run pacific deployment tests"
     echo "    --ses5                   Run ses5 deployment tests"
@@ -78,7 +80,7 @@
         echo -en "\nExit status: 0 (PASS)\n" >> "$FINAL_REPORT"
     else
         echo -en "\nExit status: $exit_status (FAIL)\n" >> "$FINAL_REPORT"
-        final_report
+        [ "$STOP_ON_FAILURE" ] && final_report
     fi  
 }
 
@@ -118,7 +120,7 @@
 }
 
 TEMP=$(getopt -o h \
---long "help,caasp4,ceph-salt-from-source,full,makecheck,nautilus,octopus,pacific,ses5,ses6,ses7" \
+--long "help,caasp4,ceph-salt-from-source,full,makecheck,nautilus,no-stop-on-failure,octopus,pacific,ses5,ses6,ses7" \
 -n 'standalone.sh' -- "$@") || ( echo "Terminating..." >&2 ; exit 1 )
 eval set -- "$TEMP"
 
@@ -134,6 +136,7 @@
 SES5=""
 SES6=""
 SES7=""
+STOP_ON_FAILURE="not_empty"
 while true ; do
     case "$1" in
         --caasp4)                CAASP4="--caasp4" ; shift ;;
@@ -141,6 +144,7 @@
         --full)                  FULL="$1" ; shift ;;
         --makecheck)             MAKECHECK="$1" ; shift ;;
         --nautilus)              NAUTILUS="$1" ; shift ;;
+        --no-stop-on-failure)    STOP_ON_FAILURE="" ; shift ;;
         --octopus)               OCTOPUS="$1" ; shift ;;
         --pacific)               PACIFIC="$1" ; shift ;;
         --ses5)                  SES5="$1" ; shift ;;
@@ -182,6 +186,8 @@
     run_cmd sesdev destroy --non-interactive ses5-1node
     run_cmd sesdev create ses5 --non-interactive --roles "[master,client,openattic],[storage,mon,mgr,rgw],[storage,mon,mgr,mds,nfs],[storage,mon,mgr,mds,rgw,nfs]" ses5-4node
     run_cmd sesdev qa-test ses5-4node
+    run_cmd sesdev supportconfig ses5-4node node1
+    rm -f scc*
     # consider uncommenting after the following bugs are fixed:
     # - https://github.com/SUSE/sesdev/issues/276
     # - https://github.com/SUSE/sesdev/issues/291
@@ -205,6 +211,8 @@
     run_cmd sesdev destroy --non-interactive ses6-1node
     run_cmd sesdev create ses6 --non-interactive ses6-4node
     run_cmd sesdev qa-test ses6-4node
+    run_cmd sesdev supportconfig ses6-4node node1
+    rm -f scc*
     # consider uncommenting after the following bugs are fixed:
     # - https://github.com/SUSE/sesdev/issues/276
     # - https://github.com/SUSE/sesdev/issues/291
@@ -228,6 +236,8 @@
     run_cmd sesdev destroy --non-interactive ses7-1node
     run_cmd sesdev create ses7 --non-interactive $CEPH_SALT_FROM_SOURCE ses7-4node
     run_cmd sesdev qa-test ses7-4node
+    run_cmd sesdev supportconfig ses7-4node node1
+    rm -f scc*
     # consider uncommenting after the following bugs are fixed:
     # - bsc#1170498
     # - https://github.com/SUSE/sesdev/issues/276
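The `--no-stop-on-failure` plumbing above follows the usual getopt(1) idiom: add the option to the `--long` spec, default the variable before the parse loop, and clear it in the matching case arm. A self-contained sketch (the `demo` name and the simulated argument list are made up for illustration):

```shell
#!/bin/bash
# minimal sketch of the getopt pattern standalone.sh uses
set -- --no-stop-on-failure            # simulate command-line arguments
STOP_ON_FAILURE="not_empty"            # default: stop on first failure
TEMP=$(getopt -o h --long "help,no-stop-on-failure" -n 'demo' -- "$@") || exit 1
eval set -- "$TEMP"
while true ; do
    case "$1" in
        --no-stop-on-failure) STOP_ON_FAILURE="" ; shift ;;
        -h|--help)            echo "usage: demo [--no-stop-on-failure]" ; shift ;;
        --) shift ; break ;;
        *) break ;;
    esac
done
if [ "$STOP_ON_FAILURE" ] ; then
    echo "will stop on failure"
else
    echo "will keep going"
fi
```

With the simulated argument the variable ends up empty, so later code like `[ "$STOP_ON_FAILURE" ] && final_report` becomes a no-op on failure.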
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/qa/common/common.sh new/sesdev-1.3.0+1590413709.g4ad4e03/qa/common/common.sh
--- old/sesdev-1.2.0+1588616857.gaa3df4c/qa/common/common.sh    2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/qa/common/common.sh    2020-05-25 15:35:09.522914969 +0200
@@ -359,6 +359,46 @@
     fi
 }
 
+function maybe_wait_for_rgws_test {
+    local expected_rgws="$1"
+    local actual_rgws
+    local minutes_to_wait
+    minutes_to_wait="5"
+    local minute
+    local i
+    local success
+    echo
+    echo "WWWW: maybe_wait_for_rgws_test"
+    if [ "$expected_rgws" ] ; then
+        echo "Waiting up to $minutes_to_wait minutes for all $expected_rgws 
RGW(s) to show up..."
+        for minute in $(seq 1 "$minutes_to_wait") ; do
+            for i in $(seq 1 4) ; do
+                set -x
+                actual_rgws="$(json_total_rgws)"
+                set +x
+                if [ "$actual_rgws" = "$expected_rgws" ] ; then
+                    success="not_empty"
+                    break 2
+                else
+                    _grace_period 15 "$i"
+                fi
+            done
+            echo "Minutes left to wait: $((minutes_to_wait - minute))"
+        done
+    else
+        success="not_empty"
+        echo "No RGWs expected: nothing to wait for."
+    fi
+    if [ "$success" ] ; then
+        echo "maybe_wait_for_rgws_test: OK"
+        echo
+    else
+        echo "maybe_wait_for_rgws_test: FAIL"
+        echo
+        false
+    fi
+}
+
 function mgr_is_available_test {
     echo
     echo "WWWW: mgr_is_available_test"
@@ -379,6 +419,10 @@
     metadata_mdss="$(json_metadata_mdss)"
     local metadata_osds
     metadata_osds="$(json_metadata_osds)"
+    if [ "$VERSION_ID" = "15.2" ] ; then
+        local metadata_rgws
+        metadata_rgws="$(json_metadata_rgws)"
+    fi
     set +x
     local success
     success="yes"
@@ -386,9 +430,11 @@
     local expected_mons
     local expected_mdss
     local expected_osds
+    local expected_rgws
     [ "$MGR_NODES" ] && expected_mgrs="$MGR_NODES"
     [ "$MON_NODES" ] && expected_mons="$MON_NODES"
     [ "$MDS_NODES" ] && expected_mdss="$MDS_NODES"
+    [ "$RGW_NODES" ] && expected_rgws="$RGW_NODES"
     [ "$OSDS" ]      && expected_osds="$OSDS"
     if [ "$expected_mons" ] ; then
         echo "MONs metadata/expected: $metadata_mons/$expected_mons"
@@ -396,7 +442,13 @@
     fi
     if [ "$expected_mgrs" ] ; then
         echo "MGRs metadata/expected: $metadata_mgrs/$expected_mgrs"
-        [ "$metadata_mgrs" = "$expected_mgrs" ] || success=""
+        if [ "$metadata_mgrs" = "$expected_mgrs" ] ; then
+            true  # normal success case
+        elif [ "$expected_mgrs" -gt "1" ] && [ "$metadata_mgrs" = 
"$((expected_mgrs + 1))" ] ; then
+            true  # workaround for https://tracker.ceph.com/issues/45093
+        else
+            success=""
+        fi
     fi
     if [ "$expected_mdss" ] ; then
         echo "MDSs metadata/expected: $metadata_mdss/$expected_mdss"
@@ -406,6 +458,12 @@
         echo "OSDs metadata/expected: $metadata_osds/$expected_osds"
         [ "$metadata_osds" = "$expected_osds" ] || success=""
     fi
+    if [ "$VERSION_ID" = "15.2" ] ; then
+        if [ "$expected_rgws" ] ; then
+            echo "RGWs metadata/expected: $metadata_rgws/$expected_rgws"
+            [ "$metadata_rgws" = "$expected_rgws" ] || success=""
+        fi
+    fi
     if [ "$success" ] ; then
         echo "number_of_daemons_expected_vs_metadata_test: OK"
         echo
@@ -429,6 +487,8 @@
     actual_mon_nodes="$(json_total_mons)"
     local actual_mds_nodes
     actual_mds_nodes="$(json_total_mdss)"
+    local actual_rgw_nodes
+    actual_rgw_nodes="$(json_total_rgws)"
     local actual_osd_nodes
     actual_osd_nodes="$(json_osd_nodes)"
     local actual_osds
@@ -440,12 +500,14 @@
     local expected_mgr_nodes
     local expected_mon_nodes
     local expected_mds_nodes
+    local expected_rgw_nodes
     local expected_osd_nodes
     local expected_osds
     [ "$TOTAL_NODES" ] && expected_total_nodes="$TOTAL_NODES"
     [ "$MGR_NODES" ]   && expected_mgr_nodes="$MGR_NODES"
     [ "$MON_NODES" ]   && expected_mon_nodes="$MON_NODES"
     [ "$MDS_NODES" ]   && expected_mds_nodes="$MDS_NODES"
+    [ "$RGW_NODES" ]   && expected_rgw_nodes="$RGW_NODES"
     [ "$OSD_NODES" ]   && expected_osd_nodes="$OSD_NODES"
     [ "$OSDS" ]        && expected_osds="$OSDS"
     if [ "$expected_total_nodes" ] ; then
@@ -464,6 +526,10 @@
         echo "MDS nodes actual/expected:    
$actual_mds_nodes/$expected_mds_nodes"
         [ "$actual_mds_nodes" = "$expected_mds_nodes" ] || success=""
     fi
+    if [ "$expected_rgw_nodes" ] ; then
+        echo "RGW nodes actual/expected:    
$actual_rgw_nodes/$expected_rgw_nodes"
+        [ "$actual_rgw_nodes" = "$expected_rgw_nodes" ] || success=""
+    fi
     if [ "$expected_osd_nodes" ] ; then
         echo "OSD nodes actual/expected:    
$actual_osd_nodes/$expected_osd_nodes"
         [ "$actual_osd_nodes" = "$expected_osd_nodes" ] || success=""
@@ -472,7 +538,6 @@
         echo "total OSDs actual/expected:   $actual_osds/$expected_osds"
         [ "$actual_osds" = "$expected_osds" ] || success=""
     fi
-#    echo "RGW nodes expected:     $RGW_NODES"
 #    echo "IGW nodes expected:     $IGW_NODES"
 #    echo "NFS nodes expected:     $NFS_NODES"
     if [ "$success" ] ; then
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/qa/common/json.sh new/sesdev-1.3.0+1590413709.g4ad4e03/qa/common/json.sh
--- old/sesdev-1.2.0+1588616857.gaa3df4c/qa/common/json.sh      2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/qa/common/json.sh      2020-05-25 15:35:09.522914969 +0200
@@ -106,6 +106,14 @@
     fi
 }
 
+function json_metadata_rgws {
+    ceph orch ps -f json-pretty | jq '[.[] | select(.daemon_type=="rgw" and .status_desc=="running")] | length'
+}
+
+function json_total_rgws {
+    ceph status -f json-pretty | jq '.servicemap.services.rgw.daemons | del(.summary) | length'
+}
+
 function json_metadata_osds {
     ceph osd metadata | jq -r '. | length'
 }
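The two jq helpers added above count daemons out of Ceph's JSON output. The `json_total_rgws` filter can be tried offline against a hand-written stand-in for `ceph status -f json-pretty` (the daemon names below are invented; note the `summary` key that `del(.summary)` must drop before counting):

```shell
#!/bin/bash
# stand-in for the servicemap portion of "ceph status -f json-pretty"
status='{"servicemap":{"services":{"rgw":{"daemons":{"summary":"","rgw.a":{},"rgw.b":{}}}}}}'
echo "$status" | jq '.servicemap.services.rgw.daemons | del(.summary) | length'
# prints 2: two rgw daemons, with the "summary" entry excluded from the count
```

Without the `del(.summary)` step the count would be off by one, since Ceph mixes that bookkeeping key in with the per-daemon entries.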
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/qa/health-ok.sh new/sesdev-1.3.0+1590413709.g4ad4e03/qa/health-ok.sh
--- old/sesdev-1.2.0+1588616857.gaa3df4c/qa/health-ok.sh        2020-05-04 20:27:36.899887709 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/qa/health-ok.sh        2020-05-25 15:35:09.522914969 +0200
@@ -117,6 +117,7 @@
 mgr_is_available_test
 maybe_wait_for_osd_nodes_test "$OSD_NODES"  # it might take a long time for OSD nodes to show up
 maybe_wait_for_mdss_test "$MDS_NODES"  # it might take a long time for MDSs to be ready
+maybe_wait_for_rgws_test "$RGW_NODES"  # it might take a long time for RGWs to be ready
 number_of_daemons_expected_vs_metadata_test
 number_of_nodes_actual_vs_expected_test
 ceph_health_test
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/sesdev/__init__.py new/sesdev-1.3.0+1590413709.g4ad4e03/sesdev/__init__.py
--- old/sesdev-1.2.0+1588616857.gaa3df4c/sesdev/__init__.py     2020-05-04 20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/sesdev/__init__.py     2020-05-25 15:35:09.526914968 +0200
@@ -70,16 +70,14 @@
                      help='Allows to stop deployment before creating ceph-salt configuration'),
         click.option('--stop-before-ceph-salt-apply', is_flag=True, default=False,
                      help='Allows to stop deployment before applying ceph-salt configuration'),
+        click.option('--stop-before-ceph-orch-apply', is_flag=True, default=False,
+                     help='Allows to stop deployment before applying ceph orch service spec'),
         click.option('--ceph-salt-repo', type=str, default=None,
                      help='ceph-salt Git repo URL'),
         click.option('--ceph-salt-branch', type=str, default=None,
                      help='ceph-salt Git branch'),
         click.option('--image-path', type=str, default=None,
                      help='registry path from which to download Ceph container image'),
-        click.option('--deploy-mons/--no-deploy-mons', default=True, help='Deploy Ceph MONs'),
-        click.option('--deploy-mgrs/--no-deploy-mgrs', default=True, help='Deploy Ceph MGRs'),
-        click.option('--deploy-osds/--no-deploy-osds', default=True, help='Deploy Ceph OSDs'),
-        click.option('--deploy-mdss/--no-deploy-mdss', default=True, help='Deploy Ceph MDSs'),
         click.option('--salt/--ceph-salt', default=False,
                      help='Use "salt" (instead of "ceph-salt") to run ceph-salt formula'),
     ]
@@ -148,7 +146,6 @@
                 role = role[1:-1]
                 if role:
                     _node.append(role)
-                    _node = list(set(_node))  # eliminate duplicate roles
                 _node.sort()
                 _roles.append(_node)
             else:
@@ -157,7 +154,6 @@
         elif role.endswith(']'):
             role = role[:-1]
             _node.append(role)
-            _node = list(set(_node))  # eliminate duplicate roles
             _node.sort()
             _roles.append(_node)
         else:
@@ -468,13 +464,10 @@
                        deepsea_branch=None,
                        ceph_salt_repo=None,
                        ceph_salt_branch=None,
-                       stop_before_ceph_salt_config=False,
-                       stop_before_ceph_salt_apply=False,
+                       stop_before_ceph_salt_config=None,
+                       stop_before_ceph_salt_apply=None,
+                       stop_before_ceph_orch_apply=None,
                        image_path=None,
-                       deploy_mons=None,
-                       deploy_mgrs=None,
-                       deploy_osds=None,
-                       deploy_mdss=None,
                        ceph_repo=None,
                        ceph_branch=None,
                        username=None,
@@ -613,6 +606,9 @@
     if stop_before_ceph_salt_apply is not None:
         settings_dict['stop_before_ceph_salt_apply'] = stop_before_ceph_salt_apply
 
+    if stop_before_ceph_orch_apply is not None:
+        settings_dict['stop_before_ceph_orch_apply'] = stop_before_ceph_orch_apply
+
     if image_path:
         settings_dict['image_path'] = image_path
 
@@ -634,18 +630,6 @@
     if stop_before_run_make_check is not None:
         settings_dict['makecheck_stop_before_run_make_check'] = stop_before_run_make_check
 
-    if deploy_mons is not None:
-        settings_dict['ceph_salt_deploy_mons'] = deploy_mons
-
-    if deploy_mgrs is not None:
-        settings_dict['ceph_salt_deploy_mgrs'] = deploy_mgrs
-
-    if deploy_osds is not None:
-        settings_dict['ceph_salt_deploy_osds'] = deploy_osds
-
-    if deploy_mdss is not None:
-        settings_dict['ceph_salt_deploy_mdss'] = deploy_mdss
-
     for folder in synced_folder:
         try:
             src, dst = folder.split(':')
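The hunk above drops the inline de-duplication (`list(set(_node))`) from the roles-string parser; duplicate roles are now rejected outright via the new `DuplicateRolesNotSupported` exception imported in seslib further down. As a rough sketch of what the parser does with a string like `"[master,mon],[storage]"` (a simplified stand-in, not the actual implementation):

```python
def parse_roles(roles_string):
    """Simplified sketch of sesdev's roles-string parsing (illustrative only):
    '[master,mon],[storage]' -> [['master', 'mon'], ['storage']]."""
    nodes = []
    for chunk in roles_string.split("],["):
        # strip the surrounding brackets, split into individual roles
        roles = [r.strip() for r in chunk.strip("[] ").split(",") if r.strip()]
        roles.sort()  # the real parser also sorts each node's role list
        nodes.append(roles)
    return nodes

print(parse_roles("[master,client,openattic],[storage,mon,mgr,rgw]"))
# [['client', 'master', 'openattic'], ['mgr', 'mon', 'rgw', 'storage']]
```

Each bracketed group becomes one node; sorting makes role lists comparable regardless of the order the user typed them in.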
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/sesdev.spec new/sesdev-1.3.0+1590413709.g4ad4e03/sesdev.spec
--- old/sesdev-1.2.0+1588616857.gaa3df4c/sesdev.spec    2020-05-04 20:27:37.183887545 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/sesdev.spec    2020-05-25 15:35:09.838914906 +0200
@@ -16,7 +16,7 @@
 #
 
 Name:           sesdev
-Version:        1.2.0+1588616857.gaa3df4c
+Version:        1.3.0+1590413709.g4ad4e03
 Release:        1%{?dist}
 Summary:        CLI tool to deploy and manage SES clusters
 License:        MIT
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/__init__.py new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/__init__.py
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/__init__.py     2020-05-04 20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/__init__.py     2020-05-25 15:35:09.526914968 +0200
@@ -20,7 +20,8 @@
                         ServiceNotFound, ExclusiveRoles, RoleNotKnown, RoleNotSupported, \
                         CmdException, VagrantSshConfigNoHostName, ScpInvalidSourceOrDestination, \
                         UniqueRoleViolation, SettingNotKnown, SupportconfigOnlyOnSLE, \
-                        NoPrometheusGrafanaInSES5, BadMakeCheckRolesNodes
+                        NoPrometheusGrafanaInSES5, BadMakeCheckRolesNodes, \
+                        DuplicateRolesNotSupported, NoSupportConfigTarballFound
 
 
 JINJA_ENV = Environment(loader=PackageLoader('seslib', 'templates'), trim_blocks=True)
@@ -142,10 +143,6 @@
                        'SLE-15-SP2-Product-WE-POOL-x86_64-Media1/',
         'workstation-update': 'http://download.suse.de/ibs/SUSE/Updates/SLE-Product-WE/15-SP2/'
                               'x86_64/update/',
-        'containers': 'http://download.suse.de/ibs/SUSE/Products/SLE-Module-Containers/15-SP2/'
-                      'x86_64/product/',
-        'containers-update': 'http://download.suse.de/ibs/SUSE/Updates/SLE-Module-Containers/'
-                             '15-SP2/x86_64/update/',
         'storage7-media': 'http://download.suse.de/ibs/SUSE:/SLE-15-SP2:/Update:/Products:/SES7/'
                           'images/repo/SUSE-Enterprise-Storage-7-POOL-x86_64-Media1/',
     },
@@ -223,7 +220,7 @@
     'nautilus': NAUTILUS_DEFAULT_ROLES,
     'octopus': OCTOPUS_DEFAULT_ROLES,
     'pacific': OCTOPUS_DEFAULT_ROLES,
-    'caasp4': [["master"], ["worker"], ["loadbalancer"], ["storage"]],
+    'caasp4': [["master"], ["worker"], ["worker"], ["loadbalancer"]],
     'makecheck': [["makecheck"]],
 }
 
@@ -523,6 +520,11 @@
         'help': 'Stops deployment before ceph-salt apply',
         'default': False,
     },
+    'stop_before_ceph_orch_apply': {
+        'type': bool,
+        'help': 'Stops deployment before ceph orch apply',
+        'default': False,
+    },
     'image_path': {
         'type': str,
         'help': 'Container image path for Ceph daemons',
@@ -558,26 +560,6 @@
         'help': 'Stop before running run-make-check.sh (make check)',
         'default': False,
     },
-    'ceph_salt_deploy_mons': {
-        'type': bool,
-        'help': 'Tell ceph-salt to deploy Ceph MONs',
-        'default': True,
-    },
-    'ceph_salt_deploy_mgrs': {
-        'type': bool,
-        'help': 'Tell ceph-salt to deploy Ceph MGRs',
-        'default': True,
-    },
-    'ceph_salt_deploy_osds': {
-        'type': bool,
-        'help': 'Tell ceph-salt to deploy Ceph OSDs',
-        'default': True,
-    },
-    'ceph_salt_deploy_mdss': {
-        'type': bool,
-        'help': 'Tell ceph-salt to deploy Ceph MDSs',
-        'default': True,
-    },
     'use_salt': {
         'type': bool,
         'help': 'Use "salt" (or "salt-run") to apply Salt Formula (or execute DeepSea Stages)',
@@ -960,7 +942,7 @@
         self._generate_nodes()
 
     @property
-    def dep_dir(self):
+    def _dep_dir(self):
         return os.path.join(GlobalSettings.A_WORKING_DIR, self.dep_id)
 
     def _needs_cluster_network(self):
@@ -1014,9 +996,8 @@
     def _generate_nodes(self):
         node_id = 0
         worker_id = 0
-        storage_id = 0
         loadbl_id = 0
-        storage_id = 0
+        nfs_id = 0
         _log_debug("_generate_nodes: about to process cluster roles: {}"
                    .format(self.settings.roles))
         for node_roles in self.settings.roles:  # loop once for every node in cluster
@@ -1069,12 +1050,12 @@
                     name = 'loadbl{}'.format(loadbl_id)
                     fqdn = 'loadbl{}.{}'.format(loadbl_id,
                                                 self.settings.domain.format(self.dep_id))
-                elif 'storage' in node_roles and self.settings.version == 
'caasp4':
-                    storage_id += 1
+                elif 'nfs' in node_roles and self.settings.version == 'caasp4':
+                    nfs_id += 1
                     node_id += 1
-                    name = 'storage{}'.format(storage_id)
-                    fqdn = 'storage{}.{}'.format(storage_id,
-                                                 self.settings.domain.format(self.dep_id))
+                    name = 'nfs{}'.format(nfs_id)
+                    fqdn = 'nfs{}.{}'.format(nfs_id,
+                                             self.settings.domain.format(self.dep_id))
                 else:
                     node_id += 1
                     name = 'node{}'.format(node_id)
@@ -1172,7 +1153,7 @@
 
             self.nodes[node.name] = node
 
-    def generate_vagrantfile(self):
+    def _generate_vagrantfile(self):
         vagrant_box = self.settings.os
 
         try:
@@ -1241,6 +1222,9 @@
             'deepsea_git_repo': self.settings.deepsea_git_repo,
             'deepsea_git_branch': self.settings.deepsea_git_branch,
             'version': self.settings.version,
+            'deploy_salt': bool(self.settings.version != 'makecheck' and
+                                self.settings.version != 'caasp4' and
+                                not self.suma),
             'stop_before_stage': self.settings.stop_before_stage,
             'deployment_tool': self.settings.deployment_tool,
             'version_repos_prio': version_repos_prio,
@@ -1267,11 +1251,8 @@
             'ceph_salt_fetch_github_pr_merges': ceph_salt_fetch_github_pr_merges,
             'stop_before_ceph_salt_config': self.settings.stop_before_ceph_salt_config,
             'stop_before_ceph_salt_apply': self.settings.stop_before_ceph_salt_apply,
+            'stop_before_ceph_orch_apply': self.settings.stop_before_ceph_orch_apply,
             'image_path': self.settings.image_path,
-            'ceph_salt_deploy_mons': self.settings.ceph_salt_deploy_mons,
-            'ceph_salt_deploy_mgrs': self.settings.ceph_salt_deploy_mgrs,
-            'ceph_salt_deploy_osds': self.settings.ceph_salt_deploy_osds,
-            'ceph_salt_deploy_mdss': self.settings.ceph_salt_deploy_mdss,
             'use_salt': self.settings.use_salt,
             'node_manager': NodeManager(list(self.nodes.values())),
             'caasp_deploy_ses': self.settings.caasp_deploy_ses,
@@ -1298,13 +1279,13 @@
         return scripts
 
     def save(self):
-        scripts = self.generate_vagrantfile()
+        scripts = self._generate_vagrantfile()
         key = RSA.generate(2048)
         private_key = key.exportKey('PEM')
         public_key = key.publickey().exportKey('OpenSSH')
 
-        os.makedirs(self.dep_dir, exist_ok=False)
-        metadata_file = os.path.join(self.dep_dir, METADATA_FILENAME)
+        os.makedirs(self._dep_dir, exist_ok=False)
+        metadata_file = os.path.join(self._dep_dir, METADATA_FILENAME)
         with open(metadata_file, 'w') as file:
             json.dump({
                 'id': self.dep_id,
@@ -1312,12 +1293,12 @@
             }, file, cls=SettingsEncoder)
 
         for filename, script in scripts.items():
-            full_path = os.path.join(self.dep_dir, filename)
+            full_path = os.path.join(self._dep_dir, filename)
             with open(full_path, 'w') as file:
                 file.write(script)
 
         # generate ssh key pair
-        keys_dir = os.path.join(self.dep_dir, 'keys')
+        keys_dir = os.path.join(self._dep_dir, 'keys')
         os.makedirs(keys_dir)
 
         with open(os.path.join(keys_dir, GlobalSettings.SSH_KEY_NAME), 'w') as file:
@@ -1329,10 +1310,10 @@
         os.chmod(os.path.join(keys_dir, str(GlobalSettings.SSH_KEY_NAME + '.pub')), 0o600)
 
         # bin dir with helper scripts
-        bin_dir = os.path.join(self.dep_dir, 'bin')
+        bin_dir = os.path.join(self._dep_dir, 'bin')
         os.makedirs(bin_dir)
 
-    def get_vagrant_box(self, log_handler):
+    def _get_vagrant_box(self, log_handler):
         if self.settings.vagrant_box:
             using_custom_box = True
             vagrant_box = self.settings.vagrant_box
@@ -1372,13 +1353,13 @@
             tools.run_async(["vagrant", "box", "add", "--provider", "libvirt", "--name",
                              self.settings.os, OS_BOX_MAPPING[self.settings.os]], log_handler)
 
-    def vagrant_up(self, node, log_handler):
+    def _vagrant_up(self, node, log_handler):
         cmd = ["vagrant", "up"]
         if node is not None:
             cmd.append(node)
         if GlobalSettings.VAGRANT_DEBUG:
             cmd.append('--debug')
-        tools.run_async(cmd, log_handler, self.dep_dir)
+        tools.run_async(cmd, log_handler, self._dep_dir)
 
     def destroy(self, log_handler, destroy_networks=False):
 
@@ -1397,9 +1378,9 @@
             cmd = ["vagrant", "destroy", node.name, "--force"]
             if GlobalSettings.VAGRANT_DEBUG:
                 cmd.append('--debug')
-            tools.run_async(cmd, log_handler, self.dep_dir)
+            tools.run_async(cmd, log_handler, self._dep_dir)
 
-        shutil.rmtree(self.dep_dir)
+        shutil.rmtree(self._dep_dir)
         # clean up any orphaned volumes
         images_to_remove = self.box.get_images_by_deployment(self.dep_id)
         if images_to_remove:
@@ -1447,19 +1428,19 @@
             raise NodeDoesNotExist(node)
 
         if self.settings.vm_engine == 'libvirt':
-            self.get_vagrant_box(log_handler)
-        self.vagrant_up(node, log_handler)
+            self._get_vagrant_box(log_handler)
+        self._vagrant_up(node, log_handler)
 
     def __str__(self):
         return self.dep_id
 
     def load_status(self):
-        if not os.path.exists(os.path.join(self.dep_dir, '.vagrant')):
+        if not os.path.exists(os.path.join(self._dep_dir, '.vagrant')):
             for node in self.nodes.values():
                 node.status = "not deployed"
             return
 
-        out = tools.run_sync(["vagrant", "status"], cwd=self.dep_dir)
+        out = tools.run_sync(["vagrant", "status"], cwd=self._dep_dir)
         for line in [line.strip() for line in out.split('\n')]:
             if line:
                 line_arr = line.split(' ', 1)
@@ -1554,12 +1535,17 @@
                 raise RoleNotSupported('worker', self.settings.version)
             if self.node_counts['loadbalancer'] > 0:
                 raise RoleNotSupported('loadbalancer', self.settings.version)
+        # no node may have more than one of any role
+        for node in self.settings.roles:
+            for role in KNOWN_ROLES:
+                if node.count(role) > 1:
+                    raise DuplicateRolesNotSupported(role)
 
     def _vagrant_ssh_config(self, name):
         if name not in self.nodes:
             raise NodeDoesNotExist(name)
 
-        out = tools.run_sync(["vagrant", "ssh-config", name], cwd=self.dep_dir)
+        out = tools.run_sync(["vagrant", "ssh-config", name], cwd=self._dep_dir)
 
         address = None
         proxycmd = None
@@ -1573,7 +1559,7 @@
         if address is None:
             raise VagrantSshConfigNoHostName(name)
 
-        dep_private_key = os.path.join(self.dep_dir, str("keys/" + GlobalSettings.SSH_KEY_NAME))
+        dep_private_key = os.path.join(self._dep_dir, str("keys/" + GlobalSettings.SSH_KEY_NAME))
 
         return (address, proxycmd, dep_private_key)
 
@@ -1591,7 +1577,7 @@
         return _cmd
 
     def ssh(self, name, command):
-        tools.run_interactive(self._ssh_cmd(name, command))
+        return tools.run_interactive(self._ssh_cmd(name, command))
 
     def _scp_cmd(self, source, destination, recurse=False):
         host_is_source = False
@@ -1657,15 +1643,26 @@
             ))
         self.ssh(name, ('supportconfig',))
         log_handler("=> Grabbing the resulting tarball from the cluster node\n")
-        self.scp(str(name) + ':/var/log/nts*', '.')
+        scc_exists = self.ssh(name, ('ls', '/var/log/scc*'))
+        nts_exists = self.ssh(name, ('ls', '/var/log/nts*'))
+        glob_to_get = None
+        if scc_exists == 0:
+            log_handler("Found /var/log/scc* (supportconfig) files on {}\n".format(name))
+            glob_to_get = 'scc*'
+        elif nts_exists == 0:
+            log_handler("Found /var/log/nts* (supportconfig) files on {}\n".format(name))
+            glob_to_get = 'nts*'
+        else:
+            raise NoSupportConfigTarballFound(name)
+        self.scp('{n}:/var/log/{g}'.format(n=name, g=glob_to_get), '.')
         log_handler("=> Deleting the tarball from the cluster node\n")
-        self.ssh(name, ('rm', '/var/log/nts*'))
+        self.ssh(name, ('rm', '/var/log/{}'.format(glob_to_get)))
 
     def qa_test(self, log_handler):
         tools.run_async(
             ["vagrant", "provision", "--provision-with", "qa-test"],
             log_handler,
-            self.dep_dir
+            self._dep_dir
             )
 
     def _find_service_node(self, service):
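[Editor's example] The scc*/nts* fallback introduced above can be exercised locally. The following is a minimal shell sketch of the same decision logic; the directory and filename are invented for the demonstration, and a local `ls` stands in for the `self.ssh(name, ('ls', ...))` probe used in the Python code:

```shell
# Simulate a cluster node's /var/log with a freshly generated tarball.
# Newer supportconfig releases name tarballs scc_*, older ones nts_*.
log_dir=$(mktemp -d)
touch "$log_dir/scc_node1_200525_1200.txz"

glob_to_get=""
if ls "$log_dir"/scc* >/dev/null 2>&1; then
    glob_to_get="scc*"          # newer prefix found
elif ls "$log_dir"/nts* >/dev/null 2>&1; then
    glob_to_get="nts*"          # fall back to the older prefix
else
    echo "No supportconfig tarball found" >&2
fi
echo "$glob_to_get"
```

With the simulated `scc_*` file present, the sketch selects the `scc*` glob, mirroring the first branch of the new `supportconfig` handling.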
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/exceptions.py 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/exceptions.py
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/exceptions.py   2020-05-04 
20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/exceptions.py   2020-05-25 
15:35:09.526914968 +0200
@@ -168,3 +168,17 @@
             "\"makecheck\". Since this is the default, you can simply omit "
             "the --roles option when running \"sesdev create makecheck\"."
             )
+
+
+class DuplicateRolesNotSupported(SesDevException):
+    def __init__(self, role):
+        super(DuplicateRolesNotSupported, self).__init__(
+            "A node with more than one \"{r}\" role was detected. "
+            "sesdev does not support more than one \"{r}\" role per node.".format(r=role)
+            )
+
+
+class NoSupportConfigTarballFound(SesDevException):
+    def __init__(self, node):
+        super(NoSupportConfigTarballFound, self).__init__(
+            "No supportconfig tarball found on node {}".format(node))
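[Editor's example] The check that raises `DuplicateRolesNotSupported` simply counts occurrences of each known role on every node. A rough shell equivalent of the counting step, with an invented role list (`mon` deliberately duplicated):

```shell
# One node's role list, space-separated; 'mon' appears twice.
node_roles="master storage mon mon"
dupes=""
for role in master storage mon mgr; do
    # Count exact-match lines after splitting the list one role per line.
    count=$(printf '%s\n' $node_roles | grep -cx "$role")
    if [ "$count" -gt 1 ]; then
        dupes="$dupes $role"
    fi
done
echo "duplicate roles:$dupes"
```

Any role counted more than once would trigger the exception in the Python version.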
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/master.sh.j2 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/master.sh.j2
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/master.sh.j2    
2020-05-04 20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/master.sh.j2    
2020-05-25 15:35:09.526914968 +0200
@@ -1,11 +1,11 @@
 
 zypper --non-interactive install --type pattern SUSE-CaaSP-Management
 
-{% if node.name == 'master1' %}
+{% if node.name == 'master' %}
 
-function wait_for_masters_ready {
-    printf "Waiting for masters to be ready"
-    until [[ $(kubectl get nodes 2>/dev/null | egrep -c "master[0-9]\s+Ready") -eq {{node_manager.get_by_role('master') | length}} ]]; do
+function wait_for_master_ready {
+    printf "Waiting for master to be ready"
+    until [[ $(kubectl get nodes 2>/dev/null | egrep -c "master\s+Ready") -eq 1 ]]; do
          sleep 5
                 printf "."
     done
@@ -67,7 +67,7 @@
 skuba -v ${SKUBA_VERBOSITY} node join --role worker --user sles --sudo --target {{ _node.name }} {{ _node.name }}
 {% endfor %}
 
-wait_for_masters_ready
+wait_for_master_ready
 wait_for_workers_ready
 
 skuba -v ${SKUBA_VERBOSITY} cluster status
@@ -78,8 +78,10 @@
 kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
 helm init --service-account=tiller --wait
 
+{% if nfs_nodes > 0 %}
 # adding nfs storage class
-helm install --name=nfs-client --set nfs.server=storage1 --set nfs.path=/nfs --set storageClass.defaultClass=true stable/nfs-client-provisioner
+helm install --name=nfs-client --set nfs.server=nfs1 --set nfs.path=/nfs --set storageClass.defaultClass=true stable/nfs-client-provisioner
+{% endif %}
 
 # Installing Kubernetes Dashboard
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc2/aio/deploy/recommended.yaml
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/nfs.sh.j2 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/nfs.sh.j2
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/nfs.sh.j2       
1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/nfs.sh.j2       
2020-05-25 15:35:09.526914968 +0200
@@ -0,0 +1,9 @@
+
+zypper --non-interactive install nfs-kernel-server
+
+mkdir /nfs
+echo '/nfs     *.{{ domain }}(rw,no_root_squash)' >/etc/exports
+systemctl enable --now nfs-server
+exportfs -a
+
+touch /tmp/ready
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/provision.sh.j2 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/provision.sh.j2
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/provision.sh.j2 
2020-05-04 20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/provision.sh.j2 
2020-05-25 15:35:09.526914968 +0200
@@ -21,8 +21,8 @@
 {% include "caasp/loadbalancer.sh.j2" %}
 {% endif %}
 
-{% if node.has_role('storage') %}
-{% include "caasp/storage.sh.j2" %}
+{% if node.has_role('nfs') %}
+{% include "caasp/nfs.sh.j2" %}
 {% endif %}
 
 {% if node.has_role('worker') %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/storage.sh.j2 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/storage.sh.j2
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/caasp/storage.sh.j2   
2020-05-04 20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/caasp/storage.sh.j2   
1970-01-01 01:00:00.000000000 +0100
@@ -1,9 +0,0 @@
-
-zypper --non-interactive install nfs-kernel-server
-
-mkdir /nfs
-echo '/nfs     *.{{ domain }}(rw,no_root_squash)' >/etc/exports
-systemctl enable --now nfs-server
-exportfs -a
-
-touch /tmp/ready
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2
 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2
--- 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2
  2020-05-04 20:27:36.903887707 +0200
+++ 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2
  2020-05-25 15:35:09.526914968 +0200
@@ -41,9 +41,11 @@
 MON_NODES_COMMA_SEPARATED_LIST=""
 MGR_NODES_COMMA_SEPARATED_LIST=""
 MDS_NODES_COMMA_SEPARATED_LIST=""
+RGW_NODES_SPACE_SEPARATED_LIST=""
 {% for node in nodes %}
 {% if node.has_roles() and not node.has_exclusive_role('client') %}
 ceph-salt config /ceph_cluster/minions add {{ node.fqdn }}
+ceph-salt config /ceph_cluster/roles/cephadm add {{ node.fqdn }}
 ceph-salt config /ceph_cluster/roles/admin add {{ node.fqdn }}
 {% endif %}
 {% if node.has_role('bootstrap') %}
@@ -58,10 +60,14 @@
 {% if node.has_role('mds') %}
 MDS_NODES_COMMA_SEPARATED_LIST+="{{ node.name }},"
 {% endif %}
+{% if node.has_role('rgw') %}
+RGW_NODES_SPACE_SEPARATED_LIST+="{{ node.name }} "
+{% endif %}
 {% endfor %}
 MON_NODES_COMMA_SEPARATED_LIST="${MON_NODES_COMMA_SEPARATED_LIST%,*}"
 MGR_NODES_COMMA_SEPARATED_LIST="${MGR_NODES_COMMA_SEPARATED_LIST%,*}"
 MDS_NODES_COMMA_SEPARATED_LIST="${MDS_NODES_COMMA_SEPARATED_LIST%,*}"
+RGW_NODES_SPACE_SEPARATED_LIST="${RGW_NODES_SPACE_SEPARATED_LIST%"${RGW_NODES_SPACE_SEPARATED_LIST##*[![:space:]]}"}"
 
 ceph-salt config /system_update/packages disable
 ceph-salt config /system_update/reboot disable
@@ -73,15 +79,14 @@
 {% set external_timeserver = "pool.ntp.org" %}
 ceph-salt config /time_server/external_servers add {{ external_timeserver }}
 
-{% if ceph_salt_deploy_osds %}
 {% if storage_nodes < 3 %}
 ceph-salt config /cephadm_bootstrap/ceph_conf add global
 ceph-salt config /cephadm_bootstrap/ceph_conf/global set "osd crush chooseleaf type" 0
 {% endif %}
-{% endif %} {# if ceph_salt_deploy_osds #}
 
 ceph-salt config /cephadm_bootstrap/dashboard/username set admin
 ceph-salt config /cephadm_bootstrap/dashboard/password set admin
+ceph-salt config /cephadm_bootstrap/dashboard/force_password_update disable
 
 ceph-salt config ls
 ceph-salt export --pretty
@@ -101,29 +106,34 @@
 stdbuf -o0 ceph-salt -ldebug apply --non-interactive
 {% endif %}
 
-{% if ceph_salt_deploy_mons %}
+{% if stop_before_ceph_orch_apply %}
+exit 0
+{% endif %}
+
 {% if mon_nodes > 1 %}
 ceph orch apply mon "$MON_NODES_COMMA_SEPARATED_LIST"
 {% endif %} {# mon_nodes > 1 #}
-{% endif %} {# ceph_salt_deploy_mons #}
 
-{% if ceph_salt_deploy_mgrs %}
 {% if mgr_nodes > 1 %}
 ceph orch apply mgr "$MGR_NODES_COMMA_SEPARATED_LIST"
 {% endif %} {# mgr_nodes > 1 #}
-{% endif %} {# ceph_salt_deploy_mgrs #}
 
-{% if ceph_salt_deploy_osds %}
 ceph orch device ls --refresh
 {% for node in nodes %}
 {% if node.has_role('storage') %}
 echo "{\"service_type\": \"osd\", \"placement\": {\"host_pattern\": \"{{ node.name }}*\"}, \"service_id\": \"testing_dg_{{ node.name }}\", \"data_devices\": {\"all\": True}}" | ceph orch apply osd -i -
 {% endif %}
 {% endfor %}
-{% endif %} {# if ceph_salt_deploy_osds #}
 
-{% if ceph_salt_deploy_mdss %}
+{% if mds_nodes > 0 %}
 ceph fs volume create myfs "$MDS_NODES_COMMA_SEPARATED_LIST"
-{% endif %} {# if ceph_salt_deploy_mdss #}
+{% endif %}
+
+{% if rgw_nodes > 0 %}
+radosgw-admin realm create --rgw-realm=default --default
+radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
+radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default --master --default
+ceph orch apply rgw default default --placement="{{ rgw_nodes }} $RGW_NODES_SPACE_SEPARATED_LIST"
+{% endif %}
 
 {% include "qa_test.sh.j2" %}
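[Editor's example] The node-list variables in the template above are built by appending a separator on every loop iteration and then trimming the trailing one with bash parameter expansion. A standalone sketch of both idioms, with made-up node names:

```shell
# ${var%,*} removes the shortest suffix matching ",*", i.e. the dangling
# comma (and nothing after it) left by the accumulation loop.
MON_LIST="mon1,mon2,mon3,"
MON_LIST="${MON_LIST%,*}"

# For the space-separated RGW list: the inner ${var##*[![:space:]]}
# expands to the trailing whitespace run itself (everything after the
# last non-space character), which the outer ${var%"..."} then removes.
RGW_LIST="rgw1 rgw2 "
RGW_LIST="${RGW_LIST%"${RGW_LIST##*[![:space:]]}"}"

echo "$MON_LIST"
echo "[$RGW_LIST]"
```

Note that `${var%,*}` assumes the loop always leaves exactly one trailing comma, as the template's accumulation does.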
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/makecheck/provision.sh.j2 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/makecheck/provision.sh.j2
--- 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/makecheck/provision.sh.j2 
    2020-05-04 20:27:36.903887707 +0200
+++ 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/makecheck/provision.sh.j2 
    2020-05-25 15:35:09.530914967 +0200
@@ -1,6 +1,19 @@
 
 set -ex
 
+# Compiling Ceph requires a number of "libfoo-devel" RPMs. These are typically
+# shipped in a different SLE Module than their corresponding library RPMs.
+# Sometimes, an update to a library RPM ("libfoo1") which is pre-installed 
+# in the Vagrant Box built in the IBS reaches the Vagrant Box before the
+# corresponding update to the "libfoo-devel" package reaches the IBS repos. This
+# state, which can last for days or even weeks, typically causes install-deps.sh
+# to fail on a zypper conflict.
+#
+# Possibly prophylactically downgrade libraries known to have caused this
+# problem in the past:
+#
+zypper --non-interactive install --force libudev1 || true
+
 useradd -m {{ makecheck_username }}
 echo "{{ makecheck_username }} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
 pam-config -a --nullok
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/provision.sh.j2 
new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/provision.sh.j2
--- old/sesdev-1.2.0+1588616857.gaa3df4c/seslib/templates/provision.sh.j2       
2020-05-04 20:27:36.903887707 +0200
+++ new/sesdev-1.3.0+1590413709.g4ad4e03/seslib/templates/provision.sh.j2       
2020-05-25 15:35:09.530914967 +0200
@@ -19,8 +19,13 @@
 # enable autorefresh on all zypper repos
 find /etc/zypp/repos.d -type f -exec sed -i -e 's/^autorefresh=.*/autorefresh=1/' {} \;
 
-# remove "which" RPM because testing environments typically don't have it installed
+# remove RPMs that are often silently assumed to be present
+# removing such RPMs is desirable because these "implied" dependencies become
+# known, allowing them to be explicitly declared
 zypper --non-interactive remove which || true
+{% if version != 'ses5' %}
+zypper --non-interactive remove curl || true
+{% endif %}
 
 # remove Python 2 so it doesn't pollute the environment
 {% if os != 'sles-12-sp3' %}
@@ -114,13 +119,7 @@
 cat /root/.ssh/{{ ssh_key_name }}.pub >> /root/.ssh/authorized_keys
 hostnamectl set-hostname {{ node.name }}
 
-{% if version == 'caasp4' %}
-{% include "caasp/provision.sh.j2" %}
-{% endif %}
-
-{% if version == 'makecheck' %}
-{% include "makecheck/provision.sh.j2" %}
-{% elif not suma %}
+{% if deploy_salt %}
 zypper --non-interactive install salt-minion
 sed -i 's/^#master:.*/master: {{ master.name }}/g' /etc/salt/minion
 
@@ -134,10 +133,12 @@
 {% include "sync_clocks.sh.j2" %}
 {% endif %}
 
-{% endif %} {# not suma #}
+{% endif %} {# deploy_salt #}
 
 touch /tmp/ready
 
+{% if deploy_salt or suma %}
+
 {% if node == master or node == suma %}
 
 {% if node == master %}
@@ -200,3 +201,11 @@
 {% endif %}
 
 {% endif %} {# node == master or node == suma #}
+
+{% endif %} {# deploy_salt or suma #}
+
+{% if version == 'caasp4' %}
+{% include "caasp/provision.sh.j2" %}
+{% elif version == 'makecheck' %}
+{% include "makecheck/provision.sh.j2" %}
+{% endif %}

