Hello community,

here is the log from the commit of package sesdev for openSUSE:Factory checked
in at 2020-02-26 15:03:45
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/sesdev (Old)
 and      /work/SRC/openSUSE:Factory/.sesdev.new.26092 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "sesdev"

Wed Feb 26 15:03:45 2020 rev:3 rq:779390 version:1.1.5+1582717868.g68df753

Changes:
--------
--- /work/SRC/openSUSE:Factory/sesdev/sesdev.changes	2020-02-18 10:43:34.761333060 +0100
+++ /work/SRC/openSUSE:Factory/.sesdev.new.26092/sesdev.changes	2020-02-26 15:03:52.413050773 +0100
@@ -1,0 +2,28 @@
+Wed Feb 26 11:51:32 UTC 2020 - Nathan Cutler <[email protected]>
+
+- Update to 1.1.5+1582717868.g68df753:
+  + upstream 1.1.5 release (2020-02-26)
+    * sesdev.spec: use standard ordering of sections
+    * sesdev: give the user a way to specify --no-deploy-... (PR #120)
+    * seslib: fix --no-deploy-mgrs option not working (PR #122)
+
+-------------------------------------------------------------------
+Wed Feb 26 09:06:38 UTC 2020 - Nathan Cutler <[email protected]>
+
+- Update to 1.1.4+1582707984.gdb87191
+  + upstream 1.1.4 release (2020-02-26)
+    * sesdev.spec: properly package /usr/share/sesdev directory
+      (follow-on fix for PR #112)
+
+-------------------------------------------------------------------
+Tue Feb 25 14:05:06 UTC 2020 - Nathan Cutler <[email protected]>
+
+- Update to 1.1.3+1582639489.g0e91afa:
+  + upstream 1.1.3 release (2020-02-25)
+    * Rename ceph-bootstrap to ceph-salt (PR#114)
+    * Migrate ceph-bootstrap-qa to sesdev (part 2) (PR#112)
+    * provision: remove which RPM from test environment (PR#113)
+    * ceph_salt_deployment: disable system update and reboot (PR#117)
+    * seslib: by default, a mgr for every mon (PR#111)
+
+-------------------------------------------------------------------

Old:
----
  sesdev-1.1.2+1581962442.g190d64e.tar.gz

New:
----
  sesdev-1.1.5+1582717868.g68df753.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ sesdev.spec ++++++
--- /var/tmp/diff_new_pack.j0uGXN/_old	2020-02-26 15:03:55.253056440 +0100
+++ /var/tmp/diff_new_pack.j0uGXN/_new	2020-02-26 15:03:55.257056447 +0100
@@ -20,7 +20,7 @@
 %endif
 
 Name: sesdev
-Version: 1.1.2+1581962442.g190d64e
+Version: 1.1.5+1582717868.g68df753
 Release: 1%{?dist}
 Summary: CLI tool to deploy and manage SES clusters
 License: MIT
@@ -64,6 +64,13 @@
 versions of Ceph and SES, as well as, different versions of the openSUSE
 based OS.
 
+%package qa
+Summary: Integration test script for validating Ceph deployments
+
+%description qa
+Integration test script for validating Ceph clusters deployed
+by sesdev
+
 %prep
 %autosetup -p1
 %if 0%{?fedora} && 0%{?fedora} < 30
@@ -76,12 +83,24 @@
 %install
 %py3_install
 %fdupes %{buildroot}%{python3_sitelib}
+# qa script installation
+install -m 0755 -d %{buildroot}/%{_datadir}/%{name}/qa
+install -m 0755 -d %{buildroot}/%{_datadir}/%{name}/qa/common
+install -m 0755 qa/health-ok.sh %{buildroot}/%{_datadir}/%{name}/qa/health-ok.sh
+install -m 0644 qa/common/common.sh %{buildroot}/%{_datadir}/%{name}/qa/common/common.sh
+install -m 0644 qa/common/helper.sh %{buildroot}/%{_datadir}/%{name}/qa/common/helper.sh
+install -m 0644 qa/common/json.sh %{buildroot}/%{_datadir}/%{name}/qa/common/json.sh
+install -m 0644 qa/common/zypper.sh %{buildroot}/%{_datadir}/%{name}/qa/common/zypper.sh
 
 %files
 %license LICENSE
 %doc CHANGELOG.md README.md
 %{python3_sitelib}/seslib*/
 %{python3_sitelib}/sesdev*/
-%{_bindir}/sesdev
+%{_bindir}/%{name}
+%dir %{_datadir}/%{name}
+
+%files qa
+%{_datadir}/%{name}/qa
 
 %changelog

++++++ sesdev-1.1.2+1581962442.g190d64e.tar.gz -> sesdev-1.1.5+1582717868.g68df753.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/CHANGELOG.md new/sesdev-1.1.5+1582717868.g68df753/CHANGELOG.md
--- old/sesdev-1.1.2+1581962442.g190d64e/CHANGELOG.md	2020-02-17 19:00:42.492033652 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/CHANGELOG.md	2020-02-26 12:51:07.824981728 +0100
@@ -7,32 +7,54 @@
 
 ## [Unreleased]
 
+## [1.1.5] - 2020-02-26
+
+### Fixed
+- sesdev.spec: use standard ordering of sections
+- sesdev: give the user a way to specify --no-deploy-... (PR #120)
+- seslib: fix --no-deploy-mgrs option not working (PR #122)
+
+## [1.1.4] - 2020-02-26
+
+### Fixed
+- sesdev.spec: properly package /usr/share/sesdev directory
+  (follow-on fix for PR #112)
+
+## [1.1.3] - 2020-02-25
+
+### Changed
+- Rename ceph-bootstrap to ceph-salt (PR #114)
+- Migrate ceph-bootstrap-qa to sesdev (part 2) (PR #112)
+- provision: remove which RPM from test environment (PR #113)
+- ceph_salt_deployment: disable system update and reboot (PR #117)
+- seslib: by default, a mgr for every mon (PR #111)
+
 ## [1.1.2] - 2020-02-17
 
 ### Added
-Implement "vagrant box list" and "vagrant box remove" (PR#69)
-Allow user to specify custom private key file for remote libvirt (PR#71)
-spec: add Fedora-specific Requires (PR#77)
-Pillar is now automatically configured by ceph-bootstrap (PR#78)
-Implement "sesdev scp" feature (PR#101)
-Implement "sesdev create caasp4" feature (PR#103)
-
-### Fixed
-Revamp --num-disks handling (PR#65)
-Miscellaneous spec file cleanups and bugfixes (PR#72)
-several fixes for octopus/ses7 deployment (PR#76)
-Remove any orphaned images after destroy (PR#81)
-seslib: fix Ceph repos for ses5, ses6, ses7 (PR#83)
-tools/run_async: decode stderr bytes (PR#88)
-libvirt/network: autostart networks per default (PR#93)
-Fix NTP issue that was causing SES5 deployment to fail (PR#108)
-
-### Changed
-ceph_bootstrap_deployment: "ceph-bootstrap -ldebug deploy" (PR#68)
-Increase chances of getting the latest ses7 packages (PR#84)
-ceph_bootstrap_deployment: log cephadm and ceph-bootstrap version (PR#86)
-ceph_bootstrap: restart salt-master after ceph-bootstrap installation (PR#87)
-seslib: add SES7 Internal Media when --qa-test given (PR#90)
+- Implement "vagrant box list" and "vagrant box remove" (PR #69)
+- Allow user to specify custom private key file for remote libvirt (PR #71)
+- spec: add Fedora-specific Requires (PR #77)
+- Pillar is now automatically configured by ceph-bootstrap (PR #78)
+- Implement "sesdev scp" feature (PR #101)
+- Implement "sesdev create caasp4" feature (PR #103) + +### Fixed +- Revamp --num-disks handling (PR #65) +- Miscellaneous spec file cleanups and bugfixes (PR #72) +- several fixes for octopus/ses7 deployment (PR #76) +- Remove any orphaned images after destroy (PR #81) +- seslib: fix Ceph repos for ses5, ses6, ses7 (PR #83) +- tools/run_async: decode stderr bytes (PR #88) +- libvirt/network: autostart networks per default (PR #93) +- Fix NTP issue that was causing SES5 deployment to fail (PR #108) + +### Changed +- ceph_bootstrap_deployment: "ceph-bootstrap -ldebug deploy" (PR #68) +- Increase chances of getting the latest ses7 packages (PR #84) +- ceph_bootstrap_deployment: log cephadm and ceph-bootstrap version (PR #86) +- ceph_bootstrap: restart salt-master after ceph-bootstrap installation (PR #87) +- seslib: add SES7 Internal Media when --qa-test given (PR #90) ## [1.1.1] - 2020-01-29 ### Added @@ -174,7 +196,10 @@ - Minimal README with a few usage instructions. - The CHANGELOG file. -[unreleased]: https://github.com/SUSE/sesdev/compare/v1.1.0...HEAD +[unreleased]: https://github.com/SUSE/sesdev/compare/v1.1.5...HEAD +[1.1.5]: https://github.com/SUSE/sesdev/releases/tag/v1.1.5 +[1.1.4]: https://github.com/SUSE/sesdev/releases/tag/v1.1.4 +[1.1.3]: https://github.com/SUSE/sesdev/releases/tag/v1.1.3 [1.1.2]: https://github.com/SUSE/sesdev/releases/tag/v1.1.2 [1.1.1]: https://github.com/SUSE/sesdev/releases/tag/v1.1.1 [1.1.0]: https://github.com/SUSE/sesdev/releases/tag/v1.1.0 diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/HOWTORELEASE.md new/sesdev-1.1.5+1582717868.g68df753/HOWTORELEASE.md --- old/sesdev-1.1.2+1581962442.g190d64e/HOWTORELEASE.md 2020-02-17 19:00:42.492033652 +0100 +++ new/sesdev-1.1.5+1582717868.g68df753/HOWTORELEASE.md 2020-02-26 12:51:07.824981728 +0100 @@ -3,16 +3,21 @@ These are the steps to make a release for version `<version_number>`: 1. 
Make sure you are working on the current tip of the master branch. -2. Update `CHANGELOG.md` with all important changes introduced since previous version. +2. Make sure the merged PRs of all important changes have the "Add To Changelog" label: + https://github.com/SUSE/sesdev/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Amerged+label%3A%22Add+To+Changelog%22+ +3. Update `CHANGELOG.md` with all important changes introduced since previous version. - Create a new section `[<version_number>] <date YYYY-MM-DD>` and move all entries from the `[Unreleased]` section to the new section. - Make sure all github issues resolved in this release are referenced in the changelog. - Update the links at the bottom of the file. -3. Update version number in `sesdev.spec` to `Version: <version_number>` -4. Create a commit with title `Bump to v<version_number>` containing the +4. Update version number in `sesdev.spec` to `Version: <version_number>` +5. Create a commit with title `Bump to v<version_number>` containing the modifications to `CHANGELOG.md` made in the previous two steps. -5. Create an annotated tag for the above commit: `git tag -s -a v<version_number>`. +6. Create an annotated tag for the above commit: `git tag -s -a v<version_number> -m"version <version_number>"`. - The message should be `version <version_number>`. - - Using `git show`, review the commit message of the annotated tag. + - Using `git show v<version_number>`, review the commit message of the annotated tag. It should say: `version <version_number>`. -6. Push commit and tag to github repo: `git push <remote> master --tags` +7. Push commit and tag to github repo: `git push <remote> master --tags` +8. Remove the "Add To Changelog" labels from all the merged PRs +9. 
Verify that no merged PRs have "Add To Changelog" label: + https://github.com/SUSE/sesdev/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Amerged+label%3A%22Add+To+Changelog%22+ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/README.md new/sesdev-1.1.5+1582717868.g68df753/README.md --- old/sesdev-1.1.2+1581962442.g190d64e/README.md 2020-02-17 19:00:42.492033652 +0100 +++ new/sesdev-1.1.5+1582717868.g68df753/README.md 2020-02-26 12:51:07.828981726 +0100 @@ -32,6 +32,11 @@ * [Services port-forwarding](#services-port-forwarding) * [Stopping a cluster](#stopping-a-cluster) * [Destroying a cluster](#destroying-a-cluster) +* [Common pitfalls](#common-pitfalls) + * [Domain about to create is already taken](#domain-about-to-create-is-already-taken) + * [Symptom](#symptom) + * [Analysis](#analysis) + * [Resolution](#resolution) ## Installation @@ -309,3 +314,41 @@ ``` $ sesdev destroy <deployment_id> ``` + +## Common pitfalls + +This section describes some common pitfalls and how to resolve them. + +### Domain about to create is already taken + +#### Symptom + +After deleting the `~/.sesdev` directory, `sesdev create` fails because +Vagrant throws an error message containing the words "domain about to create is +already taken". + +#### Analysis + +As described +[here](https://github.com/vagrant-libvirt/vagrant-libvirt/issues/658#issuecomment-335352340), +this typically occurs when the `~/.sesdev` directory is deleted. The libvirt +environment still has the domains, etc. whose metadata was deleted, and Vagrant +does not recognize the existing VM as one it created, even though the name is +identical. 
+
+#### Resolution
+
+As described
+[here](https://github.com/vagrant-libvirt/vagrant-libvirt/issues/658#issuecomment-380976825),
+this can be resolved by manually deleting all the domains (VMs) and volumes
+associated with the old deployment:
+
+```
+$ sudo virsh list --all
+$ # see the names of the "offending" machines. For each, do:
+$ sudo virsh destroy <THE_MACHINE>
+$ sudo virsh undefine <THE_MACHINE>
+$ sudo virsh vol-list default
+$ # For each of the volumes associated with one of the deleted machines, do:
+$ sudo virsh vol-delete --pool default <THE_VOLUME>
+```
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/qa/README.rst new/sesdev-1.1.5+1582717868.g68df753/qa/README.rst
--- old/sesdev-1.1.2+1581962442.g190d64e/qa/README.rst	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/qa/README.rst	2020-02-26 12:51:07.828981726 +0100
@@ -0,0 +1,46 @@
+health-ok.sh
+============
+
+Script for validating Ceph clusters deployed using DeepSea and/or ceph-salt.
+
+
+Overview
+--------
+
+This bash script contains integration tests for validating fresh Ceph
+deployments.
+
+The idea is to run this script with the appropriate arguments on the
+Salt Master node after using DeepSea or ceph-salt to deploy a cluster.
+
+The script makes a number of assumptions, as listed under "Assumptions", below.
+
+On success (HEALTH_OK is reached, sanity tests pass), the script returns 0.
+On failure, for whatever reason, the script returns non-zero.
+
+The script produces verbose output on stdout, which can be captured for later
+forensic analysis.
+
+Though referred to as a bash script, ``health-ok.sh`` is only the entry point.
+That file uses the bash internal ``source`` to run several helper scripts, which
+are located in the ``common/`` subdirectory.
+
+
+Assumptions
+-----------
+
+The script makes the following assumptions:
+
+1. the script is being run on the Salt Master of a Ceph cluster deployed by
+   DeepSea or ceph-salt
+2. the script is being run as root
+3. the Ceph admin keyring is installed in the usual way, so the root user can
+   see and use the keyring
+
+
+Caveats
+-------
+
+The following caveats apply:
+
+1. Ceph will not work properly unless the nodes have (at least) short
+   hostnames. That means the health-ok.sh script won't pass, either.
+   There are two options: ``/etc/hosts`` or DNS
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/qa/common/common.sh new/sesdev-1.1.5+1582717868.g68df753/qa/common/common.sh
--- old/sesdev-1.1.2+1581962442.g190d64e/qa/common/common.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/qa/common/common.sh	2020-02-26 12:51:07.828981726 +0100
@@ -0,0 +1,225 @@
+#
+# This file is part of the sesdev-qa integration test suite
+#
+
+set -e
+
+# BASEDIR is set by the calling script
+source $BASEDIR/common/helper.sh
+source $BASEDIR/common/json.sh
+source $BASEDIR/common/zypper.sh
+
+
+#
+# functions that process command-line arguments
+#
+
+function assert_enhanced_getopt {
+    set +e
+    echo -n "Running 'getopt --test'... "
+    getopt --test > /dev/null
+    if [ $? -ne 4 ]; then
+        echo "FAIL"
+        echo "This script requires enhanced getopt. Bailing out."
+        exit 1
+    fi
+    echo "PASS"
+    set -e
+}
+
+
+#
+# functions that print status information
+#
+
+function cat_salt_config {
+    cat /etc/salt/master
+    cat /etc/salt/minion
+}
+
+function salt_pillar_items {
+    salt '*' pillar.items
+}
+
+function salt_pillar_get_roles {
+    salt '*' pillar.get roles
+}
+
+function salt_cmd_run_lsblk {
+    salt '*' cmd.run lsblk
+}
+
+function cat_ceph_conf {
+    salt '*' cmd.run "cat /etc/ceph/ceph.conf" 2>/dev/null
+}
+
+function admin_auth_status {
+    ceph auth get client.admin
+    ls -l /etc/ceph/ceph.client.admin.keyring
+    cat /etc/ceph/ceph.client.admin.keyring
+}
+
+function number_of_hosts_in_ceph_osd_tree {
+    ceph osd tree -f json-pretty | jq '[.nodes[] | select(.type == "host")] | length'
+}
+
+function number_of_osds_in_ceph_osd_tree {
+    ceph osd tree -f json-pretty | jq '[.nodes[] | select(.type == "osd")] | length'
+}
+
+function ceph_cluster_status {
+    ceph pg stat -f json-pretty
+    _grace_period 1
+    ceph health detail -f json-pretty
+    _grace_period 1
+    ceph osd tree
+    _grace_period 1
+    ceph osd pool ls detail -f json-pretty
+    _grace_period 1
+    ceph -s
+}
+
+function ceph_log_grep_enoent_eaccess {
+    set +e
+    grep -rH "Permission denied" /var/log/ceph
+    grep -rH "No such file or directory" /var/log/ceph
+    set -e
+}
+
+
+#
+# core validation tests
+#
+
+function support_cop_out_test {
+    set +x
+    local supported="sesdev-qa supports this OS"
+    local not_supported="ERROR: sesdev-qa does not currently support this OS"
+    echo
+    echo "WWWW: ceph_version_test"
+    echo "Detected operating system $NAME $VERSION_ID"
+    case "$ID" in
+        opensuse*|suse|sles)
+            case "$VERSION_ID" in
+                15*)
+                    echo "$supported"
+                    ;;
+                *)
+                    echo "$not_supported"
+                    ;;
+            esac
+            ;;
+        *)
+            echo "$not_supported"
+            false
+            ;;
+    esac
+    set +x
+    echo "support_cop_out_test: OK"
+    echo
+}
+
+function ceph_version_test {
+# test that ceph RPM version matches "ceph --version"
+# for a loose definition of "matches"
+    echo
+    echo "WWWW: ceph_version_test"
+    set -x
+    rpm -q ceph-common
+    set +x
+    local RPM_NAME=$(rpm -q ceph-common)
+    local RPM_CEPH_VERSION=$(perl -e '"'"$RPM_NAME"'" =~ m/ceph-common-(\d+\.\d+\.\d+)/; print "$1\n";')
+    echo "According to RPM, the ceph upstream version is ->$RPM_CEPH_VERSION<-"
+    test -n "$RPM_CEPH_VERSION"
+    set -x
+    ceph --version
+    set +x
+    local BUFFER=$(ceph --version)
+    local CEPH_CEPH_VERSION=$(perl -e '"'"$BUFFER"'" =~ m/ceph version (\d+\.\d+\.\d+)/; print "$1\n";')
+    echo "According to \"ceph --version\", the ceph upstream version is ->$CEPH_CEPH_VERSION<-"
+    test -n "$RPM_CEPH_VERSION"
+    set -x
+    test "$RPM_CEPH_VERSION" = "$CEPH_CEPH_VERSION"
+    set +x
+    echo "ceph_version_test: OK"
+    echo
+}
+
+function ceph_cluster_running_test {
+    echo
+    echo "WWWW: ceph_cluster_running_test"
+    _ceph_cluster_running
+    echo "ceph_cluster_running_test: OK"
+    echo
+}
+
+function ceph_health_test {
+# wait for up to some minutes for cluster to reach HEALTH_OK
+    echo
+    echo "WWWW: ceph_health_test"
+    local minutes_to_wait="5"
+    local cluster_status=""
+    for minute in $(seq 1 "$minutes_to_wait") ; do
+        for i in $(seq 1 4) ; do
+            set -x
+            ceph status
+            cluster_status="$(ceph health detail --format json | jq -r .status)"
+            set +x
+            if [ "$cluster_status" = "HEALTH_OK" ] ; then
+                break 2
+            else
+                _grace_period 15
+            fi
+        done
+        echo "Minutes left to wait: $((minutes_to_wait - minute))"
+    done
+    if [ "$cluster_status" != "HEALTH_OK" ] ; then
+        echo "Failed to reach HEALTH_OK even after waiting for $minutes_to_wait minutes"
+        exit 1
+    fi
+    echo "ceph_health_test: OK"
+    echo
+}
+
+function number_of_nodes_actual_vs_expected_test {
+    echo
+    echo "WWWW: number_of_nodes_actual_vs_expected_test"
+    set -x
+    local actual_total_nodes="$(json_total_nodes)"
+    local actual_mgr_nodes="$(json_total_mgrs)"
+    local actual_mon_nodes="$(json_total_mons)"
+    local actual_osd_nodes="$(json_osd_nodes)"
+    local actual_osds="$(json_total_osds)"
+    set +x
+    local all_green="yes"
+    local expected_total_nodes=""
+    local expected_mgr_nodes=""
+    local expected_mon_nodes=""
+    local expected_osd_nodes=""
+    local expected_osds=""
+    [ -z "$TOTAL_NODES" ] && expected_total_nodes="$actual_total_nodes" || expected_total_nodes="$TOTAL_NODES"
+    [ -z "$MGR_NODES" ] && expected_mgr_nodes="$actual_mgr_nodes" || expected_mgr_nodes="$MGR_NODES"
+    [ -z "$MON_NODES" ] && expected_mon_nodes="$actual_mon_nodes" || expected_mon_nodes="$MON_NODES"
+    [ -z "$OSD_NODES" ] && expected_osd_nodes="$actual_osd_nodes" || expected_osd_nodes="$OSD_NODES"
+    [ -z "$OSDS" ] && expected_osds="$actual_osds" || expected_osds="$OSDS"
+    echo "total nodes actual/expected: $actual_total_nodes/$expected_total_nodes"
+    [ "$actual_mon_nodes" = "$expected_mon_nodes" ] || all_green=""
+    echo "MON nodes actual/expected: $actual_mon_nodes/$expected_mon_nodes"
+    [ "$actual_mon_nodes" = "$expected_mon_nodes" ] || all_green=""
+    echo "MGR nodes actual/expected: $actual_mgr_nodes/$expected_mgr_nodes"
+    [ "$actual_mgr_nodes" = "$expected_mgr_nodes" ] || all_green=""
+    echo "OSD nodes actual/expected: $actual_osd_nodes/$expected_osd_nodes"
+    [ "$actual_osd_nodes" = "$expected_osd_nodes" ] || all_green=""
+    echo "total OSDs actual/expected: $actual_osds/$expected_osds"
+    [ "$actual_osds" = "$expected_osds" ] || all_green=""
+#    echo "MDS nodes expected: $MDS_NODES"
+#    echo "RGW nodes expected: $RGW_NODES"
+#    echo "IGW nodes expected: $IGW_NODES"
+#    echo "NFS-Ganesha expected: $NFS_GANESHA_NODES"
+    if [ ! "$all_green" ] ; then
+        echo "Actual number of nodes/node types/OSDs differs from expected number"
+        exit 1
+    fi
+    echo "number_of_nodes_actual_vs_expected_test: OK"
+    echo
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/qa/common/helper.sh new/sesdev-1.1.5+1582717868.g68df753/qa/common/helper.sh
--- old/sesdev-1.1.2+1581962442.g190d64e/qa/common/helper.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/qa/common/helper.sh	2020-02-26 12:51:07.828981726 +0100
@@ -0,0 +1,41 @@
+# This file is part of the sesdev-qa integration test suite
+
+set -e
+
+#
+# helper functions (not to be called directly from test scripts)
+#
+
+function _ceph_cluster_running {
+    set -x
+    ceph status
+    set +x
+}
+
+function _copy_file_from_minion_to_master {
+    local MINION="$1"
+    local FULL_PATH="$2"
+    salt --static --out json "$MINION" cmd.shell "cat $FULL_PATH" | jq -r \.\"$MINION\" > $FULL_PATH
+}
+
+function _first_x_node {
+    local ROLE=$1
+    salt --static --out json -G "ceph-salt:roles:$ROLE" test.true 2>/dev/null | jq -r 'keys[0]'
+}
+
+function _grace_period {
+    local SECONDS=$1
+    echo "${SECONDS}-second grace period"
+    sleep $SECONDS
+}
+
+function _ping_minions_until_all_respond {
+    local RESPONDING=""
+    for i in {1..20} ; do
+        sleep 10
+        RESPONDING=$(salt '*' test.ping 2>/dev/null | grep True 2>/dev/null | wc --lines)
+        echo "Of $TOTAL_NODES total minions, $RESPONDING are responding"
+        test "$TOTAL_NODES" -eq "$RESPONDING" && break
+    done
+}
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/qa/common/json.sh new/sesdev-1.1.5+1582717868.g68df753/qa/common/json.sh
--- old/sesdev-1.1.2+1581962442.g190d64e/qa/common/json.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/qa/common/json.sh	2020-02-26 12:51:07.828981726 +0100
@@ -0,0 +1,27 @@
+#
+# This file is part of the sesdev-qa integration test suite.
+# It contains various cluster introspection functions.
+#
+
+set -e
+
+function json_total_nodes {
+    salt --static --out json '*' test.ping 2>/dev/null | jq '. | length'
+}
+
+function json_osd_nodes {
+    ceph osd tree -f json-pretty | \
+        jq '[.nodes[] | select(.type == "host")] | length'
+}
+
+function json_total_mgrs {
+    echo "$(($(ceph status --format json | jq -r .mgrmap.num_standbys) + 1))"
+}
+
+function json_total_mons {
+    ceph status --format json | jq -r .monmap.num_mons
+}
+
+function json_total_osds {
+    ceph osd ls --format json | jq '. | length'
+}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/qa/common/zypper.sh new/sesdev-1.1.5+1582717868.g68df753/qa/common/zypper.sh
--- old/sesdev-1.1.2+1581962442.g190d64e/qa/common/zypper.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/qa/common/zypper.sh	2020-02-26 12:51:07.828981726 +0100
@@ -0,0 +1,28 @@
+# This file is part of the sesdev-qa integration test suite
+
+set -e
+
+#
+# zypper-specific helper functions
+#
+
+function _dump_salt_master_zypper_repos {
+    zypper lr -upEP
+}
+
+function _zypper_ref_on_master {
+    set +x
+    for delay in 60 60 60 60 ; do
+        zypper --non-interactive --gpg-auto-import-keys refresh && break
+        sleep $delay
+    done
+    set -x
+}
+
+function _zypper_install_on_master {
+    local PACKAGE=$1
+    set -x
+    zypper --non-interactive install --no-recommends "$PACKAGE"
+    set +x
+}
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/qa/health-ok.sh new/sesdev-1.1.5+1582717868.g68df753/qa/health-ok.sh
--- old/sesdev-1.1.2+1581962442.g190d64e/qa/health-ok.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/qa/health-ok.sh	2020-02-26 12:51:07.828981726 +0100
@@ -0,0 +1,114 @@
+#!/bin/bash
+#
+# integration test automation script "health-ok.sh"
+#
+
+set -e
+trap 'catch $?' EXIT
+
+SCRIPTNAME=$(basename ${0})
+BASEDIR=$(readlink -f "$(dirname ${0})")
+test -d $BASEDIR
+# [[ $BASEDIR =~ \/sesdev-qa$ ]]
+
+source /etc/os-release
+source $BASEDIR/common/common.sh
+
+function catch {
+    echo
+    echo -n "Overall result: "
+    if [ "$1" = "0" ] ; then
+        echo "OK"
+    else
+        echo "NOT_OK (error $2)"
+    fi
+}
+
+function usage {
+    echo "$SCRIPTNAME - script for testing HEALTH_OK deployment"
+    echo "for use in SUSE Enterprise Storage testing"
+    echo
+    echo "Usage:"
+    echo "  $SCRIPTNAME [-h,--help] [--igw=X] [--mds=X] [--mgr=X]"
+    echo "              [--mon=X] [--nfs-ganesha=X] [--rgw=X]"
+    echo
+    echo "Options:"
+    echo "    --help               Display this usage message"
+    echo "    --igw-nodes          expected number of nodes with iSCSI Gateway"
+    echo "    --mds-nodes          expected number of nodes with MDS"
+    echo "    --mgr-nodes          expected number of nodes with MGR"
+    echo "    --mon-nodes          expected number of nodes with MON"
+    echo "    --nfs-ganesha-nodes  expected number of nodes with NFS-Ganesha"
+    echo "    --osd-nodes          expected number of nodes with OSD"
+    echo "    --osds               expected total number of OSDs in cluster"
+    echo "    --rgw-nodes          expected number of nodes with RGW"
+    echo
+    exit 1
+}
+
+assert_enhanced_getopt
+
+TEMP=$(getopt -o h \
+--long "help,igw-nodes:,mds-nodes:,mgr-nodes:,mon-nodes:,nfs-ganesha-nodes:,osd-nodes:,osds:,rgw-nodes:,total-nodes:" \
+-n 'health-ok.sh' -- "$@")
+
+if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi
+eval set -- "$TEMP"
+
+# process command-line options
+IGW_NODES=""
+MDS_NODES=""
+MGR_NODES=""
+MON_NODES=""
+NFS_GANESHA_NODES=""
+OSD_NODES=""
+OSDS=""
+RGW_NODES=""
+TOTAL_NODES=""
+while true ; do
+    case "$1" in
+        --igw-nodes) shift ; IGW_NODES="$1" ; shift ;;
+        --mds-nodes) shift ; MDS_NODES="$1" ; shift ;;
+        --mgr-nodes) shift ; MGR_NODES="$1" ; shift ;;
+        --mon-nodes) shift ; MON_NODES="$1" ; shift ;;
+        --nfs-ganesha-nodes) shift ; NFS_GANESHA_NODES="$1" ; shift ;;
+        --osd-nodes) shift ; OSD_NODES="$1" ; shift ;;
+        --osds) shift ; OSDS="$1" ; shift ;;
+        --rgw-nodes) shift ; RGW_NODES="$1" ; shift ;;
+        --total-nodes) shift ; TOTAL_NODES="$1" ; shift ;;
+        -h|--help) usage ;;    # does not return
+        --) shift ; break ;;
+        *) echo "Internal error" ; exit 1 ;;
+    esac
+done
+
+# make Salt Master be an "admin node"
+_zypper_install_on_master ceph-common
+ADMIN_KEYRING="/etc/ceph/ceph.client.admin.keyring"
+CEPH_CONF="/etc/ceph/ceph.conf"
+mkdir -p /etc/ceph
+if [ -f "$ADMIN_KEYRING" -a -f "$CEPH_CONF" ] ; then
+    true
+else
+    set -x
+    ARBITRARY_MON_NODE="$(_first_x_node mon)"
+    if [ ! -f "$ADMIN_KEYRING" ] ; then
+        _copy_file_from_minion_to_master "$ARBITRARY_MON_NODE" "$ADMIN_KEYRING"
+        chmod 0600 "$ADMIN_KEYRING"
+    fi
+    if [ ! -f "$CEPH_CONF" ] ; then
+        _copy_file_from_minion_to_master "$ARBITRARY_MON_NODE" "$CEPH_CONF"
+    fi
+    set +x
+fi
+set -x
+test -f "$ADMIN_KEYRING"
+test -f "$CEPH_CONF"
+set +x
+
+# run tests
+support_cop_out_test
+ceph_version_test
+ceph_cluster_running_test
+ceph_health_test
+number_of_nodes_actual_vs_expected_test
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/sesdev/__init__.py new/sesdev-1.1.5+1582717868.g68df753/sesdev/__init__.py
--- old/sesdev-1.1.2+1581962442.g190d64e/sesdev/__init__.py	2020-02-17 19:00:42.492033652 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/sesdev/__init__.py	2020-02-26 12:51:07.828981726 +0100
@@ -11,6 +11,7 @@
 
 def sesdev_main():
+    seslib.GlobalSettings.init_path_to_qa(__file__)
     try:
         # pylint: disable=unexpected-keyword-arg
         cli(prog_name='sesdev')
@@ -54,26 +55,26 @@
     return _decorator_composer(click_options, func)
 
 
-def ceph_bootstrap_options(func):
+def ceph_salt_options(func):
     click_options = [
-        click.option('--stop-before-ceph-bootstrap-config', is_flag=True, default=False,
-                     help='Allows to stop deployment configuring the cluster with ceph-bootstrap'),
-        click.option('--stop-before-ceph-bootstrap-deploy', is_flag=True, default=False,
-                     help='Allows to stop deployment deploying the cluster with ceph-bootstrap'),
-        click.option('--ceph-bootstrap-repo', type=str, default=None,
-                     help='ceph-bootstrap Git repo URL'),
-        click.option('--ceph-bootstrap-branch', type=str, default=None,
-                     help='ceph-bootstrap Git branch'),
+        click.option('--stop-before-ceph-salt-config', is_flag=True, default=False,
+                     help='Allows to stop deployment configuring the cluster with ceph-salt'),
+        click.option('--stop-before-ceph-salt-deploy', is_flag=True, default=False,
+                     help='Allows to stop deployment deploying the cluster with ceph-salt'),
+        click.option('--ceph-salt-repo', type=str, default=None,
+                     help='ceph-salt Git repo URL'),
+        click.option('--ceph-salt-branch', type=str, default=None,
+                     help='ceph-salt Git branch'),
         click.option('--ceph-container-image', type=str, default=None,
                      help='container image path for Ceph daemons'),
-        click.option('--deploy-bootstrap', is_flag=True, default=True,
+        click.option('--deploy-bootstrap/--no-deploy-bootstrap', default=True,
                      help='Run ceph-daemon bootstrap during deployment. '
                           '(If false all other --deploy-* options will be disabled)'),
-        click.option('--deploy-mons', is_flag=True, default=True, help='Deploy Ceph Mons'),
-        click.option('--deploy-mgrs', is_flag=True, default=True, help='Deploy Ceph Mgrs'),
-        click.option('--deploy-osds', is_flag=True, default=True, help='Deploy Ceph OSDs'),
-        click.option('--ceph-bootstrap-deploy/--no-ceph-bootstrap-deploy', default=True,
-                     help='Use `ceph-bootstrap deploy` command to run ceph-salt formula'),
+        click.option('--deploy-mons/--no-deploy-mons', default=True, help='Deploy Ceph Mons'),
+        click.option('--deploy-mgrs/--no-deploy-mgrs', default=True, help='Deploy Ceph Mgrs'),
+        click.option('--deploy-osds/--no-deploy-osds', default=True, help='Deploy Ceph OSDs'),
+        click.option('--ceph-salt-deploy/--no-ceph-salt-deploy', default=True,
+                     help='Use `ceph-salt deploy` command to run ceph-salt formula'),
     ]
     return _decorator_composer(click_options, func)
@@ -384,16 +385,16 @@
     stop_before_deepsea_stage=None,
     deepsea_repo=None,
     deepsea_branch=None,
-    ceph_bootstrap_repo=None,
-    ceph_bootstrap_branch=None,
-    stop_before_ceph_bootstrap_config=False,
-    stop_before_ceph_bootstrap_deploy=False,
+    ceph_salt_repo=None,
+    ceph_salt_branch=None,
+    stop_before_ceph_salt_config=False,
+    stop_before_ceph_salt_deploy=False,
     ceph_container_image=None,
     deploy_bootstrap=True,
     deploy_mons=True,
     deploy_mgrs=True,
     deploy_osds=True,
-    ceph_bootstrap_deploy=True):
+    ceph_salt_deploy=True):
 
     settings_dict = {}
 
     if not single_node and roles:
@@ -487,38 +488,38 @@
     if domain:
         settings_dict['domain'] = domain
 
-    if ceph_bootstrap_repo:
-        settings_dict['ceph_bootstrap_git_repo'] = ceph_bootstrap_repo
+    if ceph_salt_repo:
+        settings_dict['ceph_salt_git_repo'] = ceph_salt_repo
 
-    if ceph_bootstrap_branch:
-        settings_dict['ceph_bootstrap_git_branch'] = ceph_bootstrap_branch
+    if ceph_salt_branch:
+        settings_dict['ceph_salt_git_branch'] = ceph_salt_branch
 
-    if stop_before_ceph_bootstrap_config:
-        settings_dict['stop_before_ceph_bootstrap_config'] = stop_before_ceph_bootstrap_config
+    if stop_before_ceph_salt_config:
+        settings_dict['stop_before_ceph_salt_config'] = stop_before_ceph_salt_config
 
-    if stop_before_ceph_bootstrap_deploy:
-        settings_dict['stop_before_ceph_bootstrap_deploy'] = stop_before_ceph_bootstrap_deploy
+    if stop_before_ceph_salt_deploy:
+        settings_dict['stop_before_ceph_salt_deploy'] = stop_before_ceph_salt_deploy
 
     if ceph_container_image:
         settings_dict['ceph_container_image'] = ceph_container_image
 
     if not deploy_bootstrap:
-        settings_dict['ceph_bootstrap_deploy_bootstrap'] = False
-        settings_dict['ceph_bootstrap_deploy_mons'] = False
-        settings_dict['ceph_bootstrap_deploy_mgrs'] = False
-        settings_dict['ceph_bootstrap_deploy_osds'] = False
+        settings_dict['ceph_salt_deploy_bootstrap'] = False
+        settings_dict['ceph_salt_deploy_mons'] = False
+        settings_dict['ceph_salt_deploy_mgrs'] = False
+        settings_dict['ceph_salt_deploy_osds'] = False
 
     if not deploy_mons:
-        settings_dict['ceph_bootstrap_deploy_mons'] = False
+        settings_dict['ceph_salt_deploy_mons'] = False
 
     if not deploy_mgrs:
-        settings_dict['ceph_bootstrap_deploy_mons'] = False
+        settings_dict['ceph_salt_deploy_mgrs'] = False
 
     if not deploy_osds:
-        settings_dict['ceph_bootstrap_deploy_osds'] = False
+        settings_dict['ceph_salt_deploy_osds'] = False
 
-    if not ceph_bootstrap_deploy:
-        settings_dict['ceph_bootstrap_deploy'] = False
+    if not ceph_salt_deploy:
+        settings_dict['ceph_salt_deploy'] = False
 
     return settings_dict
 
@@ -592,7 +593,7 @@
 @click.argument('deployment_id')
 @common_create_options
 @deepsea_options
-@ceph_bootstrap_options
+@ceph_salt_options
 @libvirt_options
 @click.option("--use-deepsea/--use-orchestrator", default=False,
               help="Use deepsea to deploy SES7 instead of SSH orchestrator")
@@ -624,7 +625,7 @@
 @click.argument('deployment_id')
 @common_create_options
 @deepsea_options
-@ceph_bootstrap_options
+@ceph_salt_options
 @libvirt_options
 @click.option("--use-deepsea/--use-orchestrator", default=False,
               help="Use deepsea to deploy Ceph Octopus instead of SSH orchestrator")
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/sesdev.spec new/sesdev-1.1.5+1582717868.g68df753/sesdev.spec
--- old/sesdev-1.1.2+1581962442.g190d64e/sesdev.spec	2020-02-17 19:00:42.776033547 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/sesdev.spec	2020-02-26 12:51:08.152981539 +0100
@@ -20,7 +20,7 @@
 %endif

 Name: sesdev
-Version: 1.1.2+1581962442.g190d64e
+Version: 1.1.5+1582717868.g68df753
 Release: 1%{?dist}
 Summary: CLI tool to deploy and manage SES clusters
 License: MIT
@@ -64,6 +64,13 @@
 versions of Ceph and SES, as well as, different versions of the openSUSE
 based OS.
+%package qa
+Summary: Integration test script for validating Ceph deployments
+
+%description qa
+Integration test script for validating Ceph clusters deployed
+by sesdev
+
 %prep
 %autosetup -p1
 %if 0%{?fedora} && 0%{?fedora} < 30
@@ -76,13 +83,25 @@
 %install
 %py3_install
 %fdupes %{buildroot}%{python3_sitelib}
+# qa script installation
+install -m 0755 -d %{buildroot}/%{_datadir}/%{name}/qa
+install -m 0755 -d %{buildroot}/%{_datadir}/%{name}/qa/common
+install -m 0755 qa/health-ok.sh %{buildroot}/%{_datadir}/%{name}/qa/health-ok.sh
+install -m 0644 qa/common/common.sh %{buildroot}/%{_datadir}/%{name}/qa/common/common.sh
+install -m 0644 qa/common/helper.sh %{buildroot}/%{_datadir}/%{name}/qa/common/helper.sh
+install -m 0644 qa/common/json.sh %{buildroot}/%{_datadir}/%{name}/qa/common/json.sh
+install -m 0644 qa/common/zypper.sh %{buildroot}/%{_datadir}/%{name}/qa/common/zypper.sh

 %files
 %license LICENSE
 %doc CHANGELOG.md README.md
 %{python3_sitelib}/seslib*/
 %{python3_sitelib}/sesdev*/
-%{_bindir}/sesdev
+%{_bindir}/%{name}
+%dir %{_datadir}/%{name}
+
+%files qa
+%{_datadir}/%{name}/qa

 %changelog
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/seslib/__init__.py new/sesdev-1.1.5+1582717868.g68df753/seslib/__init__.py
--- old/sesdev-1.1.2+1581962442.g190d64e/seslib/__init__.py	2020-02-17 19:00:42.492033652 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/seslib/__init__.py	2020-02-26 12:51:07.828981726 +0100
@@ -31,9 +31,14 @@
     CONFIG_FILE = os.path.join(WORKING_DIR, 'config.yaml')

     @classmethod
-    def init(cls, working_dir):
-        cls.WORKING_DIR = working_dir
-        os.makedirs(cls.WORKING_DIR, exist_ok=True)
+    def init_path_to_qa(cls, full_path_to_sesdev_executable):
+        if full_path_to_sesdev_executable.startswith('/usr'):
+            cls.PATH_TO_QA = '/usr/share/sesdev-qa'
+        else:
+            cls.PATH_TO_QA = os.path.join(
+                os.path.dirname(full_path_to_sesdev_executable),
+                '../qa/'
+            )

 OS_BOX_MAPPING = {
@@ -259,7 +264,7 @@
         'default': [["admin", "client", "prometheus", "grafana", "openattic"],
                     ["storage", "mon", "mgr", "rgw", "igw"],
                     ["storage", "mon", "mgr", "mds", "igw", "ganesha"],
-                    ["storage", "mon", "mds", "rgw", "ganesha"]]
+                    ["storage", "mon", "mgr", "mds", "rgw", "ganesha"]]
     },
     'public_network': {
         'type': str,
@@ -326,24 +331,24 @@
         'help': 'SCC organization password',
         'default': None
     },
-    'ceph_bootstrap_git_repo': {
+    'ceph_salt_git_repo': {
         'type': str,
-        'help': 'If set, it will install ceph-bootstrap from this git repo',
+        'help': 'If set, it will install ceph-salt from this git repo',
         'default': None
     },
-    'ceph_bootstrap_git_branch': {
+    'ceph_salt_git_branch': {
         'type': str,
-        'help': 'ceph-bootstrap git branch to use',
+        'help': 'ceph-salt git branch to use',
         'default': 'master'
     },
-    'stop_before_ceph_bootstrap_config': {
+    'stop_before_ceph_salt_config': {
         'type': bool,
-        'help': 'Stops deployment before ceph-bootstrap config',
+        'help': 'Stops deployment before ceph-salt config',
         'default': False
     },
-    'stop_before_ceph_bootstrap_deploy': {
+    'stop_before_ceph_salt_deploy': {
         'type': bool,
-        'help': 'Stops deployment before ceph-bootstrap deploy',
+        'help': 'Stops deployment before ceph-salt deploy',
         'default': False
     },
     'ceph_container_image': {
@@ -351,29 +356,29 @@
         'help': 'Container image path for Ceph daemons',
         'default': None
     },
-    'ceph_bootstrap_deploy_bootstrap': {
+    'ceph_salt_deploy_bootstrap': {
         'type': bool,
-        'help': 'Enable deployment bootstrap (aka ceph-daemon bootstrap) in ceph-bootstrap',
+        'help': 'Enable deployment bootstrap (aka ceph-daemon bootstrap) in ceph-salt',
         'default': True
     },
-    'ceph_bootstrap_deploy_mons': {
+    'ceph_salt_deploy_mons': {
         'type': bool,
-        'help': 'Enable deployment of Ceph Mons in ceph-bootstrap',
+        'help': 'Enable deployment of Ceph Mons in ceph-salt',
         'default': True
     },
-    'ceph_bootstrap_deploy_mgrs': {
+    'ceph_salt_deploy_mgrs': {
         'type': bool,
-        'help': 'Enable deployment of Ceph Mgrs in ceph-bootstrap',
+        'help': 'Enable deployment of Ceph Mgrs in ceph-salt',
         'default': True
     },
-    'ceph_bootstrap_deploy_osds': {
+    'ceph_salt_deploy_osds': {
         'type': bool,
-        'help': 'Enable deployment of Ceph OSDs in ceph-bootstrap',
+        'help': 'Enable deployment of Ceph OSDs in ceph-salt',
         'default': True
     },
-    'ceph_bootstrap_deploy': {
+    'ceph_salt_deploy': {
         'type': bool,
-        'help': 'Use `ceph-bootstrap deploy` command to run ceph-salt formula',
+        'help': 'Use `ceph-salt deploy` command to run ceph-salt formula',
         'default': True
     },
     'caasp_deploy_ses': {
@@ -859,6 +864,7 @@
         os_base_repos = []

         context = {
+            'sesdev_path_to_qa': GlobalSettings.PATH_TO_QA,
             'dep_id': self.dep_id,
             'os': self.settings.os,
             'vm_engine': self.settings.vm_engine,
@@ -893,16 +899,16 @@
             'total_osds': self.settings.num_disks * self.node_counts["storage"],
             'scc_username': self.settings.scc_username,
             'scc_password': self.settings.scc_password,
-            'ceph_bootstrap_git_repo': self.settings.ceph_bootstrap_git_repo,
-            'ceph_bootstrap_git_branch': self.settings.ceph_bootstrap_git_branch,
-            'stop_before_ceph_bootstrap_config': self.settings.stop_before_ceph_bootstrap_config,
-            'stop_before_ceph_bootstrap_deploy': self.settings.stop_before_ceph_bootstrap_deploy,
+            'ceph_salt_git_repo': self.settings.ceph_salt_git_repo,
+            'ceph_salt_git_branch': self.settings.ceph_salt_git_branch,
+            'stop_before_ceph_salt_config': self.settings.stop_before_ceph_salt_config,
+            'stop_before_ceph_salt_deploy': self.settings.stop_before_ceph_salt_deploy,
             'ceph_container_image': self.settings.ceph_container_image,
-            'ceph_bootstrap_deploy_bootstrap': self.settings.ceph_bootstrap_deploy_bootstrap,
-            'ceph_bootstrap_deploy_mons': self.settings.ceph_bootstrap_deploy_mons,
-            'ceph_bootstrap_deploy_mgrs': self.settings.ceph_bootstrap_deploy_mgrs,
-            'ceph_bootstrap_deploy_osds': self.settings.ceph_bootstrap_deploy_osds,
-            'ceph_bootstrap_deploy': self.settings.ceph_bootstrap_deploy,
+            'ceph_salt_deploy_bootstrap': self.settings.ceph_salt_deploy_bootstrap,
+            'ceph_salt_deploy_mons': self.settings.ceph_salt_deploy_mons,
+            'ceph_salt_deploy_mgrs': self.settings.ceph_salt_deploy_mgrs,
+            'ceph_salt_deploy_osds': self.settings.ceph_salt_deploy_osds,
+            'ceph_salt_deploy': self.settings.ceph_salt_deploy,
             'node_manager': NodeManager(list(self.nodes.values())),
             'caasp_deploy_ses': self.settings.caasp_deploy_ses,
         }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/Vagrantfile.j2 new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/Vagrantfile.j2
--- old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/Vagrantfile.j2	2020-02-17 19:00:42.492033652 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/Vagrantfile.j2	2020-02-26 12:51:07.828981726 +0100
@@ -19,7 +19,10 @@
       destination:".ssh/id_rsa.pub"

 {% if node == admin %}
-    node.vm.provision "file", source: "bin/", destination:"/home/vagrant/"
+    node.vm.provision "file", source: "bin/", destination: "/home/vagrant/"
+{% if qa_test is defined and qa_test is sameas true %}
+    node.vm.provision "file", source: "{{ sesdev_path_to_qa }}", destination: "/home/vagrant/sesdev-qa"
+{% endif %}
 {% endif %}

     node.vm.synced_folder ".", "/vagrant", disabled: true
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/ceph-bootstrap/ceph_bootstrap_deployment.sh.j2 new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/ceph-bootstrap/ceph_bootstrap_deployment.sh.j2
--- old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/ceph-bootstrap/ceph_bootstrap_deployment.sh.j2	2020-02-17 19:00:42.492033652 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/ceph-bootstrap/ceph_bootstrap_deployment.sh.j2	1970-01-01 01:00:00.000000000 +0100
@@ -1,119 +0,0 @@
-
-set -ex
-
-{% if ceph_bootstrap_git_repo %}
-# install ceph-bootstrap
-cd /root
-git clone {{ ceph_bootstrap_git_repo }}
-cd ceph-bootstrap
-zypper -n in autoconf gcc python3-devel python3-pip python3-curses
-git checkout {{ ceph_bootstrap_git_branch }}
-pip install .
-# install ceph-salt-formula
-cp -r ceph-salt-formula/salt/* /srv/salt/
-chown -R salt:salt /srv
-{% else %}
-# ceph-salt-formula is installed automatically as a dependency of ceph-bootstrap
-zypper -n in ceph-bootstrap
-{% if qa_test is defined and qa_test is sameas true %}
-zypper -n in ceph-bootstrap-qa
-{% endif %}
-{% endif %}
-
-systemctl restart salt-master
-
-# make sure all minions are responding
-set +ex
-LOOP_COUNT="0"
-while true ; do
-    set -x
-    sleep 5
-    set +x
-    if [ "$LOOP_COUNT" -ge "20" ] ; then
-        echo "ERROR: minion(s) not responding to ping?"
-        exit 1
-    fi
-    LOOP_COUNT="$((LOOP_COUNT + 1))"
-    set -x
-    MINIONS_RESPONDING="$(salt '*' test.ping | grep True | wc --lines)"
-    if [ "$MINIONS_RESPONDING" = "{{ nodes|length }}" ]; then
-        break
-    fi
-    set +x
-done
-set -ex
-
-salt '*' saltutil.pillar_refresh
-salt '*' saltutil.sync_all
-
-sleep 2
-
-{% if stop_before_ceph_bootstrap_config %}
-exit 0
-{% endif %}
-
-{% for node in nodes %}
-ceph-bootstrap config /Cluster/Minions add {{ node.fqdn }}
-{% if node.has_role('mon') %}
-ceph-bootstrap config /Cluster/Roles/Mon add {{ node.fqdn }}
-{% endif %}
-{% if node.has_role('mgr') %}
-ceph-bootstrap config /Cluster/Roles/Mgr add {{ node.fqdn }}
-{% endif %}
-{% endfor %}
-
-ceph-bootstrap config /SSH/ generate
-{% if ceph_container_image %}
-ceph-bootstrap config /Containers/Images/ceph set {{ ceph_container_image }}
-{% endif %}
-ceph-bootstrap config /Time_Server/Server_Hostname set {{ admin.fqdn }}
-ceph-bootstrap config /Time_Server/External_Servers add 0.pt.pool.ntp.org
-{% if not ceph_bootstrap_deploy_bootstrap %}
-ceph-bootstrap config /Deployment/Bootstrap disable
-{% endif %}
-{% if ceph_bootstrap_deploy_mons %}
-ceph-bootstrap config /Deployment/Mon enable
-{% endif %}
-{% if ceph_bootstrap_deploy_mgrs %}
-ceph-bootstrap config /Deployment/Mgr enable
-{% endif %}
-
-{% if ceph_bootstrap_deploy_mons %}
-ceph-bootstrap config /Deployment/OSD enable
-
-# OSDs drive groups spec for each node
-{% for node in nodes %}
-{% if node.has_role('storage') %}
-ceph-bootstrap config /Storage/Drive_Groups add value="{\"testing_dg_{{ node.name }}\": {\"host_pattern\": \"{{ node.name }}*\", \"data_devices\": {\"all\": true}}}"
-{% endif %}
-{% endfor %}
-{% endif %} {# if ceph_bootstrap_deploy_osds #}
-
-ceph-bootstrap config /Deployment/Dashboard/username set admin
-ceph-bootstrap config /Deployment/Dashboard/password set admin
-
-ceph-bootstrap config ls
-
-zypper lr -upEP
-zypper info cephadm | grep -E '(^Repo|^Version)'
-ceph-bootstrap --version
-
-{% if stop_before_ceph_bootstrap_deploy %}
-exit 0
-{% endif %}
-
-{% if ceph_bootstrap_deploy %}
-stdbuf -o0 ceph-bootstrap -ldebug deploy --non-interactive
-{% else %}
-salt -G 'ceph-salt:member' state.apply ceph-salt
-{% endif %}
-
-{% if qa_test is defined and qa_test is sameas true -%}
-{%- if ceph_bootstrap_git_repo -%}
-/root/ceph-bootstrap/
-{%- else -%}
-/usr/share/ceph-bootstrap/
-{%- endif -%}
-qa/health-ok.sh --total-nodes={{ nodes|length }} --nfs-ganesha-nodes={{ ganesha_nodes }} --igw-nodes={{ igw_nodes }} --mds-nodes={{ mds_nodes }} --mgr-nodes={{ mgr_nodes }} --mon-nodes={{ mon_nodes }} --osd-nodes={{ storage_nodes }} --osds={{ total_osds }} --rgw-nodes={{ rgw_nodes }}
-{%- endif %}
-
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2 new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2
--- old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2	1970-01-01 01:00:00.000000000 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/ceph-salt/ceph_salt_deployment.sh.j2	2020-02-26 12:51:07.832981723 +0100
@@ -0,0 +1,113 @@
+
+set -ex
+
+{% if ceph_salt_git_repo %}
+# install ceph-salt
+cd /root
+git clone {{ ceph_salt_git_repo }}
+cd ceph-salt
+zypper -n in autoconf gcc python3-devel python3-pip python3-curses
+git checkout {{ ceph_salt_git_branch }}
+pip install .
+# install ceph-salt-formula
+cp -r ceph-salt-formula/salt/* /srv/salt/
+chown -R salt:salt /srv
+{% else %}
+# ceph-salt-formula is installed automatically as a dependency of ceph-salt
+zypper -n in ceph-salt
+{% endif %}
+
+systemctl restart salt-master
+
+# make sure all minions are responding
+set +ex
+LOOP_COUNT="0"
+while true ; do
+    set -x
+    sleep 5
+    set +x
+    if [ "$LOOP_COUNT" -ge "20" ] ; then
+        echo "ERROR: minion(s) not responding to ping?"
+        exit 1
+    fi
+    LOOP_COUNT="$((LOOP_COUNT + 1))"
+    set -x
+    MINIONS_RESPONDING="$(salt '*' test.ping | grep True | wc --lines)"
+    if [ "$MINIONS_RESPONDING" = "{{ nodes|length }}" ]; then
+        break
+    fi
+    set +x
+done
+set -ex
+
+salt '*' saltutil.pillar_refresh
+salt '*' saltutil.sync_all
+
+sleep 2
+
+{% if stop_before_ceph_salt_config %}
+exit 0
+{% endif %}
+
+{% for node in nodes %}
+ceph-salt config /Cluster/Minions add {{ node.fqdn }}
+{% if node.has_role('mon') %}
+ceph-salt config /Cluster/Roles/Mon add {{ node.fqdn }}
+{% endif %}
+{% if node.has_role('mgr') %}
+ceph-salt config /Cluster/Roles/Mgr add {{ node.fqdn }}
+{% endif %}
+{% endfor %}
+
+ceph-salt config /System_Update/Packages disable
+ceph-salt config /System_Update/Reboot disable
+ceph-salt config /SSH/ generate
+{% if ceph_container_image %}
+ceph-salt config /Containers/Images/ceph set {{ ceph_container_image }}
+{% endif %}
+ceph-salt config /Time_Server/Server_Hostname set {{ admin.fqdn }}
+ceph-salt config /Time_Server/External_Servers add 0.pt.pool.ntp.org
+{% if not ceph_salt_deploy_bootstrap %}
+ceph-salt config /Deployment/Bootstrap disable
+{% endif %}
+{% if ceph_salt_deploy_mons %}
+ceph-salt config /Deployment/Mon enable
+{% endif %}
+{% if ceph_salt_deploy_mgrs %}
+ceph-salt config /Deployment/Mgr enable
+{% endif %}
+
+{% if ceph_salt_deploy_mons %}
+ceph-salt config /Deployment/OSD enable
+
+# OSDs drive groups spec for each node
+{% for node in nodes %}
+{% if node.has_role('storage') %}
+ceph-salt config /Storage/Drive_Groups add value="{\"testing_dg_{{ node.name }}\": {\"host_pattern\": \"{{ node.name }}*\", \"data_devices\": {\"all\": true}}}"
+{% endif %}
+{% endfor %}
+{% endif %} {# if ceph_salt_deploy_osds #}
+
+ceph-salt config /Deployment/Dashboard/username set admin
+ceph-salt config /Deployment/Dashboard/password set admin
+
+ceph-salt config ls
+
+zypper lr -upEP
+zypper info cephadm | grep -E '(^Repo|^Version)'
+ceph-salt --version
+
+{% if stop_before_ceph_salt_deploy %}
+exit 0
+{% endif %}
+
+{% if ceph_salt_deploy %}
+stdbuf -o0 ceph-salt -ldebug deploy --non-interactive
+{% else %}
+salt -G 'ceph-salt:member' state.apply ceph-salt
+{% endif %}
+
+{% if qa_test is defined and qa_test is sameas true %}
+/home/vagrant/sesdev-qa/health-ok.sh --total-nodes={{ nodes|length }} --nfs-ganesha-nodes={{ ganesha_nodes }} --igw-nodes={{ igw_nodes }} --mds-nodes={{ mds_nodes }} --mgr-nodes={{ mgr_nodes }} --mon-nodes={{ mon_nodes }} --osd-nodes={{ storage_nodes }} --osds={{ total_osds }} --rgw-nodes={{ rgw_nodes }}
{% endif %}
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/provision.sh.j2 new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/provision.sh.j2
--- old/sesdev-1.1.2+1581962442.g190d64e/seslib/templates/provision.sh.j2	2020-02-17 19:00:42.496033651 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/seslib/templates/provision.sh.j2	2020-02-26 12:51:07.832981723 +0100
@@ -1,9 +1,18 @@
 {% include "engine/" + vm_engine + "/vagrant.provision.sh.j2" ignore missing %}

+# show the user what we are doing
+set -x
+
+ls -lR /home/vagrant
+
 # remove the first line introduced by vagrant
 head -1 /etc/hosts | grep -q '127.*' && sed -i '1d' /etc/hosts

 {% for _node in nodes %}
+
+# remove "which" RPM because testing environments typically don't have it installed
+zypper -n rm which || true
+
 {% if _node.public_address %}
 echo "{{ _node.public_address }} {{ _node.fqdn }} {{ _node.name }}" >> /etc/hosts
 {% endif %}
@@ -138,7 +147,7 @@
 {% endif %}

 {% if node == admin and deployment_tool == "orchestrator" %}
-{% include "ceph-bootstrap/ceph_bootstrap_deployment.sh.j2" %}
+{% include "ceph-salt/ceph_salt_deployment.sh.j2" %}
 {% endif %}
 {% endif %} {# node == admin or node == suma #}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sesdev-1.1.2+1581962442.g190d64e/setup.cfg new/sesdev-1.1.5+1582717868.g68df753/setup.cfg
--- old/sesdev-1.1.2+1581962442.g190d64e/setup.cfg	2020-02-17 19:00:42.496033651 +0100
+++ new/sesdev-1.1.5+1582717868.g68df753/setup.cfg	2020-02-26 12:51:07.832981723 +0100
@@ -37,7 +37,7 @@
         templates/*.j2
         templates/deepsea/*.j2
         templates/caasp/*.j2
-        templates/ceph-bootstrap/*.j2
+        templates/ceph-salt/*.j2
         templates/suma/*.j2
         templates/engine/libvirt/*.j2
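For reviewers who want to sanity-check the --deploy-* rework (PR #120) and the --no-deploy-mgrs fix (PR #122) without re-reading the sesdev.py hunks, the cascade logic boils down to the following standalone Python sketch. The function name and keyword defaults are made up for illustration; only the dictionary keys and the cascade behavior come from the diff above:

```python
def deploy_flags_to_settings(deploy_bootstrap=True, deploy_mons=True,
                             deploy_mgrs=True, deploy_osds=True):
    """Mirror the settings_dict logic from the sesdev.py hunks:
    --no-deploy-bootstrap disables all other --deploy-* settings,
    while each individual --no-deploy-* flag disables only its own.
    """
    settings = {}
    if not deploy_bootstrap:
        settings['ceph_salt_deploy_bootstrap'] = False
        settings['ceph_salt_deploy_mons'] = False
        settings['ceph_salt_deploy_mgrs'] = False
        settings['ceph_salt_deploy_osds'] = False
    if not deploy_mons:
        settings['ceph_salt_deploy_mons'] = False
    if not deploy_mgrs:
        # before PR #122 this branch mistakenly cleared ..._mons
        settings['ceph_salt_deploy_mgrs'] = False
    if not deploy_osds:
        settings['ceph_salt_deploy_osds'] = False
    return settings
```

Note that only flags set to False are written into the settings dict; the True defaults come from the Settings schema in seslib/__init__.py.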

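The qa-path resolution added to seslib/__init__.py (GlobalSettings.init_path_to_qa, used to ship the health-ok.sh scripts to the admin node via the Vagrantfile template) can be sketched as a plain function. This recast is illustrative, not sesdev's actual code; only the /usr check and the two resulting paths come from the diff:

```python
import os

def path_to_qa(sesdev_executable):
    """A packaged sesdev (installed under /usr) finds the qa scripts in
    /usr/share/sesdev-qa; a source checkout uses ../qa relative to the
    sesdev executable."""
    if sesdev_executable.startswith('/usr'):
        return '/usr/share/sesdev-qa'
    return os.path.join(os.path.dirname(sesdev_executable), '../qa/')
```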