Repository: bigtop
Updated Branches:
  refs/heads/master bb705d768 -> b6c1446b0


BIGTOP-2617: refresh juju spark-processing bundle (closes #168)

Signed-off-by: Kevin W Monroe <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/bigtop/repo
Commit: http://git-wip-us.apache.org/repos/asf/bigtop/commit/b6c1446b
Tree: http://git-wip-us.apache.org/repos/asf/bigtop/tree/b6c1446b
Diff: http://git-wip-us.apache.org/repos/asf/bigtop/diff/b6c1446b

Branch: refs/heads/master
Commit: b6c1446b001d6a9e8953f036649bcb450b0585cb
Parents: bb705d7
Author: Kevin W Monroe <[email protected]>
Authored: Fri Oct 28 15:14:48 2016 +0000
Committer: Kevin W Monroe <[email protected]>
Committed: Sun Dec 4 20:17:54 2016 -0600

----------------------------------------------------------------------
 bigtop-deploy/juju/spark-processing/README.md   | 148 ++++++++++++-------
 .../juju/spark-processing/bundle-dev.yaml       |  46 +++---
 .../juju/spark-processing/bundle-local.yaml     |  46 +++---
 bigtop-deploy/juju/spark-processing/bundle.yaml |  48 +++---
 .../juju/spark-processing/tests/01-bundle.py    |  47 +++++-
 .../juju/spark-processing/tests/tests.yaml      |   5 +
 6 files changed, 206 insertions(+), 134 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/bigtop/blob/b6c1446b/bigtop-deploy/juju/spark-processing/README.md
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/README.md b/bigtop-deploy/juju/spark-processing/README.md
index 335566b..36cf029 100644
--- a/bigtop-deploy/juju/spark-processing/README.md
+++ b/bigtop-deploy/juju/spark-processing/README.md
@@ -17,17 +17,20 @@
 # Overview
 
 This bundle provides a complete deployment of
-[Apache Spark](https://spark.apache.org/) in standalone HA mode as provided
-by [Apache Bigtop](http://bigtop.apache.org/). Ganglia and rsyslog
+[Apache Spark][] in standalone HA mode as provided
+by [Apache Bigtop][]. Ganglia and rsyslog
 applications are included to monitor cluster health and syslog activity.
 
+[Apache Spark]: http://spark.apache.org/
+[Apache Bigtop]: http://bigtop.apache.org/
+
 ## Bundle Composition
 
-The applications that comprise this bundle are spread across 7 units as
+The applications that comprise this bundle are spread across 6 units as
 follows:
 
   * Spark (Master and Worker)
-    * 3 separate units
+    * 2 separate units
   * Zookeeper
     * 3 separate units
   * Ganglia (Web interface for monitoring cluster metrics)
@@ -42,61 +45,85 @@ demands.
 # Deploying
 
 A working Juju installation is assumed to be present. If Juju is not yet set
-up, please follow the
-[getting-started](https://jujucharms.com/docs/2.0/getting-started)
-instructions prior to deploying this bundle.
+up, please follow the [getting-started][] instructions prior to deploying this
+bundle.
+
+> **Note**: This bundle requires hardware resources that may exceed limits
+of Free-tier or Trial accounts on some clouds. To deploy to these
+environments, modify a local copy of [bundle.yaml][] with
+`zookeeper: num_units: 1` and `machines: 'X': constraints: mem=3G` as needed
+to satisfy account limits.
 
-Once ready, deploy this bundle with the `juju deploy` command:
+Deploy this bundle from the Juju charm store with the `juju deploy` command:
 
     juju deploy spark-processing
 
 > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, use [juju-quickstart](https://launchpad.net/juju-quickstart) with the
-following syntax: `juju quickstart spark-processing`.
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+spark-processing`.
+
+Alternatively, deploy a locally modified `bundle.yaml` with:
+
+    juju deploy /path/to/bundle.yaml
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+/path/to/bundle.yaml`.
 
 The charms in this bundle can also be built from their source layers in the
 [Bigtop charm repository][].  See the [Bigtop charm README][] for instructions
 on building and deploying these charms locally.
 
+## Network-Restricted Environments
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate proxy and/or
+mirror options. See [Configuring Models][] for more information.
+
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
+[bundle.yaml]: https://github.com/apache/bigtop/blob/master/bigtop-deploy/juju/spark-processing/bundle.yaml
+[juju-quickstart]: https://launchpad.net/juju-quickstart
 [Bigtop charm repository]: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
 [Bigtop charm README]: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/README.md
+[Configuring Models]: https://jujucharms.com/docs/stable/models-config
 
 
 # Verifying
 
 ## Status
-The applications that make up this bundle provide status messages to
-indicate when they are ready:
+The applications that make up this bundle provide status messages to indicate
+when they are ready:
 
     juju status
 
 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:
 
-    watch -n 0.5 juju status
+    watch -n 2 juju status
 
 The message for each unit will provide information about that unit's state.
 Once they all indicate that they are ready, perform application smoke tests
 to verify that the bundle is working as expected.
 
 ## Smoke Test
-The spark charm provides a `smoke-test` action that can be used to verify the
-application is functioning as expected. Run it as follows:
+The spark and zookeeper charms provide a `smoke-test` action that can be used
+to verify the respective application is functioning as expected. Run these
+actions as follows:
 
     juju run-action spark/0 smoke-test
+    juju run-action zookeeper/0 smoke-test
 
 > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
-of Juju, the syntax is `juju action do spark/0 smoke-test`.
+of Juju, the syntax is `juju action do <application>/0 smoke-test`.
 
-You can watch the progress of the smoke test action with:
+Watch the progress of the smoke test actions with:
 
-    watch -n 0.5 juju show-action-status
+    watch -n 2 juju show-action-status
 
 > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
 of Juju, the syntax is `juju action status`.
 
-Eventually, the smoke test should settle to `status: completed`.  If
-it reports `status: failed`, Spark is not working as expected. Get
+Eventually, all of the actions should settle to `status: completed`.  If
+any report `status: failed`, that application is not working as expected. Get
 more information about the smoke-test action
 
     juju show-action-output <action-id>
@@ -104,29 +131,46 @@ more information about the smoke-test action
 > **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
 of Juju, the syntax is `juju action fetch <action-id>`.
 
+## Utilities
+Applications in this bundle include Zookeeper command line and Spark web
+utilities that can be used to verify information about the cluster.
 
-# Monitoring
+From the command line, show the list of Zookeeper nodes with the following:
 
-This bundle includes Ganglia for system-level monitoring of the spark units.
-Metrics are sent to a centralized ganglia unit for easy viewing in a browser.
-To view the ganglia web interface, first expose the service:
+    juju run --unit zookeeper/0 'echo "ls /" | /usr/lib/zookeeper/bin/zkCli.sh'
 
-    juju expose ganglia
+To access the Spark web console, find the `PUBLIC-ADDRESS` of the spark
+application and expose it:
 
-Now find the ganglia public IP address:
+    juju status spark
+    juju expose spark
+
+The web interface will be available at the following URL:
+
+    http://SPARK_PUBLIC_IP:8080
+
+
+# Monitoring
+
+This bundle includes Ganglia for system-level monitoring of the spark and
+zookeeper units. Metrics are sent to a centralized ganglia unit for easy
+viewing in a browser. To view the ganglia web interface, find the
+`PUBLIC-ADDRESS` of the Ganglia application and expose it:
 
     juju status ganglia
+    juju expose ganglia
 
-The ganglia web interface will be available at:
+The web interface will be available at:
 
     http://GANGLIA_PUBLIC_IP/ganglia
 
 
 # Logging
 
-This bundle includes rsyslog to collect syslog data from the spark unit. These
-logs are sent to a centralized rsyslog unit for easy syslog analysis. One
-method of viewing this log data is to simply cat syslog from the rsyslog unit:
+This bundle includes rsyslog to collect syslog data from the spark and
+zookeeper units. These logs are sent to a centralized rsyslog unit for easy
+syslog analysis. One method of viewing this log data is to simply cat syslog
+from the rsyslog unit:
 
     juju run --unit rsyslog/0 'sudo cat /var/log/syslog'
 
@@ -142,16 +186,22 @@ the performance of the Spark cluster. Each benchmark is an action that can be
 run with `juju run-action`:
 
     $ juju actions spark | grep Bench
+    connectedcomponent                Run the Spark Bench ConnectedComponent benchmark.
+    decisiontree                      Run the Spark Bench DecisionTree benchmark.
+    kmeans                            Run the Spark Bench KMeans benchmark.
+    linearregression                  Run the Spark Bench LinearRegression benchmark.
     logisticregression                Run the Spark Bench LogisticRegression benchmark.
     matrixfactorization               Run the Spark Bench MatrixFactorization benchmark.
     pagerank                          Run the Spark Bench PageRank benchmark.
+    pca                               Run the Spark Bench PCA benchmark.
+    pregeloperation                   Run the Spark Bench PregelOperation benchmark.
+    shortestpaths                     Run the Spark Bench ShortestPaths benchmark.
     sql                               Run the Spark Bench SQL benchmark.
-    streaming                         Run the Spark Bench Streaming benchmark.
+    stronglyconnectedcomponent        Run the Spark Bench StronglyConnectedComponent benchmark.
     svdplusplus                       Run the Spark Bench SVDPlusPlus benchmark.
     svm                               Run the Spark Bench SVM benchmark.
-    trianglecount                     Run the Spark Bench TriangleCount benchmark.
 
-    $ juju run-action spark/0 pagerank
+    $ juju run-action spark/0 svdplusplus
     Action queued with id: 339cec1f-e903-4ee7-85ca-876fb0c3d28e
 
     $ juju show-action-output 339cec1f-e903-4ee7-85ca-876fb0c3d28e
@@ -160,48 +210,40 @@ run with `juju run-action`:
         composite:
           direction: asc
           units: secs
-          value: ".982000"
+          value: "200.754000"
         raw: |
-          PageRank,0,.982000,,,,PageRank-MLlibConfig,,,,,10,12,,200000,4.0,1.3,0.15
-        start: 2016-09-22T21:52:26Z
-        stop: 2016-09-22T21:52:33Z
+          SVDPlusPlus,2016-11-02-03:08:26,200.754000,85.974071,.428255,0,SVDPlusPlus-MLlibConfig,,,,,10,,,50000,4.0,1.3,
+        start: 2016-11-02T03:08:26Z
+        stop: 2016-11-02T03:11:47Z
       results:
         duration:
           direction: asc
           units: secs
-          value: ".982000"
+          value: "200.754000"
         throughput:
           direction: desc
           units: x/sec
-          value: ""
+          value: ".428255"
     status: completed
     timing:
-      completed: 2016-09-22 21:52:36 +0000 UTC
-      enqueued: 2016-09-22 21:52:09 +0000 UTC
-      started: 2016-09-22 21:52:13 +0000 UTC
+      completed: 2016-11-02 03:11:48 +0000 UTC
+      enqueued: 2016-11-02 03:08:21 +0000 UTC
+      started: 2016-11-02 03:08:26 +0000 UTC
 
 
 # Scaling
 
-By default, three spark units are deployed. To increase the amount of spark
-workers, simply add more units. To add one unit:
+By default, three spark and three zookeeper units are deployed. Scaling these
+applications is as simple as adding more units. To add one unit:
 
     juju add-unit spark
+    juju add-unit zookeeper
 
 Multiple units may be added at once.  For example, add four more spark units:
 
     juju add-unit -n4 spark
 
 
-# Network-Restricted Environments
-
-Charms can be deployed in environments with limited network access. To deploy
-in this environment, configure a Juju model with appropriate
-proxy and/or mirror options. See
-[Configuring Models](https://jujucharms.com/docs/2.0/models-config) for more
-information.
-
-
 # Contact Information
 
 - <[email protected]>

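The README's verification flow above boils down to queueing a `smoke-test` action and then checking the `status` field of `juju show-action-output`. As a minimal Python sketch of that check (the helper name is hypothetical, not part of the charm code; the dict shape mirrors the SVDPlusPlus output later in this commit):

```python
# Sketch: decide pass/fail from a parsed `juju show-action-output` result.
# `action_succeeded` is an illustrative helper, not part of the charms.

def action_succeeded(result):
    """Return True only when the action settled to status: completed."""
    return result.get('status') == 'completed'

# Example result, abbreviated from the benchmark output in this commit.
result = {
    'status': 'completed',
    'results': {
        'duration': {'direction': 'asc', 'units': 'secs', 'value': '200.754000'},
        'throughput': {'direction': 'desc', 'units': 'x/sec', 'value': '.428255'},
    },
}

print(action_succeeded(result))                # True
print(action_succeeded({'status': 'failed'}))  # False
```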
http://git-wip-us.apache.org/repos/asf/bigtop/blob/b6c1446b/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/bundle-dev.yaml b/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
index aaaf514..c9689ec 100644
--- a/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle-dev.yaml
@@ -1,47 +1,46 @@
 services:
   spark:
     charm: "cs:~bigdata-dev/xenial/spark"
-    num_units: 3
+    num_units: 2
     annotations:
       gui-x: "500"
       gui-y: "0"
     to:
+      - "0"
       - "1"
-      - "2"
-      - "3"
   zookeeper:
-    charm: "cs:~charmers/trusty/zookeeper-1"
+    charm: "cs:xenial/zookeeper-10"
     num_units: 3
     annotations:
       gui-x: "500"
       gui-y: "400"
     to:
+      - "2"
+      - "3"
       - "4"
-      - "5"
-      - "6"
   ganglia:
-    charm: "cs:trusty/ganglia-2"
+    charm: "cs:~bigdata-dev/xenial/ganglia-5"
     num_units: 1
     annotations:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "7"
+      - "5"
   ganglia-node:
-    charm: "cs:~bigdata-dev/xenial/ganglia-node-2"
+    charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:trusty/rsyslog-10"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "7"
+      - "5"
   rsyslog-forwarder-ha:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-2"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
       gui-x: "750"
       gui-y: "400"
@@ -49,28 +48,27 @@ series: xenial
 relations:
   - [spark, zookeeper]
   - ["ganglia-node:juju-info", "spark:juju-info"]
+  - ["ganglia-node:juju-info", "zookeeper:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
   - ["rsyslog-forwarder-ha:juju-info", "spark:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
+  "0":
+    constraints: "mem=7G root-disk=32G"
+    series: "xenial"
   "1":
-    constraints: "mem=7G"
+    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G"
+    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G"
+    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G"
-    series: "trusty"
+    constraints: "mem=3G root-disk=32G"
+    series: "xenial"
   "5":
     constraints: "mem=3G"
-    series: "trusty"
-  "6":
-    constraints: "mem=3G"
-    series: "trusty"
-  "7":
-    constraints: "mem=3G"
-    series: "trusty"
+    series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/b6c1446b/bigtop-deploy/juju/spark-processing/bundle-local.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/bundle-local.yaml b/bigtop-deploy/juju/spark-processing/bundle-local.yaml
index b8a4d31..90e51e7 100644
--- a/bigtop-deploy/juju/spark-processing/bundle-local.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle-local.yaml
@@ -1,47 +1,46 @@
 services:
   spark:
     charm: "/home/ubuntu/charms/xenial/spark"
-    num_units: 3
+    num_units: 2
     annotations:
       gui-x: "500"
       gui-y: "0"
     to:
+      - "0"
       - "1"
-      - "2"
-      - "3"
   zookeeper:
-    charm: "cs:~charmers/trusty/zookeeper-1"
+    charm: "cs:xenial/zookeeper-10"
     num_units: 3
     annotations:
       gui-x: "500"
       gui-y: "400"
     to:
+      - "2"
+      - "3"
       - "4"
-      - "5"
-      - "6"
   ganglia:
-    charm: "cs:trusty/ganglia-2"
+    charm: "cs:~bigdata-dev/xenial/ganglia-5"
     num_units: 1
     annotations:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "7"
+      - "5"
   ganglia-node:
-    charm: "cs:~bigdata-dev/xenial/ganglia-node-2"
+    charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:trusty/rsyslog-10"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "7"
+      - "5"
   rsyslog-forwarder-ha:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-2"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
       gui-x: "750"
       gui-y: "400"
@@ -49,28 +48,27 @@ series: xenial
 relations:
   - [spark, zookeeper]
   - ["ganglia-node:juju-info", "spark:juju-info"]
+  - ["ganglia-node:juju-info", "zookeeper:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
   - ["rsyslog-forwarder-ha:juju-info", "spark:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
+  "0":
+    constraints: "mem=7G root-disk=32G"
+    series: "xenial"
   "1":
-    constraints: "mem=7G"
+    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G"
+    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G"
+    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G"
-    series: "trusty"
+    constraints: "mem=3G root-disk=32G"
+    series: "xenial"
   "5":
     constraints: "mem=3G"
-    series: "trusty"
-  "6":
-    constraints: "mem=3G"
-    series: "trusty"
-  "7":
-    constraints: "mem=3G"
-    series: "trusty"
+    series: "xenial"

http://git-wip-us.apache.org/repos/asf/bigtop/blob/b6c1446b/bigtop-deploy/juju/spark-processing/bundle.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/bundle.yaml b/bigtop-deploy/juju/spark-processing/bundle.yaml
index d36ed43..c309b45 100644
--- a/bigtop-deploy/juju/spark-processing/bundle.yaml
+++ b/bigtop-deploy/juju/spark-processing/bundle.yaml
@@ -1,47 +1,46 @@
 services:
   spark:
-    charm: "cs:xenial/spark-2"
-    num_units: 3
+    charm: "cs:xenial/spark-15"
+    num_units: 2
     annotations:
       gui-x: "500"
       gui-y: "0"
     to:
+      - "0"
       - "1"
-      - "2"
-      - "3"
   zookeeper:
-    charm: "cs:~charmers/trusty/zookeeper-1"
+    charm: "cs:xenial/zookeeper-10"
     num_units: 3
     annotations:
       gui-x: "500"
       gui-y: "400"
     to:
+      - "2"
+      - "3"
       - "4"
-      - "5"
-      - "6"
   ganglia:
-    charm: "cs:trusty/ganglia-2"
+    charm: "cs:~bigdata-dev/xenial/ganglia-5"
     num_units: 1
     annotations:
       gui-x: "0"
       gui-y: "800"
     to:
-      - "7"
+      - "5"
   ganglia-node:
-    charm: "cs:~bigdata-dev/xenial/ganglia-node-2"
+    charm: "cs:~bigdata-dev/xenial/ganglia-node-6"
     annotations:
       gui-x: "250"
       gui-y: "400"
   rsyslog:
-    charm: "cs:trusty/rsyslog-10"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-6"
     num_units: 1
     annotations:
       gui-x: "1000"
       gui-y: "800"
     to:
-      - "7"
+      - "5"
   rsyslog-forwarder-ha:
-    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-2"
+    charm: "cs:~bigdata-dev/xenial/rsyslog-forwarder-ha-7"
     annotations:
       gui-x: "750"
       gui-y: "400"
@@ -49,28 +48,27 @@ series: xenial
 relations:
   - [spark, zookeeper]
   - ["ganglia-node:juju-info", "spark:juju-info"]
+  - ["ganglia-node:juju-info", "zookeeper:juju-info"]
   - ["ganglia:node", "ganglia-node:node"]
   - ["rsyslog-forwarder-ha:juju-info", "spark:juju-info"]
+  - ["rsyslog-forwarder-ha:juju-info", "zookeeper:juju-info"]
   - ["rsyslog:aggregator", "rsyslog-forwarder-ha:syslog"]
 machines:
+  "0":
+    constraints: "mem=7G root-disk=32G"
+    series: "xenial"
   "1":
-    constraints: "mem=7G"
+    constraints: "mem=7G root-disk=32G"
     series: "xenial"
   "2":
-    constraints: "mem=7G"
+    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "3":
-    constraints: "mem=7G"
+    constraints: "mem=3G root-disk=32G"
     series: "xenial"
   "4":
-    constraints: "mem=3G"
-    series: "trusty"
+    constraints: "mem=3G root-disk=32G"
+    series: "xenial"
   "5":
     constraints: "mem=3G"
-    series: "trusty"
-  "6":
-    constraints: "mem=3G"
-    series: "trusty"
-  "7":
-    constraints: "mem=3G"
-    series: "trusty"
+    series: "xenial"
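
The free-tier adjustment mentioned in the README note amounts to editing a local copy of `bundle.yaml` before `juju deploy /path/to/bundle.yaml`. A sketch of the relevant overrides, using the values suggested in the README (the machine ID shown is illustrative):

```yaml
# Illustrative free-tier overrides for a local copy of bundle.yaml:
# shrink zookeeper to one unit and cap machine memory at 3G.
services:
  zookeeper:
    num_units: 1
machines:
  "0":
    constraints: "mem=3G"
    series: "xenial"
```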

http://git-wip-us.apache.org/repos/asf/bigtop/blob/b6c1446b/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/tests/01-bundle.py b/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
index 379778c..fbb4ebf 100755
--- a/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
+++ b/bigtop-deploy/juju/spark-processing/tests/01-bundle.py
@@ -15,10 +15,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import amulet
 import os
 import unittest
-
-import amulet
 import yaml
 
 
@@ -31,22 +30,54 @@ class TestBundle(unittest.TestCase):
         with open(cls.bundle_file) as f:
             bun = f.read()
         bundle = yaml.safe_load(bun)
+
+        # NB: strip machine ('to') placement out. amulet loses our machine spec
+        # somewhere between yaml and json; without that spec, charms specifying
+        # machine placement will not deploy. This is ok for now because all
+        # charms in this bundle are using 'reset: false' so we'll already
+        # have our deployment just the way we want it by the time this test
+        # runs. However, it's bad. Remove once this is fixed:
+        #  https://github.com/juju/amulet/issues/148
+        for service, service_config in bundle['services'].items():
+            if 'to' in service_config:
+                del service_config['to']
+
         cls.d.load(bundle)
-        cls.d.setup(timeout=1800)
-        cls.d.sentry.wait_for_messages({'spark': 'ready (standalone - HA)'}, timeout=1800)
+        cls.d.setup(timeout=3600)
+        cls.d.sentry.wait_for_messages({'spark': 'ready (standalone - HA)'}, timeout=3600)
         cls.spark = cls.d.sentry['spark'][0]
+        cls.zookeeper = cls.d.sentry['zookeeper'][0]
 
     def test_components(self):
         """
         Confirm that all of the required components are up and running.
         """
-        spark, retcode = self.spark.run("pgrep -a java")
+        spark, rc = self.spark.run("pgrep -a java")
+        zk, rc = self.zookeeper.run("pgrep -a java")
 
-        assert 'spark' in spark, 'Spark should be running on spark'
+        assert 'Master' in spark, "Spark Master should be running"
+        assert 'QuorumPeerMain' in zk, "Zookeeper QuorumPeerMain should be running"
 
     def test_spark(self):
-        output, retcode = self.spark.run("su ubuntu -c 'bash -lc /home/ubuntu/sparkpi.sh 2>&1'")
-        assert 'Pi is roughly' in output, 'SparkPI test failed: %s' % output
+        """
+        Validates Spark with a simple sparkpi test.
+        """
+        uuid = self.spark.run_action('smoke-test')
+        result = self.d.action_fetch(uuid, timeout=600, full_output=True)
+        # action status=completed on success
+        if (result['status'] != "completed"):
+            self.fail('Spark smoke-test did not complete: %s' % result)
+
+    def test_zookeeper(self):
+        """
+        Validates Zookeeper using the Bigtop 'zookeeper' smoke test.
+        """
+        uuid = self.zookeeper.run_action('smoke-test')
+        # 'zookeeper' smoke takes a while (bigtop tests download lots of stuff)
+        result = self.d.action_fetch(uuid, timeout=1800, full_output=True)
+        # action status=completed on success
+        if (result['status'] != "completed"):
+            self.fail('Zookeeper smoke-test did not complete: %s' % result)
 
 
 if __name__ == '__main__':

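The amulet workaround added in the test above (stripping machine `to` placement from the parsed bundle) can be exercised in isolation. A sketch, assuming a bundle dict of the same shape as the bundle YAML files in this commit (the function name and trimmed dict are illustrative):

```python
# Sketch of the workaround in 01-bundle.py: strip machine ('to') placement
# from a parsed bundle before handing it to the deployer, since amulet
# loses the machine spec (https://github.com/juju/amulet/issues/148).

def strip_placement(bundle):
    """Remove 'to' keys so charms deploy without machine placement."""
    for service_config in bundle.get('services', {}).values():
        service_config.pop('to', None)
    return bundle

# Trimmed stand-in for the parsed bundle.yaml.
bundle = {
    'services': {
        'spark': {'charm': 'cs:xenial/spark-15', 'num_units': 2, 'to': ['0', '1']},
        'ganglia-node': {'charm': 'cs:~bigdata-dev/xenial/ganglia-node-6'},
    },
}

strip_placement(bundle)
print('to' in bundle['services']['spark'])  # False
```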
http://git-wip-us.apache.org/repos/asf/bigtop/blob/b6c1446b/bigtop-deploy/juju/spark-processing/tests/tests.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/spark-processing/tests/tests.yaml b/bigtop-deploy/juju/spark-processing/tests/tests.yaml
index 1ec8b82..84f78d7 100644
--- a/bigtop-deploy/juju/spark-processing/tests/tests.yaml
+++ b/bigtop-deploy/juju/spark-processing/tests/tests.yaml
@@ -1,2 +1,7 @@
+reset: false
+deployment_timeout: 3600
+sources:
+  - 'ppa:juju/stable'
 packages:
   - amulet
+  - python3-yaml
