Repository: bigtop
Updated Branches:
  refs/heads/master 89d3ac480 -> 312c006e2


BIGTOP-2548: Refresh charms for Juju 2.0 and Xenial (closes #148)

Signed-off-by: Kevin W Monroe <kevin.mon...@canonical.com>


Project: http://git-wip-us.apache.org/repos/asf/bigtop/repo
Commit: http://git-wip-us.apache.org/repos/asf/bigtop/commit/312c006e
Tree: http://git-wip-us.apache.org/repos/asf/bigtop/tree/312c006e
Diff: http://git-wip-us.apache.org/repos/asf/bigtop/diff/312c006e

Branch: refs/heads/master
Commit: 312c006e227aa42e1ef3bce780a3258ec9b0299b
Parents: 89d3ac4
Author: Kevin W Monroe <kevin.mon...@canonical.com>
Authored: Wed Oct 5 17:43:32 2016 +0000
Committer: Kevin W Monroe <kevin.mon...@canonical.com>
Committed: Wed Oct 12 09:29:14 2016 -0500

----------------------------------------------------------------------
 bigtop-packages/src/charm/README.md             |  51 ++---
 .../hadoop/layer-hadoop-namenode/README.md      | 110 +++++++----
 .../hadoop/layer-hadoop-namenode/actions.yaml   |   2 +-
 .../layer-hadoop-namenode/actions/smoke-test    |   2 +-
 .../hadoop/layer-hadoop-namenode/layer.yaml     |   2 +-
 .../hadoop/layer-hadoop-namenode/metadata.yaml  |   6 +-
 .../layer-hadoop-namenode/reactive/namenode.py  |  11 +-
 .../tests/01-basic-deployment.py                |   2 +-
 .../charm/hadoop/layer-hadoop-plugin/README.md  | 100 ++++++----
 .../hadoop/layer-hadoop-plugin/actions.yaml     |   2 +
 .../layer-hadoop-plugin/actions/smoke-test      |  62 ++++++
 .../charm/hadoop/layer-hadoop-plugin/layer.yaml |  10 +-
 .../hadoop/layer-hadoop-plugin/metadata.yaml    |   4 +-
 .../reactive/apache_bigtop_plugin.py            |   1 +
 .../tests/01-basic-deployment.py                |   2 +-
 .../layer-hadoop-resourcemanager/README.md      | 196 +++++++++++--------
 .../layer-hadoop-resourcemanager/actions.yaml   |   3 +-
 .../actions/smoke-test                          |  82 +++-----
 .../layer-hadoop-resourcemanager/layer.yaml     |   2 +-
 .../layer-hadoop-resourcemanager/metadata.yaml  |   6 +-
 .../reactive/resourcemanager.py                 |  32 ++-
 .../tests/01-basic-deployment.py                |   2 +-
 .../charm/hadoop/layer-hadoop-slave/README.md   | 107 +++++-----
 .../hadoop/layer-hadoop-slave/actions.yaml      |   3 +
 .../layer-hadoop-slave/actions/smoke-test       |  49 +++++
 .../charm/hadoop/layer-hadoop-slave/layer.yaml  |   6 +-
 .../hadoop/layer-hadoop-slave/metadata.yaml     |   4 +-
 .../reactive/hadoop_status.py                   |   3 +-
 .../tests/01-basic-deployment.py                |   2 +-
 29 files changed, 545 insertions(+), 319 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/README.md b/bigtop-packages/src/charm/README.md
index 1290d4b..04213ad 100644
--- a/bigtop-packages/src/charm/README.md
+++ b/bigtop-packages/src/charm/README.md
@@ -18,37 +18,41 @@
 
 ## Overview
 
-These are the charm layers used to build Juju charms for deploying Bigtop
-components.  The charms are also published to the [Juju charm store][] and
-can be deployed directly from there using [bundles][], or they can be
-built from these layers and deployed locally.
+Juju Charms allow you to deploy, configure, and connect an Apache Bigtop cluster
+on any supported cloud, which can be scaled to meet workload demands. You can
+also easily connect other, non-Bigtop components from the [Juju charm store][]
+that support common interfaces.
 
-Charms allow you to deploy, configure, and connect a Apache Bigtop cluster
-on any supported cloud, which can be easily scaled to meet workload demands.
-You can also easily connect other, non-Bigtop components from the
-[Juju charm store][] that support common interfaces.
+This source tree contains the charm layers used to build charms for deploying
+Bigtop components.  Built charms are published to the [Juju charm store][]
+and can be deployed directly from there, either individually or with
+[bundles][]. They can also be built from these layers and deployed locally.
 
+For the remainder of this guide, a working Juju installation is assumed to be
+present. If Juju is not yet set up, please follow the [getting-started][]
+instructions prior to deploying locally built charms and bundles.
 
 [Juju charm store]: https://jujucharms.com/
-[bundles]: https://jujucharms.com/u/bigdata-dev/hadoop-processing
+[bundles]: https://jujucharms.com/hadoop-processing
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
 
 
 ## Building the Bigtop Charms
 
-To build these charms, you will need [charm-tools][].  You should also read
-over the developer [Getting Started][] page for an overview of charms and
-building them.  Then, in any of the charm layer directories, use `charm build`.
+To build these charms, you will need [charm-tools][]. You should also read
+over the developer [Getting Started][] page for an overview of developing and
+building charms. Then, in any of the charm layer directories, use `charm build`.
 For example:
 
     export JUJU_REPOSITORY=$HOME/charms
-    mkdir $HOME/charms
+    mkdir $JUJU_REPOSITORY
 
     cd bigtop-packages/src/charms/hadoop/layer-hadoop-namenode
     charm build
 
 This will build the NameNode charm, pulling in the appropriate base and
 interface layers from [interfaces.juju.solutions][].  You can get local copies
-of those layers as well using `charm pull-source`:
+of those layers as well by using `charm pull-source`:
 
     export LAYER_PATH=$HOME/layers
     export INTERFACE_PATH=$HOME/interfaces
@@ -57,19 +61,22 @@ of those layers as well using `charm pull-source`:
     charm pull-source layer:apache-bigtop-base
     charm pull-source interface:dfs
 
-You can then deploy the locally built charms individually:
+You can deploy the locally built charms individually, for example:
 
-    juju deploy local:trusty/hadoop-namenode
+    juju deploy $JUJU_REPOSITORY/xenial/hadoop-namenode
 
-You can also use the local version of a bundle:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is: `juju deploy local:xenial/hadoop-namenode`.
 
-    juju deploy bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
+You can also deploy the local version of a bundle:
 
-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the local bundle.
+    juju deploy ./bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
 
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+./bigtop-deploy/juju/hadoop-processing/bundle-local.yaml`.
 
 [charm-tools]: https://jujucharms.com/docs/stable/tools-charm-tools
-[Getting Started]: https://jujucharms.com/docs/devel/developer-getting-started
+[Getting Started]: https://jujucharms.com/docs/stable/developer-getting-started
 [interfaces.juju.solutions]: http://interfaces.juju.solutions/
-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+[juju-quickstart]: https://launchpad.net/juju-quickstart

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
index bf46bf7..621a1e8 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/README.md
@@ -14,93 +14,119 @@
   See the License for the specific language governing permissions and
   limitations under the License.
 -->
-## Overview
+# Overview
 
 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
 using a simple programming model.
 
-This charm deploys the NameNode component of the Apache Bigtop platform
+This charm deploys the NameNode component of the [Apache Bigtop][] platform
 to provide HDFS master resources.
 
+[Apache Bigtop]: http://bigtop.apache.org/
 
-## Usage
 
-This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
-For example:
+# Deploying
 
-    juju deploy hadoop-processing
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the [getting-started][] instructions prior to deploying this
+charm.
 
-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
+This charm is intended to be deployed via one of the [apache bigtop bundles][].
+For example:
 
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
+    juju deploy hadoop-processing
 
-You can also manually load and run map-reduce jobs via the plugin charm
-included in the bundles linked above:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+hadoop-processing`.
 
-    juju scp my-job.jar plugin/0:
-    juju ssh plugin/0
-    hadoop jar my-job.jar
+This will deploy an Apache Bigtop cluster with this charm acting as the
+NameNode. More information about this deployment can be found in the
+[bundle readme](https://jujucharms.com/hadoop-processing/).
 
+## Network-Restricted Environments
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate proxy and/or
+mirror options. See [Configuring Models][] for more information.
 
-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
+[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles
+[juju-quickstart]: https://launchpad.net/juju-quickstart
+[Configuring Models]: https://jujucharms.com/docs/stable/models-config
 
 
-## Status and Smoke Test
+# Verifying
 
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:
 
-    juju status --format=tabular
+    juju status
 
 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:
 
-    watch -n 0.5 juju status --format=tabular
+    watch -n 2 juju status
 
-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready with datanodes.
 
-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. Run the action as follows:
 
-After a few seconds or so, you can check the results of the smoke test:
+    juju run-action namenode/0 smoke-test
 
-    juju action status
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do namenode/0 smoke-test`.
 
-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not.  You can get more information on why it failed
-via:
+Watch the progress of the smoke test actions with:
 
-    juju action fetch <action-id>
+    watch -n 2 juju show-action-status
 
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.
 
-## Deploying in Network-Restricted Environments
+Eventually, the action should settle to `status: completed`.  If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:
 
-Charms can be deployed in environments with limited network access. To deploy
-in this environment, you will need a local mirror to serve required packages.
+    juju show-action-output <action-id>
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.
+
+## Utilities
+This charm includes Hadoop command line and web utilities that can be used
+to verify information about the cluster.
+
+Show the dfsadmin report on the command line with the following:
+
+    juju run --application namenode "su hdfs -c 'hdfs dfsadmin -report'"
+
+To access the HDFS web console, find the `PUBLIC-ADDRESS` of the
+namenode application and expose it:
 
+    juju status namenode
+    juju expose namenode
 
-### Mirroring Packages
+The web interface will be available at the following URL:
 
-You can setup a local mirror for apt packages using squid-deb-proxy.
-For instructions on configuring juju to use this, see the
-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
+        http://NAMENODE_PUBLIC_IP:50070
 
 
-## Contact Information
+# Contact Information
 
 - <bigd...@lists.ubuntu.com>
 
 
-## Hadoop
+# Resources
 
 - [Apache Bigtop](http://bigtop.apache.org/) home page
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
index ee93b4c..c2d65ae 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions.yaml
@@ -1,2 +1,2 @@
 smoke-test:
-    description: Verify that HDFS is working by creating and removing a test directory.
+    description: Run a simple HDFS smoke test.

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
index 58ffce2..391b626 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/actions/smoke-test
@@ -22,7 +22,7 @@ from jujubigdata.utils import run_as
 from charms.reactive import is_state
 
 if not is_state('apache-bigtop-namenode.ready'):
-    hookenv.action_fail('NameNode service not yet ready')
+    hookenv.action_fail('Charm is not yet ready')
 
 
 # verify the hdfs-test directory does not already exist

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
index 332a6e3..3fca827 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/layer.yaml
@@ -1,4 +1,4 @@
-repo: g...@github.com:juju-solutions/layer-hadoop-namenode.git
+repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode
 includes:
   - 'layer:apache-bigtop-base'
   - 'interface:dfs'

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
index ab51ce4..a358a6d 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/metadata.yaml
@@ -1,12 +1,12 @@
 name: hadoop-namenode
-summary: HDFS master (NameNode) for Apache Bigtop platform
+summary: HDFS master (NameNode) from Apache Bigtop
 maintainer: Juju Big Data <bigd...@lists.ubuntu.com>
 description: >
   Hadoop is a software platform that lets one easily write and
   run applications that process vast amounts of data.
 
-  This charm manages the HDFS master node (NameNode).
-tags: ["applications", "bigdata", "bigtop", "hadoop", "apache"]
+  This charm provides the HDFS master node (NameNode).
+tags: []
 provides:
   namenode:
     interface: dfs

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
index c39a609..c8a71da 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/reactive/namenode.py
@@ -15,7 +15,9 @@
 # limitations under the License.
 
 from charms.reactive import is_state, remove_state, set_state, when, when_not
-from charms.layer.apache_bigtop_base import Bigtop, get_layer_opts, get_fqdn
+from charms.layer.apache_bigtop_base import (
+    Bigtop, get_hadoop_version, get_layer_opts, get_fqdn
+)
 from charmhelpers.core import hookenv, host
 from jujubigdata import utils
 from path import Path
@@ -50,6 +52,8 @@ def send_early_install_info(remote):
 def install_namenode():
     hookenv.status_set('maintenance', 'installing namenode')
     bigtop = Bigtop()
+    hdfs_port = get_layer_opts().port('namenode')
+    webhdfs_port = get_layer_opts().port('nn_webapp_http')
     bigtop.render_site_yaml(
         hosts={
             'namenode': get_fqdn(),
@@ -58,6 +62,10 @@ def install_namenode():
             'namenode',
             'mapred-app',
         ],
+        overrides={
+            'hadoop::common_hdfs::hadoop_namenode_port': hdfs_port,
+            'hadoop::common_hdfs::hadoop_namenode_http_port': webhdfs_port,
+        }
     )
     bigtop.trigger_puppet()
 
@@ -96,6 +104,7 @@ def start_namenode():
     for port in get_layer_opts().exposed_ports('namenode'):
         hookenv.open_port(port)
     set_state('apache-bigtop-namenode.started')
+    hookenv.application_version_set(get_hadoop_version())
     hookenv.status_set('maintenance', 'namenode started')
 
 

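The hunk above threads the namenode and webhdfs ports into the Puppet site data via the new `overrides` argument to `render_site_yaml`. As a rough, self-contained sketch of what that merge amounts to (only the two `hadoop::common_hdfs` keys come from the diff; the function body and the other key names here are illustrative stand-ins, not Bigtop's actual implementation):

```python
# Illustrative stand-in for Bigtop's render_site_yaml: merge hosts,
# roles, and per-key overrides into one site dict. The real method
# renders YAML consumed by Bigtop's Puppet run.

def render_site(hosts, roles, overrides=None):
    site = {
        'namenode_host': hosts.get('namenode'),  # hypothetical key name
        'roles': roles,
    }
    # Overrides are applied last, mirroring how the explicit port
    # settings in the hunk supersede the defaults.
    site.update(overrides or {})
    return site

site = render_site(
    hosts={'namenode': 'nn-0.example.com'},
    roles=['namenode', 'mapred-app'],
    overrides={
        'hadoop::common_hdfs::hadoop_namenode_port': 8020,
        'hadoop::common_hdfs::hadoop_namenode_http_port': 50070,
    },
)
print(site['hadoop::common_hdfs::hadoop_namenode_port'])  # 8020
```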
http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
index 15c00c9..38aa45b 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-namenode/tests/01-basic-deployment.py
@@ -28,7 +28,7 @@ class TestDeploy(unittest.TestCase):
     """
 
     def test_deploy(self):
-        self.d = amulet.Deployment(series='trusty')
+        self.d = amulet.Deployment(series='xenial')
         self.d.add('namenode', 'hadoop-namenode')
         self.d.setup(timeout=900)
         self.d.sentry.wait(timeout=1800)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
index cbea7f0..405c08a 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/README.md
@@ -14,79 +14,109 @@
   See the License for the specific language governing permissions and
   limitations under the License.
 -->
-## Overview
+# Overview
 
 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
 using a simple programming model.
 
-This charm facilitates communication between core Apache Bigtop cluster
-components and workload charms.
+This charm facilitates communication between Hadoop components of an
+[Apache Bigtop][] cluster and workload applications.
 
+[Apache Bigtop]: http://bigtop.apache.org/
 
-## Usage
 
-This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
-For example:
+# Deploying
 
-    juju deploy hadoop-processing
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the [getting-started][] instructions prior to deploying this
+charm.
 
-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
+This charm is intended to be deployed via one of the [apache bigtop bundles][].
+For example:
 
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
+    juju deploy hadoop-processing
 
-You could extend this deployment, for example, to analyze data using Apache Pig.
-Simply deploy Pig and attach it to the same plugin:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+hadoop-processing`.
 
-    juju deploy apache-pig pig
-    juju add-relation plugin pig
+This will deploy an Apache Bigtop cluster with a client unit preconfigured to
+work with the cluster. More information about this deployment can be found in the
+[bundle readme](https://jujucharms.com/hadoop-processing/).
 
+## Network-Restricted Environments
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate proxy and/or
+mirror options. See [Configuring Models][] for more information.
 
-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
+[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles
+[juju-quickstart]: https://launchpad.net/juju-quickstart
+[Configuring Models]: https://jujucharms.com/docs/stable/models-config
 
 
-## Status and Smoke Test
+# Verifying
 
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:
 
-    juju status --format=tabular
+    juju status
 
 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:
 
-    watch -n 0.5 juju status --format=tabular
+    watch -n 2 juju status
+
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready with hdfs and/or yarn.
+
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. Run the action as follows:
+
+    juju run-action plugin/0 smoke-test
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do plugin/0 smoke-test`.
+
+Watch the progress of the smoke test actions with:
+
+    watch -n 2 juju show-action-status
+
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.
 
-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+Eventually, the action should settle to `status: completed`.  If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:
 
-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+    juju show-action-output <action-id>
 
-After a few seconds or so, you can check the results of the smoke test:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.
 
-    juju action status
+## Utilities
+This charm includes Hadoop command line utilities that can be used
+to verify information about the cluster.
 
-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not.  You can get more information on why it failed
-via:
+Show the dfsadmin report on the command line with the following:
 
-    juju action fetch <action-id>
+    juju run --application plugin "su hdfs -c 'hdfs dfsadmin -report'"
 
 
-## Contact Information
+# Contact Information
 
 - <bigd...@lists.ubuntu.com>
 
 
-## Resources
+# Resources
 
 - [Apache Bigtop](http://bigtop.apache.org/) home page
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml
new file mode 100644
index 0000000..c2d65ae
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions.yaml
@@ -0,0 +1,2 @@
+smoke-test:
+    description: Run a simple HDFS smoke test.

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test
new file mode 100755
index 0000000..65ba07c
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/actions/smoke-test
@@ -0,0 +1,62 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+
+from charmhelpers.core import hookenv
+from jujubigdata.utils import run_as
+from charms.reactive import is_state
+
+if not is_state('apache-bigtop-plugin.hdfs.ready'):
+    hookenv.action_fail('Charm is not yet ready')
+
+
+# verify the hdfs-test directory does not already exist
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+    output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+    if 'hdfs-test' in output:
+        hookenv.action_fail('Unable to remove existing hdfs-test directory')
+        sys.exit()
+
+# create the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-mkdir', '-p', '/tmp/hdfs-test')
+run_as('ubuntu', 'hdfs', 'dfs', '-chmod', '-R', '777', '/tmp/hdfs-test')
+
+# verify the newly created hdfs-test subdirectory exists
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+for line in output.split('\n'):
+    if '/tmp/hdfs-test' in line:
+        if 'ubuntu' not in line or 'drwxrwxrwx' not in line:
+            hookenv.action_fail('Permissions incorrect for hdfs-test directory')
+            sys.exit()
+        break
+else:
+    hookenv.action_fail('Unable to create hdfs-test directory')
+    sys.exit()
+
+# remove the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+
+# verify the hdfs-test subdirectory has been removed
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    hookenv.action_fail('Unable to remove hdfs-test directory')
+    sys.exit()
+
+hookenv.action_set({'outcome': 'success'})

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
index 5ddc2c9..ceedad7 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/layer.yaml
@@ -1,8 +1,12 @@
-repo: g...@github.com:juju-solutions/layer-hadoop-plugin.git
-includes: ['layer:apache-bigtop-base', 'interface:hadoop-plugin', 'interface:dfs', 'interface:mapred']
+repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin
+includes:
+  - 'layer:apache-bigtop-base'
+  - 'interface:hadoop-plugin'
+  - 'interface:dfs'
+  - 'interface:mapred'
 options:
   basic:
     use_venv: true
 metadata:
   deletes:
-    - requires.java
+    - provides.java

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
index a5fd453..4df86f1 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/metadata.yaml
@@ -1,5 +1,5 @@
 name: hadoop-plugin
-summary: Simplified connection point for Apache Bigtop platform
+summary: Facilitates communication with an Apache Bigtop Hadoop cluster
 maintainer: Juju Big Data <bigd...@lists.ubuntu.com>
 description: >
   Hadoop is a software platform that lets one easily write and
@@ -8,7 +8,7 @@ description: >
   This charm provides a simplified connection point for client / workload
  services which require access to Apache Hadoop. This connection is established
   via the Apache Bigtop gateway.
-tags: ["applications", "bigdata", "hadoop", "apache"]
+tags: []
 subordinate: true
 requires:
   namenode:

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
index e5b1275..e680002 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/reactive/apache_bigtop_plugin.py
@@ -42,6 +42,7 @@ def install_hadoop_client_hdfs(principal, namenode):
         bigtop.render_site_yaml(hosts=hosts, roles='hadoop-client')
         bigtop.trigger_puppet()
         set_state('apache-bigtop-plugin.hdfs.installed')
+        hookenv.application_version_set(get_hadoop_version())
         hookenv.status_set('maintenance', 'plugin (hdfs) installed')
     else:
         hookenv.status_set('waiting', 'waiting for namenode fqdn')

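The hunk above adds a `hookenv.application_version_set(get_hadoop_version())` call so the installed Hadoop version shows up in `juju status`. As a rough sketch only (the real `get_hadoop_version` helper lives in the apache-bigtop-base layer and is not shown in this diff), the version string can be pulled from `hadoop version` output like this:

```python
import re

def parse_hadoop_version(version_output):
    """Extract the release number from `hadoop version` output.

    The first line of that output is typically 'Hadoop 2.7.1'; return
    the bare version string, or '' if no version line is found.
    """
    match = re.search(r'^Hadoop (\S+)', version_output, re.MULTILINE)
    return match.group(1) if match else ''

# Example: feed in the first lines that `hadoop version` prints.
print(parse_hadoop_version('Hadoop 2.7.1\nSubversion ...\n'))  # 2.7.1
```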
http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
index 512630d..815f9fb 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-plugin/tests/01-basic-deployment.py
@@ -29,7 +29,7 @@ class TestDeploy(unittest.TestCase):
     """
 
     def test_deploy(self):
-        self.d = amulet.Deployment(series='trusty')
+        self.d = amulet.Deployment(series='xenial')
         self.d.load({
             'services': {
                 'client': {'charm': 'hadoop-client'},

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
index 0250881..430cc97 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/README.md
@@ -14,142 +14,170 @@
   See the License for the specific language governing permissions and
   limitations under the License.
 -->
-## Overview
+# Overview
 
 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
 using a simple programming model.
 
-This charm deploys the ResourceManager component of the Apache Bigtop platform
-to provide YARN master resources.
+This charm deploys the ResourceManager component of the [Apache Bigtop][]
+platform to provide YARN master resources.
 
+[Apache Bigtop]: http://bigtop.apache.org/
 
-## Usage
 
-This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
-For example:
+# Deploying
 
-    juju deploy hadoop-processing
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the [getting-started][] instructions prior to deploying this
+charm.
 
-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
+This charm is intended to be deployed via one of the [apache bigtop bundles][].
+For example:
 
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
+    juju deploy hadoop-processing
 
-You can also manually load and run map-reduce jobs via the plugin charm
-included in the bundles linked above:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+hadoop-processing`.
 
-    juju scp my-job.jar plugin/0:
-    juju ssh plugin/0
-    hadoop jar my-job.jar
+This will deploy an Apache Bigtop cluster with this charm acting as the
+ResourceManager. More information about this deployment can be found in the
+[bundle readme](https://jujucharms.com/hadoop-processing/).
 
+## Network-Restricted Environments
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate proxy and/or
+mirror options. See [Configuring Models][] for more information.
 
-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
+[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles
+[juju-quickstart]: https://launchpad.net/juju-quickstart
+[Configuring Models]: https://jujucharms.com/docs/stable/models-config
 
 
-## Status and Smoke Test
+# Verifying
 
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:
 
-    juju status --format=tabular
+    juju status
 
 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:
 
-    watch -n 0.5 juju status --format=tabular
+    watch -n 2 juju status
 
-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready with nodemanagers.
 
-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. This action executes the 'yarn'
+smoke tests provided by Apache Bigtop and may take up to
+10 minutes to complete. Run the action as follows:
 
-After a few seconds or so, you can check the results of the smoke test:
+    juju run-action resourcemanager/0 smoke-test
 
-    juju action status
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do resourcemanager/0 smoke-test`.
 
-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not.  You can get more information on why it failed
-via:
+Watch the progress of the smoke test actions with:
 
-    juju action fetch <action-id>
+    watch -n 2 juju show-action-status
 
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.
 
-## Benchmarking
+Eventually, the action should settle to `status: completed`.  If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:
 
-This charm provides several benchmarks to gauge the performance of your
-environment.
+    juju show-action-output <action-id>
 
-The easiest way to run the benchmarks on this service is to relate it to the
-[Benchmark GUI][].  You will likely also want to relate it to the
-[Benchmark Collector][] to have machine-level information collected during the
-benchmark, for a more complete picture of how the machine performed.
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.
 
-[Benchmark GUI]: https://jujucharms.com/benchmark-gui/
-[Benchmark Collector]: https://jujucharms.com/benchmark-collector/
+## Utilities
+This charm includes Hadoop command line and web utilities that can be used
+to verify information about the cluster.
 
-However, each benchmark is also an action that can be called manually:
+Show the running nodes on the command line with the following:
 
-        $ juju action do resourcemanager/0 nnbench
-        Action queued with id: 55887b40-116c-4020-8b35-1e28a54cc622
-        $ juju action fetch --wait 0 55887b40-116c-4020-8b35-1e28a54cc622
+    juju run --application resourcemanager "su yarn -c 'yarn node -list'"
 
-        results:
-          meta:
-            composite:
-              direction: asc
-              units: secs
-              value: "128"
-            start: 2016-02-04T14:55:39Z
-            stop: 2016-02-04T14:57:47Z
-          results:
-            raw: '{"BAD_ID": "0", "FILE: Number of read operations": "0", "Reduce input groups":
-              "8", "Reduce input records": "95", "Map output bytes": "1823", "Map input records":
-              "12", "Combine input records": "0", "HDFS: Number of bytes read": "18635", "FILE:
-              Number of bytes written": "32999982", "HDFS: Number of write operations": "330",
-              "Combine output records": "0", "Total committed heap usage (bytes)": "3144749056",
-              "Bytes Written": "164", "WRONG_LENGTH": "0", "Failed Shuffles": "0", "FILE:
-              Number of bytes read": "27879457", "WRONG_MAP": "0", "Spilled Records": "190",
-              "Merged Map outputs": "72", "HDFS: Number of large read operations": "0", "Reduce
-              shuffle bytes": "2445", "FILE: Number of large read operations": "0", "Map output
-              materialized bytes": "2445", "IO_ERROR": "0", "CONNECTION": "0", "HDFS: Number
-              of read operations": "567", "Map output records": "95", "Reduce output records":
-              "8", "WRONG_REDUCE": "0", "HDFS: Number of bytes written": "27412", "GC time
-              elapsed (ms)": "603", "Input split bytes": "1610", "Shuffled Maps ": "72", "FILE:
-              Number of write operations": "0", "Bytes Read": "1490"}'
-        status: completed
-        timing:
-          completed: 2016-02-04 14:57:48 +0000 UTC
-          enqueued: 2016-02-04 14:55:14 +0000 UTC
-          started: 2016-02-04 14:55:27 +0000 UTC
+To access the Resource Manager web consoles, find the `PUBLIC-ADDRESS` of the
+resourcemanager application and expose it:
 
+    juju status resourcemanager
+    juju expose resourcemanager
 
-## Deploying in Network-Restricted Environments
+The YARN and Job History web interfaces will be available at the following URLs:
 
-Charms can be deployed in environments with limited network access. To deploy
-in this environment, you will need a local mirror to serve required packages.
+    http://RESOURCEMANAGER_PUBLIC_IP:8088
+    http://RESOURCEMANAGER_PUBLIC_IP:19888
+
+
+# Benchmarking
+
+This charm provides several benchmarks to gauge the performance of the
+cluster. Each benchmark is an action that can be run with `juju run-action`:
 
+    $ juju actions resourcemanager
+    ACTION      DESCRIPTION
+    mrbench     Mapreduce benchmark for small jobs
+    nnbench     Load test the NameNode hardware and configuration
+    smoke-test  Run an Apache Bigtop smoke test.
+    teragen     Generate data with teragen
+    terasort    Runs teragen to generate sample data, and then runs terasort to sort that data
+    testdfsio   DFS IO Testing
 
-### Mirroring Packages
+    $ juju run-action resourcemanager/0 nnbench
+    Action queued with id: 55887b40-116c-4020-8b35-1e28a54cc622
 
-You can setup a local mirror for apt packages using squid-deb-proxy.
-For instructions on configuring juju to use this, see the
-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
+    $ juju show-action-output 55887b40-116c-4020-8b35-1e28a54cc622
+    results:
+      meta:
+        composite:
+          direction: asc
+          units: secs
+          value: "128"
+        start: 2016-02-04T14:55:39Z
+        stop: 2016-02-04T14:57:47Z
+      results:
+        raw: '{"BAD_ID": "0", "FILE: Number of read operations": "0", "Reduce input groups":
+          "8", "Reduce input records": "95", "Map output bytes": "1823", "Map input records":
+          "12", "Combine input records": "0", "HDFS: Number of bytes read": "18635", "FILE:
+          Number of bytes written": "32999982", "HDFS: Number of write operations": "330",
+          "Combine output records": "0", "Total committed heap usage (bytes)": "3144749056",
+          "Bytes Written": "164", "WRONG_LENGTH": "0", "Failed Shuffles": "0", "FILE:
+          Number of bytes read": "27879457", "WRONG_MAP": "0", "Spilled Records": "190",
+          "Merged Map outputs": "72", "HDFS: Number of large read operations": "0", "Reduce
+          shuffle bytes": "2445", "FILE: Number of large read operations": "0", "Map output
+          materialized bytes": "2445", "IO_ERROR": "0", "CONNECTION": "0", "HDFS: Number
+          of read operations": "567", "Map output records": "95", "Reduce output records":
+          "8", "WRONG_REDUCE": "0", "HDFS: Number of bytes written": "27412", "GC time
+          elapsed (ms)": "603", "Input split bytes": "1610", "Shuffled Maps ": "72", "FILE:
+          Number of write operations": "0", "Bytes Read": "1490"}'
+    status: completed
+    timing:
+      completed: 2016-02-04 14:57:48 +0000 UTC
+      enqueued: 2016-02-04 14:55:14 +0000 UTC
+      started: 2016-02-04 14:55:27 +0000 UTC
 
 
-## Contact Information
+# Contact Information
 
 - <bigd...@lists.ubuntu.com>
 
 
-## Hadoop
+# Resources
 
 - [Apache Bigtop](http://bigtop.apache.org/) home page
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml
index da4fc08..77a644b 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions.yaml
@@ -1,6 +1,5 @@
 smoke-test:
-  description: >
-    Verify that YARN is working as expected by running a small (1MB) terasort.
+    description: Run an Apache Bigtop smoke test.
 mrbench:
     description: Mapreduce benchmark for small jobs
     params:

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test
index 9ef33a9..3280e79 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/smoke-test
@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env python3
 
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,66 +15,34 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-set -ex
+import sys
+sys.path.append('lib')
 
-if ! charms.reactive is_state 'apache-bigtop-resourcemanager.ready'; then
-    action-fail 'ResourceManager not yet ready'
-    exit
-fi
+from charmhelpers.core import hookenv
+from charms.layer.apache_bigtop_base import Bigtop
+from charms.reactive import is_state
 
-IN_DIR='/tmp/smoke_test_in'
-OUT_DIR='/tmp/smoke_test_out'
-SIZE=10000
-OPTIONS=''
 
-MAPS=1
-REDUCES=1
-NUMTASKS=1
-COMPRESSION='LocalDefault'
+def fail(msg, output=None):
+    if output:
+        hookenv.action_set({'output': output})
+    hookenv.action_fail(msg)
+    sys.exit()
 
-OPTIONS="${OPTIONS} -D mapreduce.job.maps=${MAPS}"
-OPTIONS="${OPTIONS} -D mapreduce.job.reduces=${REDUCES}"
-OPTIONS="${OPTIONS} -D mapreduce.job.jvm.numtasks=${NUMTASKS}"
-if [ $COMPRESSION == 'Disable' ] ; then
-        OPTIONS="${OPTIONS} -D mapreduce.map.output.compress=false"
-elif [ $COMPRESSION == 'LocalDefault' ] ; then
-        OPTIONS="${OPTIONS}"
-else
-        OPTIONS="${OPTIONS} -D mapreduce.map.output.compress=true -D mapred.map.output.compress.codec=org.apache.hadoop.io.compress.${COMPRESSION}Codec"
-fi
+if not is_state('apache-bigtop-resourcemanager.ready'):
+    fail('Charm is not yet ready to run the Bigtop smoke test(s)')
 
-# create dir to store results
-RUN=`date +%s`
-RESULT_DIR=/opt/terasort-results
-RESULT_LOG=${RESULT_DIR}/${RUN}.$$.log
-mkdir -p ${RESULT_DIR}
-chown -R hdfs ${RESULT_DIR}
+# Bigtop smoke test components
+smoke_components = ['yarn']
 
-# clean out any previous data (must be run as the hdfs user)
-su hdfs << EOF
-if hadoop fs -stat ${IN_DIR} &> /dev/null; then
-    hadoop fs -rm -r -skipTrash ${IN_DIR} || true
-fi
-if hadoop fs -stat ${OUT_DIR} &> /dev/null; then
-    hadoop fs -rm -r -skipTrash ${OUT_DIR} || true
-fi
-EOF
+# Env required by test components
+smoke_env = {
+    'HADOOP_CONF_DIR': '/etc/hadoop/conf',
+}
 
-START=`date +%s`
-# NB: Escaped vars in the block below (e.g., \${HADOOP_MAPRED_HOME}) come from
-# the environment while non-escaped vars (e.g., ${IN_DIR}) are parameterized
-# from this outer scope
-su hdfs << EOF
-. /etc/default/hadoop
-echo 'generating data'
-hadoop jar \${HADOOP_MAPRED_HOME}/hadoop-mapreduce-examples-*.jar teragen ${SIZE} ${IN_DIR} &>/dev/null
-echo 'sorting data'
-hadoop jar \${HADOOP_MAPRED_HOME}/hadoop-mapreduce-examples-*.jar terasort ${OPTIONS} ${IN_DIR} ${OUT_DIR} &> ${RESULT_LOG}
-EOF
-STOP=`date +%s`
-
-if ! grep -q 'Bytes Written=1000000' ${RESULT_LOG}; then
-    action-fail 'smoke-test failed'
-    action-set log="$(cat ${RESULT_LOG})"
-fi
-DURATION=`expr $STOP - $START`
+bigtop = Bigtop()
+result = bigtop.run_smoke_tests(smoke_components, smoke_env)
+if result == 'success':
+    hookenv.action_set({'outcome': result})
+else:
+    fail('{} smoke tests failed'.format(smoke_components), result)

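The rewritten smoke-test action above delegates the actual test run to `Bigtop.run_smoke_tests()` and only translates its return value into an action outcome. That branch logic reduces to a small pure function; the sketch below uses a hypothetical `action_result` name for illustration (the real script calls `hookenv.action_set` and `action_fail` directly):

```python
def action_result(result, components):
    """Translate a Bigtop smoke-test result string into the action's
    status and payload, mirroring the script's success/failure branches."""
    if result == 'success':
        return 'completed', {'outcome': result}
    # Any other value is treated as failure output worth surfacing.
    msg = '{} smoke tests failed'.format(components)
    return 'failed', {'output': result, 'message': msg}

status, payload = action_result('success', ['yarn'])
```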
http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml
index ad0b569..c2e3420 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/layer.yaml
@@ -1,4 +1,4 @@
-repo: g...@github.com:juju-solutions/layer-hadoop-resourcemanager.git
+repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager
 includes:
   - 'layer:apache-bigtop-base'
   - 'interface:dfs'

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml
index 82b82cd..695d5bf 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/metadata.yaml
@@ -1,12 +1,12 @@
 name: hadoop-resourcemanager
-summary: YARN master (ResourceManager) for Apache Bigtop platform
+summary: YARN master (ResourceManager) from Apache Bigtop
 maintainer: Juju Big Data <bigd...@lists.ubuntu.com>
 description: >
   Hadoop is a software platform that lets one easily write and
   run applications that process vast amounts of data.
 
-  This charm manages the YARN master node (ResourceManager).
-tags: ["applications", "bigdata", "bigtop", "hadoop", "apache"]
+  This charm provides the YARN master node (ResourceManager).
+tags: []
 provides:
   resourcemanager:
     interface: mapred

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py
index afca26b..3f3e9ae 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py
@@ -15,7 +15,9 @@
 # limitations under the License.
 
 from charms.reactive import is_state, remove_state, set_state, when, when_not
-from charms.layer.apache_bigtop_base import Bigtop, get_layer_opts, get_fqdn
+from charms.layer.apache_bigtop_base import (
+    Bigtop, get_hadoop_version, get_layer_opts, get_fqdn
+)
 from charmhelpers.core import hookenv, host
 from jujubigdata import utils
 
@@ -61,11 +63,32 @@ def install_resourcemanager(namenode):
     """
     if namenode.namenodes():
         hookenv.status_set('maintenance', 'installing resourcemanager')
+        # Hosts
         nn_host = namenode.namenodes()[0]
         rm_host = get_fqdn()
+
+        # Ports
+        rm_ipc = get_layer_opts().port('resourcemanager')
+        rm_http = get_layer_opts().port('rm_webapp_http')
+        jh_ipc = get_layer_opts().port('jobhistory')
+        jh_http = get_layer_opts().port('jh_webapp_http')
+
         bigtop = Bigtop()
-        hosts = {'namenode': nn_host, 'resourcemanager': rm_host}
-        bigtop.render_site_yaml(hosts=hosts, roles='resourcemanager')
+        bigtop.render_site_yaml(
+            hosts={
+                'namenode': nn_host,
+                'resourcemanager': rm_host,
+            },
+            roles=[
+                'resourcemanager',
+            ],
+            overrides={
+                'hadoop::common_yarn::hadoop_rm_port': rm_ipc,
+                'hadoop::common_yarn::hadoop_rm_webapp_port': rm_http,
+                'hadoop::common_mapred_app::mapreduce_jobhistory_port': jh_ipc,
+                'hadoop::common_mapred_app::mapreduce_jobhistory_webapp_port': jh_http,
+            }
+        )
         bigtop.trigger_puppet()
 
         # /etc/hosts entries from the KV are not currently used for bigtop,
@@ -104,7 +127,8 @@ def start_resourcemanager(namenode):
     for port in get_layer_opts().exposed_ports('resourcemanager'):
         hookenv.open_port(port)
     set_state('apache-bigtop-resourcemanager.started')
-    hookenv.status_set('active', 'ready')
+    hookenv.application_version_set(get_hadoop_version())
+    hookenv.status_set('maintenance', 'resourcemanager started')
 
 
 ###############################################################################

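The expanded `render_site_yaml` call above wires charm-level port options into Bigtop's puppet hiera keys. A minimal sketch of that mapping, with `port_of` standing in for `get_layer_opts().port` and the example port numbers assumed (typical Hadoop defaults, for illustration only):

```python
def yarn_overrides(port_of):
    """Build the hiera overrides passed to render_site_yaml, keyed by
    the puppet class parameters Bigtop uses for YARN and the job
    history server."""
    return {
        'hadoop::common_yarn::hadoop_rm_port': port_of('resourcemanager'),
        'hadoop::common_yarn::hadoop_rm_webapp_port': port_of('rm_webapp_http'),
        'hadoop::common_mapred_app::mapreduce_jobhistory_port': port_of('jobhistory'),
        'hadoop::common_mapred_app::mapreduce_jobhistory_webapp_port': port_of('jh_webapp_http'),
    }

# Assumed example ports; the real values come from the layer options.
ports = {'resourcemanager': 8032, 'rm_webapp_http': 8088,
         'jobhistory': 10020, 'jh_webapp_http': 19888}
overrides = yarn_overrides(ports.get)
```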
http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py
index 65dbbbb..3b69454 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/tests/01-basic-deployment.py
@@ -28,7 +28,7 @@ class TestDeploy(unittest.TestCase):
     """
 
     def test_deploy(self):
-        self.d = amulet.Deployment(series='trusty')
+        self.d = amulet.Deployment(series='xenial')
         self.d.add('resourcemanager', 'hadoop-resourcemanager')
         self.d.setup(timeout=900)
         self.d.sentry.wait(timeout=1800)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
index 2580072..4bf240d 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/README.md
@@ -14,103 +14,116 @@
   See the License for the specific language governing permissions and
   limitations under the License.
 -->
-## Overview
+# Overview
 
 The Apache Hadoop software library is a framework that allows for the
 distributed processing of large data sets across clusters of computers
 using a simple programming model.
 
 This charm deploys a combined slave node running the NodeManager
-and DataNode components of the Apache Bigtop platform
+and DataNode components of the [Apache Bigtop][] platform
 to provide YARN compute and HDFS storage resources.
 
+[Apache Bigtop]: http://bigtop.apache.org/
 
-## Usage
 
-This charm is intended to be deployed via one of the
-[apache bigtop bundles](https://jujucharms.com/u/bigdata-dev/#bundles).
-For example:
+# Deploying
 
-    juju deploy hadoop-processing
+A working Juju installation is assumed to be present. If Juju is not yet set
+up, please follow the [getting-started][] instructions prior to deploying this
+charm.
 
-> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
-deploy the bundle.
+This charm is intended to be deployed via one of the [apache bigtop bundles][].
+For example:
 
-This will deploy the Apache Bigtop platform with a workload node
-preconfigured to work with the cluster.
+    juju deploy hadoop-processing
 
-You can also manually load and run map-reduce jobs via the plugin charm
-included in the bundles linked above:
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, use [juju-quickstart][] with the following syntax: `juju quickstart
+hadoop-processing`.
 
-    juju scp my-job.jar plugin/0:
-    juju ssh plugin/0
-    hadoop jar my-job.jar
+This will deploy an Apache Bigtop cluster with 3 units of this charm acting as
+the combined DataNode/NodeManager application. More information about this
+deployment can be found in the [bundle readme](https://jujucharms.com/hadoop-processing/).
 
+## Network-Restricted Environments
+Charms can be deployed in environments with limited network access. To deploy
+in this environment, configure a Juju model with appropriate proxy and/or
+mirror options. See [Configuring Models][] for more information.
 
-[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+[getting-started]: https://jujucharms.com/docs/stable/getting-started
+[apache bigtop bundles]: https://jujucharms.com/u/bigdata-charmers/#bundles
+[juju-quickstart]: https://launchpad.net/juju-quickstart
+[Configuring Models]: https://jujucharms.com/docs/stable/models-config
 
 
-## Status and Smoke Test
+# Verifying
 
+## Status
 Apache Bigtop charms provide extended status reporting to indicate when they
 are ready:
 
-    juju status --format=tabular
+    juju status
 
 This is particularly useful when combined with `watch` to track the on-going
 progress of the deployment:
 
-    watch -n 0.5 juju status --format=tabular
+    watch -n 2 juju status
 
-The message for each unit will provide information about that unit's state.
-Once they all indicate that they are ready, you can perform a "smoke test"
-to verify HDFS or YARN services are working as expected. Trigger the
-`smoke-test` action by:
+The message column will provide information about a given unit's state.
+This charm is ready for use once the status message indicates that it is
+ready as a datanode/nodemanager.
 
-    juju action do namenode/0 smoke-test
-    juju action do resourcemanager/0 smoke-test
+## Smoke Test
+This charm provides a `smoke-test` action that can be used to verify the
+application is functioning as expected. This action executes the 'hdfs'
+and 'mapreduce' smoke tests provided by Apache Bigtop and may take up to
+30 minutes to complete. Run the action as follows:
 
-After a few seconds or so, you can check the results of the smoke test:
+    juju run-action slave/0 smoke-test
 
-    juju action status
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action do slave/0 smoke-test`.
 
-You will see `status: completed` if the smoke test was successful, or
-`status: failed` if it was not.  You can get more information on why it failed
-via:
+Watch the progress of the smoke test actions with:
 
-    juju action fetch <action-id>
+    watch -n 2 juju show-action-status
 
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action status`.
 
-## Scaling
+Eventually, the action should settle to `status: completed`.  If it
+reports `status: failed`, the application is not working as expected. Get
+more information about a specific smoke test with:
 
-The slave node is the "workhorse" of the Hadoop environment. To scale your
-cluster performance and storage capabilities, you can simply add more slave
-units.  For example, to add three more units:
+    juju show-action-output <action-id>
 
-    juju add-unit slave -n 3
+> **Note**: The above assumes Juju 2.0 or greater. If using an earlier version
+of Juju, the syntax is `juju action fetch <action-id>`.
 
 
-## Deploying in Network-Restricted Environments
+# Scaling
 
-Charms can be deployed in environments with limited network access. To deploy
-in this environment, you will need a local mirror to serve required packages.
+To scale the cluster compute and storage capabilities, simply add more
+slave units. To add one unit:
 
+    juju add-unit slave
 
-### Mirroring Packages
+Multiple units may be added at once.  For example, add four more slave units:
 
-You can setup a local mirror for apt packages using squid-deb-proxy.
-For instructions on configuring juju to use this, see the
-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
+    juju add-unit -n4 slave
 
 
-## Contact Information
+# Contact Information
 
 - <bigd...@lists.ubuntu.com>
 
 
-## Hadoop
+# Resources
 
 - [Apache Bigtop](http://bigtop.apache.org/) home page
 - [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
 - [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
-- [Apache Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml
new file mode 100644
index 0000000..7fbb302
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions.yaml
@@ -0,0 +1,3 @@
+smoke-test:
+    description: |
+      Run an Apache Bigtop smoke test. Requires 3 slave units.

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test
new file mode 100755
index 0000000..6dec4b5
--- /dev/null
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/actions/smoke-test
@@ -0,0 +1,49 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+sys.path.append('lib')
+
+from charmhelpers.core import hookenv
+from charms.layer.apache_bigtop_base import Bigtop
+from charms.reactive import is_state
+
+
+def fail(msg, output=None):
+    if output:
+        hookenv.action_set({'output': output})
+    hookenv.action_fail(msg)
+    sys.exit()
+
+if not is_state('apache-bigtop-datanode.started'):
+    fail('Charm is not yet ready to run the Bigtop smoke test(s)')
+
+# Bigtop smoke test components
+smoke_components = ['hdfs', 'mapreduce']
+
+# Env required by test components
+smoke_env = {
+    'HADOOP_CONF_DIR': '/etc/hadoop/conf',
+    'HADOOP_MAPRED_HOME': '/usr/lib/hadoop-mapreduce',
+}
+
+bigtop = Bigtop()
+result = bigtop.run_smoke_tests(smoke_components, smoke_env)
+if result == 'success':
+    hookenv.action_set({'outcome': result})
+else:
+    fail('{} smoke tests failed'.format(smoke_components), result)

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml
index 73c66e6..e10b9da 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/layer.yaml
@@ -1,2 +1,4 @@
-repo: g...@github.com:juju-solutions/layer-hadoop-slave.git
-includes: ['layer:hadoop-datanode', 'layer:hadoop-nodemanager']
+repo: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-slave
+includes:
+  - 'layer:hadoop-datanode'
+  - 'layer:hadoop-nodemanager'

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml
index f0b6cce..e5bbc3c 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/metadata.yaml
@@ -1,8 +1,8 @@
 name: hadoop-slave
-summary: Combined slave node (DataNode + NodeManager) for Apache Bigtop.
+summary: Combined slave node (DataNode + NodeManager) from Apache Bigtop.
 description: >
   Hadoop is a software platform that lets one easily write and
   run applications that process vast amounts of data.
 
-  This charm manages both the storage node (DataNode) for HDFS and the
+  This charm provides both the storage node (DataNode) for HDFS and the
   compute node (NodeManager) for Yarn.

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py
index 1e6d38f..8690d62 100644
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/reactive/hadoop_status.py
@@ -15,11 +15,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from charms.reactive import when_any, when_none, is_state
+from charms.reactive import when_any, is_state
 from charmhelpers.core.hookenv import status_set
 
 
-@when_none('namenode.spec.mismatch', 'resourcemanager.spec.mismatch')
 @when_any(
     'bigtop.available',
     'apache-bigtop-datanode.pending',

http://git-wip-us.apache.org/repos/asf/bigtop/blob/312c006e/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py
----------------------------------------------------------------------
diff --git a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py
index e479078..5899c0f 100755
--- a/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py
+++ b/bigtop-packages/src/charm/hadoop/layer-hadoop-slave/tests/01-basic-deployment.py
@@ -28,7 +28,7 @@ class TestDeploy(unittest.TestCase):
     """
 
     def test_deploy(self):
-        self.d = amulet.Deployment(series='trusty')
+        self.d = amulet.Deployment(series='xenial')
         self.d.add('slave', 'hadoop-slave')
         self.d.setup(timeout=900)
         self.d.sentry.wait(timeout=1800)
