Author: smohanty
Date: Thu Oct 9 00:27:56 2014
New Revision: 1630262
URL: http://svn.apache.org/r1630262
Log:
Doc update for label, install-package command
Modified:
incubator/slider/site/trunk/content/docs/manpage.md
incubator/slider/site/trunk/content/docs/slider_specs/resource_specification.md
Modified: incubator/slider/site/trunk/content/docs/manpage.md
URL:
http://svn.apache.org/viewvc/incubator/slider/site/trunk/content/docs/manpage.md?rev=1630262&r1=1630261&r2=1630262&view=diff
==============================================================================
--- incubator/slider/site/trunk/content/docs/manpage.md (original)
+++ incubator/slider/site/trunk/content/docs/manpage.md Thu Oct 9 00:27:56 2014
@@ -25,13 +25,13 @@ slider -YARN-hosted applications
Slider enables applications to be dynamically created on a YARN-managed
datacenter.
The program can be used to create, pause, and shut down the application. It can
also be used to list current active
-and existing but not running "frozen" application instances.
+and existing but not running "stopped" application instances.
## CONCEPTS
1. A *Slider application* is an application packaged to be deployed by Slider.
It consists of one or more distributed *components*.
-1. A *Slider application instance* is a slider application configured to be
deployable on a specific YARN cluster, with a specific configuration. An
instance can be *live* -actually running- or *frozen*. When frozen all its
configuration details and instance-specific data are preserved on HDFS.
+1. A *Slider application instance* is a Slider application configured to be
deployable on a specific YARN cluster, with a specific configuration. An
instance can be *live* -actually running- or *stopped*. When stopped, all its
configuration details and instance-specific data are preserved on HDFS.
1. An *image* is a *tar.gz* file containing binaries used to create the
application.
1. Images are kept in the HDFS filesystem and identified by their
path names; filesystem permissions can be used to share images amongst users.
@@ -49,11 +49,11 @@ and existing but not running "frozen" ap
1. A user can create an application instance.
-1. A live instances can be *frozen*, saving its final state to its application
instance state directory. All running components are shut down.
+1. A live instance can be *stopped*, saving its final state to its
application instance state directory. All running components are shut down.
-1. A frozen instance can be *thawed* -a its components started on or near the
servers where they were previously running.
+1. A stopped instance can be *started*; its components are started on or near
the servers where they were previously running.
-1. A frozen instance can be *destroyed*.
+1. A stopped instance can be *destroyed*.
1. Running instances can be listed.
@@ -101,8 +101,6 @@ Use the specific filesystem URI as an ar
-
-
<!--- =======================================================================
-->
@@ -252,6 +250,23 @@ Example
If unset, the zookeeper quorum defined in the property
`slider.zookeeper.quorum`
is used
+##### `--queue <queue name>`
+The queue to deploy the application to. By default, YARN will pick the queue.
+
+Example
+
+ --queue applications
+
+#### Examples
+Create an application by providing `template` and `resources`.
+
+ create hbase1 --template /usr/work/hbase/appConfig.json --resources
/usr/work/hbase/resources.json
+
+Create an application by providing `template`, `resources`, and `queue`.
+
+ create hbase1 --template /usr/work/hbase/appConfig.json --resources
/usr/work/hbase/resources.json --queue default
+
+
### `destroy <name>`
Destroy a (stopped) application instance.
@@ -316,25 +331,25 @@ Example
slider flex instance1 --component worker 8 --filesystem hdfs://host:port
slider flex instance1 --component master 2 --filesystem hdfs://host:port
-
-### `freeze <name> [--force] [--wait time] [--message text]`
-(**freeze** has been renamed to **stop** in develop branch)
-freeze the application instance. The running application is stopped. Its
settings are retained in HDFS.
+### `install-package --name <name of the package> --package <package file>
[--replacepkg]`
+Install the application package to the default package location for the user
under ~/.slider/package/<name>. This is the location referred to by the
appConfig.json file provided in the --template parameter in the create command.
-The `--wait` argument can specify a time in seconds to wait for the
application instance to be frozen.
+##### `--name <name of the package>`
+Name of the package. It may be the same as the name provided in the
metainfo.xml. Ensure that the same value is used in the default application
package location specified in the default appConfig.json file.
-The `--force` flag causes YARN asked directly to terminate the application
instance.
-The `--message` argument supplies an optional text message to be used in
-the request: this will appear in the application's diagnostics in the YARN RM
UI.
+##### `--package <package file>`
+Location of the package on local disk.
-If an unknown (or already frozen) application instance is named, no error is
returned.
+##### `--replacepkg`
+Optional. Whether to overwrite an already installed package.
-Examples
+
+Examples
- slider freeze instance1 --wait 30
- slider freeze instance2 --force --message "maintenance session"
+ slider install-package --name HBASE --package
/usr/work/package/hbase/slider-hbase-app-package-0.98.4-hadoop2.zip
+ slider install-package --name HBASE --package
/usr/work/package/hbase/slider-hbase-app-package-0.98.4-hadoop2.zip --replacepkg
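+
+For reference, appConfig.json must then point at this installed location. A
+minimal sketch, assuming the template references the package through an
+`application.def` entry in its `global` section (the property name and the
+path shown are assumptions here):
+
+    "global": {
+        "application.def": ".slider/package/HBASE/slider-hbase-app-package-0.98.4-hadoop2.zip"
+    }
+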
### `list [name] [--live] [--history] `
@@ -408,7 +423,7 @@ List the configurations exported by of a
-#### `slider registry --getconf <configuration> [--format
(xml|json|properties)] [--dest <path>] [--internal] ` get the configuration
+#### `slider registry --getconf <configuration> [--format
(xml|json|properties)] [--dest <path>] [--internal] `
Get a named configuration in a chosen format. Default: XML
@@ -416,6 +431,20 @@ Get a named configuration in a chosen fo
`--format (xml|json|properties)` defines the output format
+### `start <name> [--wait time]`
+(**start** used to be **thaw** in earlier releases)
+
+Resume a stopped application instance, recreating it from its previously saved
state. This will include a best-effort attempt to create the same number of
nodes as before, though their locations may be different.
+
+Examples:
+
+ slider start instance2
+ slider start instance1 --wait 60
+
+
+If the application instance is already running, this command will not affect
it.
+
+
### `status <name> [--out <filename>]`
Get the status of the named application instance in JSON format. A filename
can be used to
@@ -428,19 +457,23 @@ Examples:
slider status instance2 --manager host:port --out status.json
+### `stop <name> [--force] [--wait time] [--message text]`
+(**stop** used to be **freeze** in earlier releases)
-### `thaw <name> [--wait time`]
-(**thaw** has been renamed to **start** in develop branch)
+Stop the application instance. The running application is stopped; its
settings are retained in HDFS.
-Resume a frozen application instance, recreating it from its previously saved
state. This will include a best-effort attempt to create the same number of
nodes as before, though their locations may be different.
+The `--wait` argument can specify a time in seconds to wait for the
application instance to be stopped.
-Examples:
+The `--force` flag causes YARN to be asked directly to terminate the
application instance.
+The `--message` argument supplies an optional text message to be used in
+the request: this will appear in the application's diagnostics in the YARN RM
UI.
- slider thaw instance2
- slider thaw instance1 --wait 60
+If an unknown (or already stopped) application instance is named, no error is
returned.
+Examples
-If the application instance is already running, this command will not affect
it.
+ slider stop instance1 --wait 30
+ slider stop instance2 --force --message "maintenance session"
### `version`
Modified:
incubator/slider/site/trunk/content/docs/slider_specs/resource_specification.md
URL:
http://svn.apache.org/viewvc/incubator/slider/site/trunk/content/docs/slider_specs/resource_specification.md?rev=1630262&r1=1630261&r2=1630262&view=diff
==============================================================================
---
incubator/slider/site/trunk/content/docs/slider_specs/resource_specification.md
(original)
+++
incubator/slider/site/trunk/content/docs/slider_specs/resource_specification.md
Thu Oct 9 00:27:56 2014
@@ -16,9 +16,15 @@
-->
# Apache Slider Resource Specification
+
+* [Container Failure Policy](#failurepolicy)
+* [Using Labels](#labels)
+* [Using Log Aggregation](#logagg)
+
+
Resource specification is an input to Slider to specify the YARN resource
needs for each component type that belongs to the application.
-An example resource requirement for an application that has two components
"master" and "worker" is as follows. Slider will automatically add the
requirements for the AppMaster for the application. This compoent is named
"slider-appmaster".
+An example resource requirement for an application that has two components
"master" and "worker" is as follows. Slider will automatically add the
requirements for the AppMaster for the application. This component is named
"slider-appmaster".
Some parameters that can be specified for a component instance include:
@@ -53,7 +59,7 @@ Sample:
}
}
-## Container Failure Policy
+## <a name="failurepolicy"></a>Container Failure Policy
YARN containers hosting component instances may fail. This can happen because
of
@@ -164,7 +170,7 @@ are requested, the failure threshold per
There are ten worker components requested; the failure threshold for these
components is overridden to be fifteen. This allows all workers to fail and
-the cluster to recover âbut only anothe five failures would be tolerated
+the cluster to recover, but only another five failures would be tolerated
for the remaining hour.
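+
+As a sketch, such a worker override might appear in resources.json along
+these lines (the `yarn.container.failure.threshold` key and the numbers
+shown are assumptions for illustration):
+
+    "worker": {
+        "yarn.component.instances": "10",
+        "yarn.container.failure.threshold": "15"
+    }
+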
These failure thresholds are all heuristics. When initially configuring an
@@ -173,4 +179,56 @@ which are frequently failing due to conf
In a production application, large failure thresholds and/or shorter windows
ensure that the application is resilient to transient failures of the
underlying
-YARN cluster and hardware.
\ No newline at end of file
+YARN cluster and hardware.
+
+## <a name="labels"></a>Using Labels
+The resources.json file can be used to specify the labels to be used when
allocating containers for the components. The details of the YARN Label feature
can be found at [YARN-796](https://issues.apache.org/jira/browse/YARN-796).
+
+In summary:
+* Nodes can be assigned one or more labels
+* Capacity Queues can be defined with access to one or more labels
+* Ensure application components are associated with appropriate label
expressions
+* Create the application using a specific queue
+
+This way, you can guarantee that a certain set of nodes is reserved for an
application or for a component within an application.
+
+The label expression is specified through the property
"yarn.label.expression". When no label expression is specified, it is assumed
that only non-labeled nodes are used when allocating containers for component
instances.
+
+If a label expression is specified for slider-appmaster then it also becomes
the default label expression for all components. To take advantage of the
default label expression, leave out the property (see HBASE_REGIONSERVER in
the example). A label expression with an empty string
("yarn.label.expression":"") means nodes without labels.
+
+### Example
+
+Here is a `resources.json` file for an HBase cluster which uses labels. The
label for the application instance is "hbase1" and the label expression for
the HBASE_MASTER component is "hbase1_master". HBASE_REGIONSERVER instances
will automatically use label "hbase1". Alternatively, if you specify
("yarn.label.expression":"") for HBASE_REGIONSERVER then the containers will
only be allocated on nodes with no labels.
+
+    {
+      "schema": "http://example.org/specification/v2.0.0",
+      "metadata": {
+      },
+      "global": {
+      },
+      "components": {
+        "HBASE_MASTER": {
+          "yarn.role.priority": "1",
+          "yarn.component.instances": "1",
+          "yarn.label.expression":"hbase1_master"
+        },
+        "HBASE_REGIONSERVER": {
+          "yarn.role.priority": "2",
+          "yarn.component.instances": "1"
+        },
+        "slider-appmaster": {
+          "yarn.label.expression":"hbase1"
+        }
+      }
+    }
+
+Specifically, for the above example you will need to (a sketch of these steps
follows the list):
+* Create two labels, `hbase1` and `hbase1_master` (use yarn rmadmin commands)
+* Assign the labels to nodes (use yarn rmadmin commands)
+* Perform queue refresh (`yarn rmadmin -refreshQueues`)
+* Create a queue by defining it in the capacity scheduler config
+* Allow the queue access to the labels and ensure that appropriate min/max
capacity is assigned
+* Perform queue refresh (`yarn rmadmin -refreshQueues`)
+* Create the Slider application against the above queue using the `--queue`
parameter
+
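+A minimal sketch of these steps (host names and the queue name `hbasequeue`
+are assumptions for illustration):
+
+    # define the labels and attach them to nodes (hypothetical hosts)
+    yarn rmadmin -addToClusterNodeLabels hbase1,hbase1_master
+    yarn rmadmin -replaceLabelsOnNode "node1.example.com=hbase1 node2.example.com=hbase1_master"
+
+    # after granting the queue access to the labels in capacity-scheduler.xml
+    yarn rmadmin -refreshQueues
+
+    # create the application against that queue
+    slider create hbase1 --template appConfig.json --resources resources.json --queue hbasequeue
+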
+## <a name="logagg"></a>Using Log Aggregation
\ No newline at end of file