Author: buildbot
Date: Thu Nov 19 18:42:30 2015
New Revision: 972982
Log:
Staging update by buildbot for slider
Modified:
websites/staging/slider/trunk/content/ (props changed)
websites/staging/slider/trunk/content/developing/functional_tests.html
websites/staging/slider/trunk/content/developing/releasing.html
websites/staging/slider/trunk/content/developing/testing.html
websites/staging/slider/trunk/content/docs/api/slider_REST_api_v2.html
websites/staging/slider/trunk/content/docs/getting_started.html
websites/staging/slider/trunk/content/docs/security.html
websites/staging/slider/trunk/content/docs/troubleshooting.html
Propchange: websites/staging/slider/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Thu Nov 19 18:42:30 2015
@@ -1 +1 @@
-1715236
+1715237
Modified: websites/staging/slider/trunk/content/developing/functional_tests.html
==============================================================================
--- websites/staging/slider/trunk/content/developing/functional_tests.html
(original)
+++ websites/staging/slider/trunk/content/developing/functional_tests.html Thu
Nov 19 18:42:30 2015
@@ -257,7 +257,7 @@ invoked.</p>
<li>The standard <code>-site.xml</code> files are loaded by the JUnit test
runner, to bond
the test classes to the YARN cluster.</li>
<li>The property is used to set the environment variable
<code>HADOOP_CONF_DIR</code>
-before the <code>bin/slider</code> or bin\slider.py` script is executed.</li>
+before the <code>bin/slider</code> or <code>bin\slider.py</code> script is
executed.</li>
</ol>
<p><strong>Note 1:</strong> a path can be set relative to
${SLIDER_CONF_DIR}</p>
<div class="codehilite"><pre> <span class="nt"><property></span>
Modified: websites/staging/slider/trunk/content/developing/releasing.html
==============================================================================
--- websites/staging/slider/trunk/content/developing/releasing.html (original)
+++ websites/staging/slider/trunk/content/developing/releasing.html Thu Nov 19
18:42:30 2015
@@ -202,13 +202,14 @@ h2:hover > .headerlink, h3:hover > .head
<p>As well as everything needed to build slider, there are some extra
requirements
for releasing:</p>
<ol>
-<li>Shell: (Currently: Bash; some <code>fish</code> examples too)</li>
+<li>Shell: <code>bash</code></li>
<li><a href="http://danielkummer.github.io/git-flow-cheatsheet/">git
flow</a></li>
<li>OS/X and windows: <a href="http://www.sourcetreeapp.com/">Atlassian
SourceTree</a>.
This can perform the git flow operations, as well as show the state of your
git graph.</li>
</ol>
<h3 id="before-you-begin">Before you begin<a class="headerlink"
href="#before-you-begin" title="Permanent link">¶</a></h3>
+<p>Read the <a
href="http://incubator.apache.org/guides/releasemanagement.html">ASF incubator
release manual</a></p>
<p>Check out the latest version of the branch to be released, and
run the tests. This should be done on a checked out
version of the code that is not the one you are developing on
@@ -222,8 +223,8 @@ according to the instructions in <a href
create HBase and Accumulo clusters in the YARN cluster.</p>
<p><em>Make sure that the integration tests are passing (and not being
skipped) before
starting to make a release</em></p>
-<p><em>3.</em> Make sure there are no uncommitted files in your local repo.
</p>
-<p><em>4.</em> If you are not building against a stable Hadoop release</p>
+<p>Make sure there are no uncommitted files in your local repo. </p>
+<p>If you are not building against a stable Hadoop release</p>
<ol>
<li>Check out the Hadoop branch you intend to build and test against -and
include in
the redistributable artifacts.</li>
@@ -532,7 +533,7 @@ Clone this project and read its instruct
<h2 id="close-the-release-in-nexus">Close the release in Nexus<a
class="headerlink" href="#close-the-release-in-nexus" title="Permanent
link">¶</a></h2>
<ol>
<li>log in to <a
href="https://repository.apache.org/index.html">https://repository.apache.org/index.html</a>
-with your ASF username & LDAP password</li>
+with your ASF username and LDAP password</li>
<li>go to <a
href="https://repository.apache.org/index.html#stagingRepositories">Staging
Repositories</a></li>
<li>find the latest slider repository in the list</li>
<li>select it; </li>
Modified: websites/staging/slider/trunk/content/developing/testing.html
==============================================================================
--- websites/staging/slider/trunk/content/developing/testing.html (original)
+++ websites/staging/slider/trunk/content/developing/testing.html Thu Nov 19
18:42:30 2015
@@ -208,7 +208,7 @@ h2:hover > .headerlink, h3:hover > .head
<p>Slider core contains a suite of tests that are designed to run on the local
machine,
using Hadoop's <code>MiniDFSCluster</code> and <code>MiniYARNCluster</code>
classes to create small,
one-node test clusters. All the YARN/HDFS code runs in the JUnit process; the
-AM and spawned processeses run independently.</p>
+AM and spawned processes run independently.</p>
</div>
<div id="footer">
Modified: websites/staging/slider/trunk/content/docs/api/slider_REST_api_v2.html
==============================================================================
--- websites/staging/slider/trunk/content/docs/api/slider_REST_api_v2.html
(original)
+++ websites/staging/slider/trunk/content/docs/api/slider_REST_api_v2.html Thu
Nov 19 18:42:30 2015
@@ -563,12 +563,12 @@ with a READ-only JSON view of the cluste
<p>The live view of what is going on in the application under
<code>/application/model</code>.</p>
</li>
</ol>
-<h2 id="application">/application<a class="headerlink" href="#application"
title="Permanent link">¶</a></h2>
+<h2 id="application"><code>/application</code><a class="headerlink"
href="#application" title="Permanent link">¶</a></h2>
<h3 id="all-application-resources">All Application resources<a
class="headerlink" href="#all-application-resources" title="Permanent
link">¶</a></h3>
<p>All entries will be under the service path <code>/application</code>, which
itself is under the <code>/ws/v1/</code> path of the Slider web interface.</p>
-<h2 id="applicationmodel">/application/model/ :<a class="headerlink"
href="#applicationmodel" title="Permanent link">¶</a></h2>
-<h3 id="get-and-for-some-urls-put-view-of-the-specification">GET/ and, for
some URLs, PUT view of the specification<a class="headerlink"
href="#get-and-for-some-urls-put-view-of-the-specification" title="Permanent
link">¶</a></h3>
-<h3 id="applicationmodeldesired">/application/model/desired/<a
class="headerlink" href="#applicationmodeldesired" title="Permanent
link">¶</a></h3>
+<h2 id="applicationmodel"><code>/application/model/</code> :<a
class="headerlink" href="#applicationmodel" title="Permanent
link">¶</a></h2>
+<h3 id="get-and-for-some-urls-put-view-of-the-specification">GET and, for
some URLs, PUT view of the specification<a class="headerlink"
href="#get-and-for-some-urls-put-view-of-the-specification" title="Permanent
link">¶</a></h3>
+<h3 id="applicationmodeldesired"><code>/application/model/desired/</code><a
class="headerlink" href="#applicationmodeldesired" title="Permanent
link">¶</a></h3>
<p>This is where the specification of the application (resources and
configuration) can be read and written. </p>
<ol>
<li>
@@ -578,9 +578,9 @@ with a READ-only JSON view of the cluste
<p>Write accesses to <code>configuration</code> will only take effect on a
cluster upgrade or restart</p>
</li>
</ol>
-<h3 id="applicationmodelresolved">/application/model/resolved/<a
class="headerlink" href="#applicationmodelresolved" title="Permanent
link">¶</a></h3>
+<h3 id="applicationmodelresolved"><code>/application/model/resolved/</code><a
class="headerlink" href="#applicationmodelresolved" title="Permanent
link">¶</a></h3>
<p>The resolved specification, the one where we implement the inheritance,
and, when we eventually do x-refs, all non-LAZY references. This lets the
caller see the final configuration model.</p>
-<h3 id="applicationmodelinternal">/application/model/internal/<a
class="headerlink" href="#applicationmodelinternal" title="Permanent
link">¶</a></h3>
+<h3 id="applicationmodelinternal"><code>/application/model/internal/</code><a
class="headerlink" href="#applicationmodelinternal" title="Permanent
link">¶</a></h3>
<p>Read-only view of <code>internal.json</code>. Exported for diagnostics and
completeness.</p>
<h2 id="applicationlive">/application/live/ :<a class="headerlink"
href="#applicationlive" title="Permanent link">¶</a></h2>
<h3 id="get-and-delete-view-of-the-live-application">GET and DELETE view of
the live application<a class="headerlink"
href="#get-and-delete-view-of-the-live-application" title="Permanent
link">¶</a></h3>
@@ -612,7 +612,7 @@ DELETE node_id will decommission all con
<p>"system" state: AM state, outstanding requests, upgrade in progress</p>
</li>
</ol>
-<h2 id="applicationactions">/application/actions<a class="headerlink"
href="#applicationactions" title="Permanent link">¶</a></h2>
+<h2 id="applicationactions"><code>/application/actions</code><a
class="headerlink" href="#applicationactions" title="Permanent
link">¶</a></h2>
<h3 id="post-state-changing-operations">POST state changing operations<a
class="headerlink" href="#post-state-changing-operations" title="Permanent
link">¶</a></h3>
<p>These are for operations which are hard to represent in a simple REST view
within the AM itself.</p>
<h1 id="proposed-state-query-operations">Proposed State Query Operations<a
class="headerlink" href="#proposed-state-query-operations" title="Permanent
link">¶</a></h1>
@@ -631,8 +631,7 @@ DELETE node_id will decommission all con
<td>desired/resources.json extended with statistics of the actual pending,
and failed resource allocations.</td>
</tr>
<tr>
- <td>live/containers
-</td>
+ <td>live/containers</td>
<td>sorted list of container IDs</td>
</tr>
<tr>
@@ -725,44 +724,40 @@ before an updated value is visible.</p>
</li>
</ol>
<h1 id="non-normative-example-data-structures">Non-normative Example Data
structures<a class="headerlink" href="#non-normative-example-data-structures"
title="Permanent link">¶</a></h1>
-<h2 id="applicationliveresources">application/live/resources<a
class="headerlink" href="#applicationliveresources" title="Permanent
link">¶</a></h2>
-<p>The contents of application/live/resources on an application which only has
an application master deployed. The entries in italic are the statistics
related to the live state; the remainder the original values.</p>
-<div class="codehilite"><pre><span class="p">{</span>
- "<span class="n">schema</span>" <span class="p">:</span>
"<span class="n">http</span><span class="p">:</span><span
class="o">//</span><span class="n">example</span><span class="p">.</span><span
class="n">org</span><span class="o">/</span><span
class="n">specification</span><span class="o">/</span><span
class="n">v2</span><span class="p">.</span>0<span
class="p">.</span>0"<span class="p">,</span>
- "<span class="n">metadata</span>" <span class="p">:</span> <span
class="p">{</span> <span class="p">},</span>
- "<span class="k">global</span>" <span class="p">:</span> <span
class="p">{</span> <span class="p">},</span>
- "<span class="n">credentials</span>" <span class="p">:</span>
<span class="p">{</span> <span class="p">},</span>
- "<span class="n">components</span>" <span class="p">:</span> <span
class="p">{</span>
- "<span class="n">slider</span><span class="o">-</span><span
class="n">appmaster</span>" <span class="p">:</span> <span
class="p">{</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">memory</span>" <span class="p">:</span> "1024"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">vcores</span>" <span class="p">:</span> "1"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span>" <span class="p">:</span> "1"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span><span class="p">.</span><span
class="n">requesting</span>" <span class="p">:</span> "0"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span><span class="p">.</span><span
class="n">actual</span>" <span class="p">:</span> "1"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span><span class="p">.</span><span
class="n">releasing</span>" <span class="p">:</span> "0"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span><span class="p">.</span><span
class="n">failed</span>" <span class="p">:</span> "0"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span><span class="p">.</span><span
class="n">completed</span>" <span class="p">:</span> "0"<span
class="p">,</span>
- "<span class="n">yarn</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">instances</span><span class="p">.</span><span
class="n">started</span>" <span class="p">:</span> "1"
-
- <span class="p">}</span>
-
- <span class="p">}</span>
-
-<span class="p">}</span>
+<h2 id="applicationliveresources"><code>application/live/resources</code><a
class="headerlink" href="#applicationliveresources" title="Permanent
link">¶</a></h2>
+<p>The contents of <code>application/live/resources</code> on an application
which only has an application master deployed. The instance statistics
(<code>requesting</code>, <code>actual</code>, <code>releasing</code>,
<code>failed</code>, <code>completed</code>, <code>started</code>) describe the
live state; the remainder are the original values.</p>
+<div class="codehilite"><pre>{
+  "schema" : "http://example.org/specification/v2.0.0",
+  "metadata" : { },
+  "global" : { },
+  "credentials" : { },
+  "components" : {
+    "slider-appmaster" : {
+      "yarn.memory" : "1024",
+      "yarn.vcores" : "1",
+      "yarn.component.instances" : "1",
+      "yarn.component.instances.requesting" : "0",
+      "yarn.component.instances.actual" : "1",
+      "yarn.component.instances.releasing" : "0",
+      "yarn.component.instances.failed" : "0",
+      "yarn.component.instances.completed" : "0",
+      "yarn.component.instances.started" : "1"
+    }
+  }
+}
+</pre></div>
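As a non-normative illustration, here is a short Python sketch (a hypothetical helper, not part of Slider) that pulls the desired and actual per-component instance counts out of a resources payload of this shape:

```python
import json

# Sample payload in the shape shown above, with the example's values.
SAMPLE = """{
  "schema" : "http://example.org/specification/v2.0.0",
  "metadata" : { },
  "global" : { },
  "credentials" : { },
  "components" : {
    "slider-appmaster" : {
      "yarn.memory" : "1024",
      "yarn.component.instances" : "1",
      "yarn.component.instances.actual" : "1",
      "yarn.component.instances.requesting" : "0"
    }
  }
}"""

def instance_stats(payload):
    """Map each component to its (desired, actual) instance counts."""
    doc = json.loads(payload)
    return {
        name: (int(props.get("yarn.component.instances", "0")),
               int(props.get("yarn.component.instances.actual", "0")))
        for name, props in doc.get("components", {}).items()
    }

print(instance_stats(SAMPLE))  # {'slider-appmaster': (1, 1)}
```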
<h2 id="liveliveness"><code>live/liveness</code><a class="headerlink"
href="#liveliveness" title="Permanent link">¶</a></h2>
<p>The liveness URL returns a JSON structure on the liveness of the
application as perceived by Slider itself.</p>
<p>See
<code>org.apache.slider.api.types.ApplicationLivenessInformation</code></p>
-<div class="codehilite"><pre><span class="p">{</span>
- "<span class="n">allRequestsSatisfied</span>"<span
class="p">:</span> <span class="n">true</span><span class="p">,</span>
- "<span class="n">requestsOutstanding</span>"<span
class="p">:</span> 0
-<span class="p">}</span>
-</pre></div>
-
-
+<div class="codehilite"><pre>{
+  "allRequestsSatisfied": true,
+  "requestsOutstanding": 0
+}
+</pre></div>
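As a sketch only (a hypothetical helper; the field names follow `ApplicationLivenessInformation` as shown above), a client might interpret the liveness payload like this:

```python
import json

def is_live(liveness_json):
    """True when the AM reports all container requests satisfied."""
    info = json.loads(liveness_json)
    return (bool(info.get("allRequestsSatisfied"))
            and info.get("requestsOutstanding", 0) == 0)

print(is_live('{"allRequestsSatisfied": true, "requestsOutstanding": 0}'))   # True
print(is_live('{"allRequestsSatisfied": false, "requestsOutstanding": 2}'))  # False
```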
<p>Its initial/basic form counts the number of outstanding container
requests.</p>
<p>This could be extended in future with more criteria, such as whether a minimum
number/percentage of desired containers of each component type has been allocated
Modified: websites/staging/slider/trunk/content/docs/getting_started.html
==============================================================================
--- websites/staging/slider/trunk/content/docs/getting_started.html (original)
+++ websites/staging/slider/trunk/content/docs/getting_started.html Thu Nov 19
18:42:30 2015
@@ -236,7 +236,7 @@ h2:hover > .headerlink, h3:hover > .head
<p>Required Services: HDFS, YARN and ZooKeeper</p>
</li>
<li>
-<p>Oracle JDK 1.6 (64-bit)</p>
+<p>Oracle JDK 1.7 (64-bit)</p>
</li>
<li>
<p>Python 2.6</p>
@@ -266,6 +266,18 @@ In <code>yarn-site.xml</code> make the f
</tr>
</table>
+<p>Example</p>
+<div class="codehilite"><pre><span class="nt"><property></span>
+ <span
class="nt"><name></span>yarn.scheduler.minimum-allocation-mb<span
class="nt"></name></span>
+ <span class="nt"><value></span>256<span
class="nt"></value></span>
+<span class="nt"></property></span>
+<span class="nt"><property></span>
+ <span
class="nt"><name></span>yarn.nodemanager.delete.debug-delay-sec<span
class="nt"></name></span>
+ <span class="nt"><value></span>3600<span
class="nt"></value></span>
+<span class="nt"></property></span>
+</pre></div>
+
+
<p>There are other options detailed in the Troubleshooting file available <a
href="/docs/troubleshooting.html">here</a>.</p>
<h2 id="download-slider-packages"><a name="download"></a>Download Slider
Packages<a class="headerlink" href="#download-slider-packages" title="Permanent
link">¶</a></h2>
<p><em>You can build it as described below.</em></p>
Modified: websites/staging/slider/trunk/content/docs/security.html
==============================================================================
--- websites/staging/slider/trunk/content/docs/security.html (original)
+++ websites/staging/slider/trunk/content/docs/security.html Thu Nov 19
18:42:30 2015
@@ -221,7 +221,7 @@ listed at the bottom. </p>
in the clusters <em>MUST</em> have read/write access to these files. This
can be
done with a shortname that matches that of the user, or by requesting
that Slider create a directory with group write permissions -and using LDAP
- to indentify the application principals as members of the same group
+ to identify the application principals as members of the same group
as the user.</li>
</ol>
<h2 id="security-requirements">Security Requirements<a class="headerlink"
href="#security-requirements" title="Permanent link">¶</a></h2>
@@ -254,7 +254,7 @@ listed at the bottom. </p>
<li>Kerberos is running and that HDFS and YARN are running Kerberized.</li>
<li>LDAP cannot be assumed. </li>
<li>Credentials needed for the application can be pushed out into the local
filesystems of
- the of the worker nodes via some external mechanism (e.g. scp), and
protected by
+   the worker nodes via some external mechanism (e.g. <code>scp</code>),
and protected by
the access permissions of the native filesystem. Any user with access to
these
credentials is considered to have been granted such rights.</li>
<li>These credentials can outlive the duration of the application
instances</li>
@@ -267,7 +267,7 @@ kerberos identities.</li>
<li>The user is expected to have their own Kerberos principal, and have used
<code>kinit</code>
or equivalent to authenticate with Kerberos and gain a (time-bounded)
TGT</li>
<li>The user is expected to have principals for every host in the cluster of
the form
- username/hostname@REALM for component aunthentication. The AM
authentication requirements
+ username/hostname@REALM for component authentication. The AM authentication
requirements
can be satisfied with a non-host based principal (username@REALM).</li>
<li>Separate keytabs should be generated for the AM, which contains the AM
login principal, and the service components, which contain all the service
principals. The keytabs can be manually distributed
to all the nodes in the cluster with read access permissions to the user, or
the user may elect to leverage the Slider keytab distribution mechanism.</li>
@@ -276,7 +276,7 @@ kerberos identities.</li>
</ol>
<p>The Slider Client will talk to HDFS and YARN authenticating itself with the
TGT,
talking to the YARN and HDFS principals which it has been configured to
expect.</p>
-<p>This can be done as described in [Client Configuration]
(/docs/client-configuration.html) on the command line as</p>
+<p>This can be done as described in <a
href="/docs/client-configuration.html">Client Configuration</a> on the command
line as</p>
<div class="codehilite"><pre> <span class="o">-</span><span class="n">D</span>
<span class="n">yarn</span><span class="p">.</span><span
class="n">resourcemanager</span><span class="p">.</span><span
class="n">principal</span><span class="p">=</span><span
class="n">yarn</span><span class="o">/</span><span class="n">master</span><span
class="p">@</span><span class="n">LOCAL</span>
<span class="o">-</span><span class="n">D</span> <span
class="n">dfs</span><span class="p">.</span><span
class="n">namenode</span><span class="p">.</span><span
class="n">kerberos</span><span class="p">.</span><span
class="n">principal</span><span class="p">=</span><span
class="n">hdfs</span><span class="o">/</span><span class="n">master</span><span
class="p">@</span><span class="n">LOCAL</span>
</pre></div>
@@ -287,9 +287,9 @@ user <code>r-x</code> for the group and
<p>It will then deploy the AM, which will (somehow? for how long?) retain the
access
rights of the user that created the cluster.</p>
<p>The Application Master will read in the JSON cluster specification file,
and instantiate the
-relevant number of componentss. </p>
+relevant number of components. </p>
<h3 id="the-keytab-distributionaccess-options">The Keytab distribution/access
Options<a class="headerlink" href="#the-keytab-distributionaccess-options"
title="Permanent link">¶</a></h3>
-<p>Rather than relying on delegation token based authentication mechanisms,
the AM leverages keytab files for obtaining the principals to authenticate to
the configured cluster KDC. In order to perform this login the AM requires
access to a keytab file that contains the principal representing the user
identity to be associated with the launched application instance (e.g. in an
HBase installation you may elect to leverage the <code>hbase</code> principal
for this purpose). There are two mechanisms supported for keytab access and/or
distribution:</p>
+<p>Rather than relying on delegation token based authentication mechanisms,
the AM leverages keytab files for obtaining the principals to authenticate to
the configured cluster KDC. In order to perform this login the AM requires
access to a keytab file that contains the principal representing the user
identity to be associated with the launched application instance (e.g. in an
HBase installation you may elect to use the <code>hbase</code> principal for
this purpose). There are two mechanisms supported for keytab access and/or
distribution:</p>
<h4 id="local-keytab-file-access">Local Keytab file access:<a
class="headerlink" href="#local-keytab-file-access" title="Permanent
link">¶</a></h4>
<p>An application deployer may choose to pre-distribute the keytab files
required to the Node Manager (NM) hosts in a Yarn cluster. In that instance the
appConfig.json requires the following properties:</p>
<div class="codehilite"><pre><span class="p">.</span> <span class="p">.</span>
<span class="p">.</span>
@@ -349,7 +349,7 @@ specified relative to the <code>$AGENT_W
<li>The value specified for a <code>slider.keytab.principal.name</code>
property. </li>
</ul>
<h4 id="slider-client-keytab-installation">Slider Client Keytab
installation:<a class="headerlink" href="#slider-client-keytab-installation"
title="Permanent link">¶</a></h4>
-<p>The Slider client can be leveraged to install keytab files individually
into a designated
+<p>The Slider client can be used to install keytab files individually into a
designated
keytab HDFS folder. The format of the command is:</p>
<div class="codehilite"><pre><span class="n">slider</span> <span
class="n">install</span><span class="o">-</span><span class="n">keytab</span>
<span class="o">--</span><span class="n">keytab</span> <span class="o"><</span><span
class="n">path</span> <span class="n">to</span> <span class="n">keytab</span>
<span class="n">on</span> <span class="n">local</span> <span
class="n">file</span> <span class="n">system</span><span class="o">></span>
<span class="o">--</span><span class="n">folder</span> <span class="o"><</span><span
class="n">name</span> <span class="n">of</span> <span class="n">HDFS</span>
<span class="n">folder</span> <span class="n">to</span> <span
class="n">store</span> <span class="n">keytab</span><span class="o">></span>
<span class="p">[</span><span class="o">--</span><span class="n">overwrite</span><span
class="p">]</span>
</pre></div>
@@ -369,17 +369,17 @@ The command can be used to upload keytab
<p>Subsequently, the associated hbase-site configuration properties would
be:</p>
<div class="codehilite"><pre>"global": {
. . .
- "site.hbase-site.hbase.master.kerberos.principal":
"hbase/[email protected]",
- "site.hbase-site.hbase.master.keytab.file": "<span
class="cp">${</span><span class="n">AGENT_WORK_ROOT</span><span
class="cp">}</span>/keytabs/hbase.service.keytab",
- . . .
-}
+ "site.hbase-site.hbase.master.kerberos.principal":
"hbase/[email protected]",
+ "site.hbase-site.hbase.master.keytab.file": "<span
class="cp">${</span><span class="n">AGENT_WORK_ROOT</span><span
class="cp">}</span>/keytabs/hbase.service.keytab",
+ . . .
+ }
"components": {
- "slider-appmaster": {
- "jvm.heapsize": "256M",
- "slider.hdfs.keytab.dir": ".slider/keytabs/HBASE",
- "slider.am.login.keytab.name":
"hbase.headless.keytab"
- `slider.keytab.principal.name` : `hbase"
- }
+ "slider-appmaster": {
+ "jvm.heapsize": "256M",
+ "slider.hdfs.keytab.dir": ".slider/keytabs/HBASE",
+              "slider.am.login.keytab.name": "hbase.headless.keytab",
+              "slider.keytab.principal.name": "hbase"
+ }
}
</pre></div>
@@ -461,7 +461,7 @@ in the component specific configuration
This property is specified in the appConfig file's global section (with the
"site.myapp-site" prefix), and is referenced here to indicate to Slider which
application property provides the store password.</li>
</ul>
<h3 id="specifying-a-keystoretruststore-credential-provider-alias">Specifying
a keystore/truststore Credential Provider alias<a class="headerlink"
href="#specifying-a-keystoretruststore-credential-provider-alias"
title="Permanent link">¶</a></h3>
-<p>Applications that utilize the Credenfial Provider API to retrieve
application passwords can specify the following configuration:</p>
+<p>Applications that utilize the Credential Provider API to retrieve
application passwords can specify the following configuration:</p>
<ul>
<li>Indicate the credential storage path in the <code>credentials</code>
section of the app configuration file:<div class="codehilite"><pre>
"credentials": {
"jceks://hdfs/user/<span class="cp">${</span><span
class="n">USER</span><span class="cp">}</span>/myapp.jceks":
["app_component.keystore.password.alias"]
@@ -473,7 +473,7 @@ This property is specified in the appCon
</ul>
<p>If you specify a list of aliases and are making use of the Slider CLI for
application deployment, you will be prompted to enter a value for the passwords
specified if no password matching a configured alias is found in the credential
store. However, any mechanism available for pre-populating the credential
store may be utilized.</p>
<ul>
-<li>Reference the alias to use for securing the keystore/truststore in the
component's configuraton section:<div class="codehilite"><pre>"<span
class="n">APP_COMPONENT</span>"<span class="p">:</span> <span
class="p">{</span>
+<li>Reference the alias to use for securing the keystore/truststore in the
component's configuration section:<div class="codehilite"><pre>"<span
class="n">APP_COMPONENT</span>"<span class="p">:</span> <span
class="p">{</span>
"<span class="n">slider</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">security</span><span class="p">.</span><span
class="n">stores</span><span class="p">.</span><span
class="n">required</span>"<span class="p">:</span> "<span
class="n">true</span>"<span class="p">,</span>
"<span class="n">slider</span><span class="p">.</span><span
class="n">component</span><span class="p">.</span><span
class="n">keystore</span><span class="p">.</span><span
class="n">credential</span><span class="p">.</span><span
class="n">alias</span><span class="p">.</span><span
class="n">property</span>"<span class="p">:</span> "<span
class="n">app_component</span><span class="p">.</span><span
class="n">keystore</span><span class="p">.</span><span
class="n">password</span><span class="p">.</span><span
class="n">alias</span>"
<span class="p">}</span>
@@ -482,21 +482,25 @@ This property is specified in the appCon
</li>
</ul>
-<p>At runtime, Slider will read the credential mapped to the alias (in this
case, "app_component.keystore.password.alias"), and leverage the password
stored to secure the generated keystore.</p>
+<p>At runtime, Slider will read the credential mapped to the alias (in this
case, <code>"app_component.keystore.password.alias"</code>), and leverage the
password stored to secure the generated keystore.</p>
<h2 id="important-java-cryptography-package">Important: Java Cryptography
Package<a class="headerlink" href="#important-java-cryptography-package"
title="Permanent link">¶</a></h2>
-<p>When trying to talk to a secure, cluster you may see the message:</p>
+<p>When trying to talk to a secure cluster you may see the message:</p>
<div class="codehilite"><pre><span class="n">No</span> <span
class="n">valid</span> <span class="n">credentials</span> <span
class="n">provided</span> <span class="p">(</span><span
class="n">Mechanism</span> <span class="n">level</span><span class="p">:</span>
<span class="n">Illegal</span> <span class="n">key</span> <span
class="nb">size</span><span class="p">)]</span>
</pre></div>
-<p>or
- No valid credentials provided (Mechanism level: Failed to find any
Kerberos tgt)</p>
+<p>or</p>
+<div class="codehilite"><pre><span class="n">No</span> <span
class="n">valid</span> <span class="n">credentials</span> <span
class="n">provided</span> <span class="p">(</span><span
class="n">Mechanism</span> <span class="n">level</span><span class="p">:</span>
<span class="n">Failed</span> <span class="n">to</span> <span
class="nb">find</span> <span class="n">any</span> <span
class="n">Kerberos</span> <span class="n">tgt</span><span class="p">)</span>
+</pre></div>
+
+
<p>This means that the JRE does not have the extended cryptography package
needed to work with the keys that Kerberos needs. This must be downloaded
from Oracle (or other supplier of the JVM) and installed according to
-its accompanying instructions.</p>
+the accompanying instructions.</p>
<h2 id="useful-links">Useful Links<a class="headerlink" href="#useful-links"
title="Permanent link">¶</a></h2>
<ol>
+<li><a
href="https://www.gitbook.com/book/steveloughran/kerberos_and_hadoop/details">Hadoop
and Kerberos: The Madness Beyond the Gate</a></li>
<li><a
href="http://hortonworks.com/wp-content/uploads/2011/10/security-design_withCover-1.pdf">Adding
Security to Apache Hadoop</a></li>
<li><a
href="http://hortonworks.com/blog/the-role-of-delegation-tokens-in-apache-hadoop-security/">The
Role of Delegation Tokens in Apache Hadoop Security</a></li>
<li><a href="http://hbase.apache.org/book/security.html">Chapter 8. Secure
Apache HBase</a></li>
Modified: websites/staging/slider/trunk/content/docs/troubleshooting.html
==============================================================================
--- websites/staging/slider/trunk/content/docs/troubleshooting.html (original)
+++ websites/staging/slider/trunk/content/docs/troubleshooting.html Thu Nov 19
18:42:30 2015
@@ -201,6 +201,41 @@ h2:hover > .headerlink, h3:hover > .head
up a YARN application, with the need to have an HBase configuration
that works</p>
<h2 id="common-problems">Common problems<a class="headerlink"
href="#common-problems" title="Permanent link">¶</a></h2>
+<h3
id="not-all-the-containers-start-but-whenever-you-kill-one-another-one-comes-up">Not
all the containers start -but whenever you kill one, another one comes up.<a
class="headerlink"
href="#not-all-the-containers-start-but-whenever-you-kill-one-another-one-comes-up"
title="Permanent link">¶</a></h3>
+<p>This is often caused by YARN not having enough capacity in the cluster to
start
+up the requested set of containers. The AM has submitted a list of container
+requests to YARN, but only when an existing container is released or killed
+is one of the outstanding requests granted.</p>
+<p>Fix #1: Ask for smaller containers</p>
+<p>Edit the <code>yarn.memory</code> option for roles to be smaller: set it to
<code>64</code> for a smaller
+YARN allocation. <em>This does not affect the actual heap size of the
+application component deployed.</em></p>
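To illustrate Fix #1, a fragment of a <code>resources.json</code> of the kind described above might set a small allocation for one role. This is only a sketch: the component name <code>worker</code> and the instance count are illustrative, not taken from any specific application package.

```json
{
  "components": {
    "worker": {
      "yarn.role.priority": "1",
      "yarn.component.instances": "2",
      "yarn.memory": "64"
    }
  }
}
```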
+<p>Fix #2: Tell YARN to be less strict about memory consumption</p>
+<p>Here are the properties in <code>yarn-site.xml</code> which we set to allow
YARN
+to schedule more role instances than it nominally has room for.</p>
+<div class="codehilite"><pre><span class="nt"><property></span>
+ <span
class="nt"><name></span>yarn.scheduler.minimum-allocation-mb<span
class="nt"></name></span>
+ <span class="nt"><value></span>128<span
class="nt"></value></span>
+<span class="nt"></property></span>
+<span class="nt"><property></span>
+ <span class="nt"><description></span>Whether physical memory limits
will be enforced for
+ containers.
+ <span class="nt"></description></span>
+ <span class="nt"><name></span>yarn.nodemanager.pmem-check-enabled<span
class="nt"></name></span>
+ <span class="nt"><value></span>false<span
class="nt"></value></span>
+<span class="nt"></property></span>
+<span class="c"><!-- we really don't want checking here--></span>
+<span class="nt"><property></span>
+ <span class="nt"><name></span>yarn.nodemanager.vmem-check-enabled<span
class="nt"></name></span>
+ <span class="nt"><value></span>false<span
class="nt"></value></span>
+<span class="nt"></property></span>
+</pre></div>
+
+
+<p><em>Important</em> In a real cluster, the minimum size of an allocation
should be larger, such
+as <code>256</code>, to stop the RM being overloaded. When the PMEM and VMEM
checks are disabled, creating too many instances will make your hosts swap and
performance collapse -we do not recommend this in production.</p>
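The effect of <code>yarn.scheduler.minimum-allocation-mb</code> on a request can be sketched as follows. This is a simplified illustration of how the scheduler normalizes a container memory request -rounding up to a multiple of the minimum and clamping to the maximum- not the actual YARN scheduler code.

```python
import math

def normalize_allocation(requested_mb, min_mb=128, max_mb=8192):
    """Simplified sketch of YARN request normalization: round the request
    up to a multiple of the minimum allocation, then clamp to
    [min_mb, max_mb]. Defaults here mirror the example yarn-site.xml above;
    max_mb is an assumed value."""
    granted = int(math.ceil(requested_mb / min_mb) * min_mb)
    return max(min_mb, min(granted, max_mb))
```

With a 128 MB minimum, a 64 MB request is still granted a 128 MB container, which is why the minimum sets a floor on how small Fix #1 can make the allocations.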
+<h3
id="the-complete-instance-never-comes-up-some-containers-are-outstanding">The
complete instance never comes up -some containers are outstanding<a
class="headerlink"
href="#the-complete-instance-never-comes-up-some-containers-are-outstanding"
title="Permanent link">¶</a></h3>
+<p>This means that there isn't enough space in the cluster to satisfy all of
the outstanding container requests.</p>
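One reason requests stay outstanding even when the aggregate free memory looks sufficient is that placement is per node: each node can only host whole containers. A small illustrative sketch (not Slider or YARN code) of that constraint:

```python
def containers_that_fit(node_free_mb, container_mb):
    """Illustrative only: each node hosts floor(free / container_mb)
    containers, so total free memory can exceed the total requested
    while some requests still cannot be placed."""
    return sum(free // container_mb for free in node_free_mb)

# Three nodes with 2048 MB free each offer 6144 MB in total, yet only
# three 1500 MB containers can be placed; a fourth request stays pending.
```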
<h3
id="slider-instances-not-being-able-to-create-registry-paths-on-secure-clusters">Slider
instances not being able to create registry paths on secure clusters<a
class="headerlink"
href="#slider-instances-not-being-able-to-create-registry-paths-on-secure-clusters"
title="Permanent link">¶</a></h3>
<p>This feature requires the YARN Resource Manager to securely set up
the user's path in the registry</p>
@@ -234,39 +269,6 @@ you may be able to grab the logs from it
Note: the URL depends on yarn.log.server.url being properly configured.</p>
<p>It is from those logs that the cause of the problem can be determined
-because they are the actual
output of the application which Slider is trying to deploy.</p>
-<h3
id="not-all-the-containers-start-but-whenever-you-kill-one-another-one-comes-up">Not
all the containers start -but whenever you kill one, another one comes up.<a
class="headerlink"
href="#not-all-the-containers-start-but-whenever-you-kill-one-another-one-comes-up"
title="Permanent link">¶</a></h3>
-<p>This is often caused by YARN not having enough capacity in the cluster to
start
-up the requested set of containers. The AM has submitted a list of container
-requests to YARN, but only when an existing container is released or killed
-is one of the outstanding requests granted.</p>
-<p>Fix #1: Ask for smaller containers</p>
-<p>edit the <code>yarn.memory</code> option for roles to be smaller: set it 64
for a smaller
-YARN allocation. <em>This does not affect the actual heap size of the
-application component deployed</em></p>
-<p>Fix #2: Tell YARN to be less strict about memory consumption</p>
-<p>Here are the properties in <code>yarn-site.xml</code> which we set to allow
YARN
-to schedule more role instances than it nominally has room for.</p>
-<div class="codehilite"><pre><span class="nt"><property></span>
- <span
class="nt"><name></span>yarn.scheduler.minimum-allocation-mb<span
class="nt"></name></span>
- <span class="nt"><value></span>1<span class="nt"></value></span>
-<span class="nt"></property></span>
-<span class="nt"><property></span>
- <span class="nt"><description></span>Whether physical memory limits
will be enforced for
- containers.
- <span class="nt"></description></span>
- <span class="nt"><name></span>yarn.nodemanager.pmem-check-enabled<span
class="nt"></name></span>
- <span class="nt"><value></span>false<span
class="nt"></value></span>
-<span class="nt"></property></span>
-<span class="c"><!-- we really don't want checking here--></span>
-<span class="nt"><property></span>
- <span class="nt"><name></span>yarn.nodemanager.vmem-check-enabled<span
class="nt"></name></span>
- <span class="nt"><value></span>false<span
class="nt"></value></span>
-<span class="nt"></property></span>
-</pre></div>
-
-
-<p>If you create too many instances, your hosts will start swapping and
-performance will collapse -we do not recommend using this in production.</p>
<h3 id="configuring-yarn-for-better-debugging">Configuring YARN for better
debugging<a class="headerlink" href="#configuring-yarn-for-better-debugging"
title="Permanent link">¶</a></h3>
<p>One configuration to aid debugging is to tell the nodemanagers to
keep data for a short period after containers finish</p>
@@ -306,8 +308,9 @@ The syntax for using the wrapper is:</p>
<p>where hbasesliderapp is the name of the Slider HBase instance.
The script retrieves hbase-site.xml and runs the HBase shell command.</p>
-<p>You can issue the following command to see supported options:
- ./hbase-slider</p>
+<p>You can issue the following command to see supported options:</p>
+<div class="codehilite"><pre><span class="o">./</span><span
class="n">hbase</span><span class="o">-</span><span class="n">slider</span>
+</pre></div>
</div>
<div id="footer">