Author: elserj
Date: Sun Jan 10 19:22:13 2016
New Revision: 1723954

URL: http://svn.apache.org/viewvc?rev=1723954&view=rev
Log:
More grammar/nit-picky fixes to the placement doc.

Modified:
    incubator/slider/site/trunk/content/docs/placement.md

Modified: incubator/slider/site/trunk/content/docs/placement.md
URL: http://svn.apache.org/viewvc/incubator/slider/site/trunk/content/docs/placement.md?rev=1723954&r1=1723953&r2=1723954&view=diff
==============================================================================
--- incubator/slider/site/trunk/content/docs/placement.md (original)
+++ incubator/slider/site/trunk/content/docs/placement.md Sun Jan 10 19:22:13 2016
@@ -38,7 +38,7 @@ the cluster. For details on the implemen
 
 A Slider Application Instance consists of the Application Master —Slider's code— and
 the components requested in the `resources.json` file. For each component,
-Slider requests a new YARN Container, which is then allocated in the Hadoo cluster
+Slider requests a new YARN Container, which is then allocated in the Hadoop cluster
 by YARN. The Slider application starts the component within the container,
 and monitors its lifecycle.
 
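For context, here is a minimal sketch of the `resources.json` structure the hunk above refers to. The component name and sizing values are illustrative assumptions, not taken from the patch; Slider requests one YARN container per declared instance of each named component:

```json
{
  "schema": "http://example.org/specification/v2.0.0",
  "global": {},
  "components": {
    "slider-appmaster": {},
    "ECHO": {
      "yarn.component.instances": "2",
      "yarn.memory": "256",
      "yarn.vcores": "1"
    }
  }
}
```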
@@ -59,7 +59,7 @@ there is no capacity on nodes with the l
 —even if there is space elsewhere.
 
 1. **Named Nodes/Racks**. Slider lists the explicit hosts and racks upon which
-a component can be allocated. If there is no capacity in these nodes locations,
+a component can be allocated. If there is no capacity in these locations,
 the request will be unsatisfied —even if there is space elsewhere.
 
 1. **Named Nodes/Racks with relaxation**. Slider lists the explicit hosts and
@@ -99,7 +99,7 @@ placement ensures that there is no confl
 prevent any conflict across component types, or between Slider/YARN applications.
 
 
-Anothe need is history-based placement: re-instantiating component instances on
+Another need is history-based placement: re-instantiating component instances on
 the machine(s) on which they last ran.
 This can deliver performance and startup advantages, or, for
 some applications, be essential to recover data persisted on the server.
@@ -127,12 +127,12 @@ which instances were created, rather tha
 Servers were previously running on Node 17". On restart Slider can simply request
 one instance of a Region Server on a specific node, leaving the other instance
 to be arbitrarily deployed by YARN. This strategy should help reduce the affinity
-in the component deployment, so increase their resilience to failure.
+in the component deployment, increasing their resilience to failure.
 
 1. There is no need to make sophisticated choices on which nodes to request
 re-assignment —such as recording the amount of data persisted by a previous
 instance and prioritizing nodes based on such data. More succinctly 'the
-only priority needed when asking for nodes is *ask for the most recently used*.
+only priority needed when asking for nodes is *ask for the most recently used*'.
 
 1. If a component instance fails on a specific node, asking for a new container on
 that same node is a valid first attempt at a recovery strategy.
@@ -144,9 +144,10 @@ that same node is a valid first attempt
 The policy for placing a component can be set in the resources.json file,
 via the `yarn.component.placement.policy` field.
 
-Here are the currently supported placement policies, with their
+Here are the currently supported placement policies.
 
 Note that
+
 1. These are orthogonal to labels: when "anywhere" is used, it means
 "anywhere that the label expression permits".
 1. The numbers are (currently) part of a bit-mask. Other combinations may
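As a sketch of how the `yarn.component.placement.policy` field mentioned in this hunk is set per component: the component name below is illustrative, and the value `1` is assumed here to be the strict-placement bit from the policy list this hunk truncates:

```json
{
  "components": {
    "HBASE_REGIONSERVER": {
      "yarn.component.instances": "4",
      "yarn.component.placement.policy": "1"
    }
  }
}
```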
@@ -185,7 +186,7 @@ failed, then the request will never be s
 New instances (i.e. ones for which there is no historical placement)
 will be requested anywhere.
 
-Note: As of Jan 2016 tThere is no anti-affinity placement when expanding
+Note: As of Jan 2016, there is no anti-affinity placement when expanding
 strict placements [SLIDER-980](https://issues.apache.org/jira/browse/SLIDER-980).
 It's possible to work around this by requesting the initial set of instances
 using anti-affinity, then editing resources.json to switch to strict placements
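The workaround in the closing lines might look like the following two-step edit, assuming (as above) that `4` is the anti-affinity bit and `1` the strict bit; the component name is again illustrative. First request the initial instances with anti-affinity:

```json
{
  "components": {
    "HBASE_REGIONSERVER": {
      "yarn.component.instances": "3",
      "yarn.component.placement.policy": "4"
    }
  }
}
```

Then, once the instances have been allocated and their locations recorded in the placement history, edit `resources.json` to pin them:

```json
{
  "components": {
    "HBASE_REGIONSERVER": {
      "yarn.component.instances": "3",
      "yarn.component.placement.policy": "1"
    }
  }
}
```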

