Removes trailing whitespaces in docs

This closes: #39
Review: https://github.com/apache/incubator-myriad/pull/39


Project: http://git-wip-us.apache.org/repos/asf/incubator-myriad/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-myriad/commit/ea3d5ede
Tree: http://git-wip-us.apache.org/repos/asf/incubator-myriad/tree/ea3d5ede
Diff: http://git-wip-us.apache.org/repos/asf/incubator-myriad/diff/ea3d5ede

Branch: refs/heads/master
Commit: ea3d5ede1a38a1002f7bfd5ae2e6401e4c031cac
Parents: 46307ea
Author: Zhongyue Luo <zhongyue....@gmail.com>
Authored: Tue Nov 10 09:59:46 2015 +0800
Committer: smarella <smare...@maprtech.com>
Committed: Tue Nov 10 10:35:44 2015 -0800

----------------------------------------------------------------------
 docs/API.md                                     | 72 ++++++++++----------
 docs/config-jobhistoryserver-services.md        |  2 +-
 docs/control-plane-algorithm.md                 |  2 +-
 docs/getting-started.md                         | 16 ++---
 docs/ha-config.md                               |  8 +--
 docs/how-it-works.md                            | 10 +--
 docs/install-overview.md                        |  4 +-
 docs/myriad-configuration.md                    |  4 +-
 docs/myriad-dashboard.md                        |  6 +-
 docs/myriad-dev.md                              | 10 +--
 docs/myriad-fine-grained-scaling.md             |  4 +-
 docs/myriad-overview.md                         |  7 +-
 .../myriad-remote-distribution-configuration.md | 10 +--
 docs/myriad-scheduler-architecture.md           |  2 +-
 docs/node-manager-profiles.md                   |  2 +-
 docs/sample-yarn-site.md                        |  6 +-
 docs/vagrant.md                                 |  6 +-
 17 files changed, 85 insertions(+), 86 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/API.md
----------------------------------------------------------------------
diff --git a/docs/API.md b/docs/API.md
index f732210..fc2e343 100644
--- a/docs/API.md
+++ b/docs/API.md
@@ -15,7 +15,7 @@ API | HTTP Method | URI | Description |
 
 
 
-  
+
 ## Cluster API
 
 The Cluster REST API uses the PUT /api/cluster/flexup and flexdown HTTP method and URI to expand and shrink the cluster size.
@@ -24,7 +24,7 @@ The Cluster REST API uses the PUT /api/cluster/flexup and flexdown HTTP method a
 
 ```
 PUT /api/cluster/flexup      // Expands the size of the YARN cluster.
- 
+
 PUT /api/cluster/flexdown    // Shrinks the size of the YARN cluster.
 ```
 
@@ -56,25 +56,25 @@ constraints | (Optional) Array definition for a single constraint using the LIKE
 Curl request example to flexup two instances with the profile set to small:
 
 ```
-curl -X PUT http://10.10.100.19:8192/api/cluster/flexup 
-    -d instances=2 
+curl -X PUT http://10.10.100.19:8192/api/cluster/flexup
+    -d instances=2
     -d profile=small
 ```
 
 Curl request example to flexdown one instance with the profile set to small:
 
 ```
-curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown 
-    -d instances=1 
+curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown
+    -d instances=1
     -d profile=small
 ```
 
 Curl request example to launch two (2) Node Managers with profile set to large only on specific hosts, host-120 through host-129:
 
 ```
-curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown 
-    -d instances=2 
-    -d profile=large 
+curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown
+    -d instances=2
+    -d profile=large
     -d constraints=["hostname LIKE host-12[0-9].example.com"]
 ```
 
@@ -115,7 +115,7 @@ Launches a Node Manager with the profile set to medium on any host in the Mesos
 ```
 {profile: "medium", instances:1}
 ```
- 
+
 Launches a Node Manager with the profile set to small on any host in the Mesos cluster:
 
 ```
@@ -136,7 +136,7 @@ Launches four (4) Node Managers with profile set to large only on specific hosts
 
 ```
 {
-    "instances":4, 
+    "instances":4,
     "profile": "large",
     "constraints": ["hostname LIKE host-12[0-9].example.com"]
 }
@@ -146,7 +146,7 @@ Launches two (2) Node Managers with profile set to zero only on hosts sharing a
 
 ```
 {
-    "instances":2, 
+    "instances":2,
     "profile": "zero",
     "constraints": ["hdfs LIKE true"]
 }
@@ -169,10 +169,10 @@ The Cluster REST API uses the PUT /api/cluster/flexup and flexdown HTTP method a
 
 ```
 PUT /api/cluster/flexup      // Expands the size of the YARN cluster.
- 
+
 PUT /api/cluster/flexdown    // Shrinks the size of the YARN cluster.
 ```
- 
+
 Parameters include:
 
 
@@ -181,10 +181,10 @@ Parameter | Description |
 profile          | (Required) If a profile value is not specified, the API returns an error. The profile indicates the amount of resources (CPU or memory) a Node Manager should advertise to the Resource Manager. Default profiles: zero, small, medium, large. These default profiles (zero, small, medium, and large) are defined in the myriad-config-default.yml file. The resources associated with these default profiles can be modified; additionally, new profiles can be defined. |
 instances| (Required) The number of Node Manager instances to launch. Each Node Manager instance advertises the amount of resources specified in the profile. The value is a number in the range of zero (0) to the number of Mesos slave nodes.|
 constraints    | (Optional) Array definition for a single constraint using the LIKE operator constraint format: <mesos_slave_attribute|hostname> LIKE <value_regex>. The hostname constraint is used to launch Node Managers on nodes whose hostname matches the regex passed in as value. See common Mesos slave attributes (http://mesos.apache.org/documentation/attributes-resources) for more information. |
- 
- 
+
+
 ### Syntax
- 
+
  ```
  <resource_manager_host>:8192/api/cluster/flexup
     profile=<zero|small|medium|large>
@@ -202,28 +202,28 @@ constraints       | (Optional) Array definition for a single constraint using the LIKE
 Curl request example to flexup two instances with the profile set to small:
 
 ```
-curl -X PUT http://10.10.100.19:8192/api/cluster/flexup 
-    -d instances=2 
+curl -X PUT http://10.10.100.19:8192/api/cluster/flexup
+    -d instances=2
     -d profile=small
 ```
- 
+
 Curl request example to flexdown one instance with the profile set to small:
 
 ```
-curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown 
-    -d instances=1 
+curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown
+    -d instances=1
     -d profile=small
 ```
- 
+
 Curl request example to launch two (2) Node Managers with profile set to large only on specific hosts, host-120 through host-129:
 
 ```
-curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown 
-    -d instances=2 
-    -d profile=large 
+curl -X PUT http://10.10.100.19:8192/api/cluster/flexdown
+    -d instances=2
+    -d profile=large
     -d constraints=["hostname LIKE host-12[0-9].example.com"]
 ```
- 
+
 Request header to flexup:
 
 ```
@@ -261,7 +261,7 @@ Launches a Node Manager with the profile set to medium on any host in the Mesos
 ```
 {profile: "medium", instances:1}
 ```
- 
+
 Launches a Node Manager with the profile set to small on any host in the Mesos cluster:
 
 ```
@@ -282,7 +282,7 @@ Launches four (4) Node Managers with profile set to large only on specific hosts
 
 ```
 {
-    "instances":4, 
+    "instances":4,
     "profile": "large",
     "constraints": ["hostname LIKE host-12[0-9].example.com"]
 }
@@ -292,7 +292,7 @@ Launches two (2) Node Managers with profile set to zero only on hosts sharing a
 
 ```
 {
-    "instances":2, 
+    "instances":2,
     "profile": "zero",
     "constraints": ["hdfs LIKE true"]
 }
@@ -331,13 +331,13 @@ URL request example:
 ```
 http://10.10.100.19:8192/api/config
 ```
- 
+
 Curl request example:
 
 ```
 curl http://10.10.100.19:8192/api/config | python -m json.tool
 ```
- 
+
 Request header:
 
 ```
@@ -374,7 +374,7 @@ Accept-Language: en-US,en;q=0.8
         },
         "nodeManagerUri": {
             "present": false
-        }, 
+        },
     "nativeLibrary": "/usr/local/lib/libmesos.so",
     "nmInstances": {
         "medium": 1
@@ -394,7 +394,7 @@ Accept-Language: en-US,en;q=0.8
         }
     }
     "profiles": {
- 
+
         "large": {
             "cpu": "10",
             "mem": "12288"
@@ -456,7 +456,7 @@ Curl request example:
 ```
 curl http://10.10.100.19:8192/api/state | python -m json.tool
 ```
- 
+
 Request header:
 
 ```
@@ -469,7 +469,7 @@ User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/537.36 (
 Accept-Encoding: gzip, deflate, sdch
 Accept-Language: en-US,en;q=0.8
 ```
- 
+
 ### Response Example
 
 ```

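As an aside for readers skimming the API.md hunks above: the flexup request those curl examples format can be assembled programmatically. The host, port, endpoint, and parameter names below are taken from the doc's own examples; the helper function itself is an illustrative sketch, not code from the Myriad project.

```python
# Sketch of the Cluster REST API "flexup" request documented in docs/API.md.
# Host, endpoint, and parameter names come from the doc's curl examples;
# build_flexup_command is a hypothetical helper, for illustration only.

DEFAULT_PROFILES = ("zero", "small", "medium", "large")

def build_flexup_command(host, instances, profile, constraints=None):
    """Assemble the curl command the docs show for PUT /api/cluster/flexup."""
    if profile not in DEFAULT_PROFILES:
        # The API returns an error for unknown profiles; fail early here.
        raise ValueError("profile must be one of %s" % (DEFAULT_PROFILES,))
    parts = [
        "curl -X PUT http://%s/api/cluster/flexup" % host,
        "-d instances=%d" % instances,
        "-d profile=%s" % profile,
    ]
    if constraints:
        # Single LIKE constraint, matching the doc's array-literal form.
        parts.append('-d constraints=["%s"]' % constraints)
    return " ".join(parts)

cmd = build_flexup_command("10.10.100.19:8192", 2, "small")
```

Here `cmd` reproduces the doc's first flexup example on one line, and adding `constraints="hostname LIKE host-12[0-9].example.com"` reproduces the host-constrained variant.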
http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/config-jobhistoryserver-services.md
----------------------------------------------------------------------
diff --git a/docs/config-jobhistoryserver-services.md b/docs/config-jobhistoryserver-services.md
index 0aa042e..f30b858 100644
--- a/docs/config-jobhistoryserver-services.md
+++ b/docs/config-jobhistoryserver-services.md
@@ -24,7 +24,7 @@ services:
         maxInstances:       # If defined maximum number of instances this service can have per myriad framework
         command:            # Command to be executed
         serviceOptsName:    # Name of the env. variable that may need to be set for the service that will include env. settings
- 
+
 The following example defines the parameter for JobHistoryServer and TimeLineServer tasks:
 <!-- Define services as a task -->
 services:

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/control-plane-algorithm.md
----------------------------------------------------------------------
diff --git a/docs/control-plane-algorithm.md b/docs/control-plane-algorithm.md
index 140cede..18dfa8a 100644
--- a/docs/control-plane-algorithm.md
+++ b/docs/control-plane-algorithm.md
@@ -5,6 +5,6 @@ _This is a working draft_
 Notes:
 - Each registered YARN ResourceManager will have a minimum quota associated with it. Quota can be expressed in terms of CPU or Memory, or as a minimum number of NodeManagers which are of profile X or higher.
 - Myriad will monitor the registered ResourceManagers to determine if a ResourceManager needs more resources or has excess unused resources. It will then use this information to flex-up or flex-down the NodeManagers, either horizontally or vertically.
-- When making a decision to flex-down NodeManagers, the algorithm will take into consideration the kind of containers that are currently running under the chosen NodeManager. 
+- When making a decision to flex-down NodeManagers, the algorithm will take into consideration the kind of containers that are currently running under the chosen NodeManager.
   - If the NodeManager is running an AppMaster container, it will be skipped, as killing it would kill all of its child containers. This needs to be reconsidered once [YARN-1489](https://issues.apache.org/jira/browse/YARN-1489) gets resolved.
   - A NodeManager should be chosen for flex-down if reconfiguring and restarting it has minimal impact compared to other NodeManagers. This strategy can be reconsidered once [YARN-1336](https://issues.apache.org/jira/browse/YARN-1336) gets resolved.

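The flex-down notes in the control-plane draft above amount to a simple selection rule: skip NodeManagers hosting an AppMaster, then prefer the one whose restart hurts least. A minimal sketch, assuming hypothetical field names (Myriad's actual data structures are not shown in the doc):

```python
# Illustrative sketch of the flex-down selection rule from the draft:
# 1. Skip NodeManagers running an AppMaster container (killing one would
#    kill all of its child containers, pending YARN-1489).
# 2. Among the rest, pick the node with minimal impact; here "impact" is
#    approximated by the running-container count (a stand-in metric).
# The dict keys are hypothetical, not Myriad's real model.

def choose_flexdown_candidate(node_managers):
    eligible = [nm for nm in node_managers if not nm["has_app_master"]]
    if not eligible:
        return None  # nothing safe to flex down
    return min(eligible, key=lambda nm: nm["running_containers"])["host"]

nodes = [
    {"host": "nm1", "has_app_master": True,  "running_containers": 1},
    {"host": "nm2", "has_app_master": False, "running_containers": 5},
    {"host": "nm3", "has_app_master": False, "running_containers": 2},
]
```

With this input, `nm1` is skipped despite being smallest (it runs an AppMaster), and `nm3` is chosen over `nm2`.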
http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/getting-started.md
----------------------------------------------------------------------
diff --git a/docs/getting-started.md b/docs/getting-started.md
index 064d814..cda9d25 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -23,8 +23,8 @@ Myriad | 8192 | http://<IP address>:8192. For example: http://<ip address>:8192/
 
 ## Launching Resource Manager ##
 
-If you are using Marathon, launch Marathon and run an initial Resource Manager application. The Resource Manager can be launched or stopped from either the command line or the Marathon UI. 
- 
+If you are using Marathon, launch Marathon and run an initial Resource Manager application. The Resource Manager can be launched or stopped from either the command line or the Marathon UI.
+
 ### Launching from the Command Line ###
 
 
@@ -33,13 +33,13 @@ To start the Resource Manager, run the YARN daemon from the command line:
 ```
 yarn-daemon.sh start resourcemanager
 ```
- 
+
 To shut down the Resource Manager, run the YARN daemon from the command line:
 
 ```
 yarn-daemon.sh stop resourcemanager
 ```
- 
+
 ### Launching from Marathon ###
 
 Alternatively, start and stop Myriad from the Marathon UI. See Marathon: Application Basics for more information. For example, create an application to start the Resource Manager:
@@ -47,7 +47,7 @@ Alternatively, start and stop Myriad from the Marathon UI. See Marathon: Applica
 ```
 cd hadoop-2.7.0/sbin && yarn-daemon.sh start resourcemanager
 ```
- 
+
 Alternatively, when launching the Resource Manager in an HA environment, specify a value for the `yarn.resourcemanager.hostname` property. The hostname is the ID field specified when launching a Marathon application.
 
 To initially launch the Resource Manager from Marathon:
@@ -160,7 +160,7 @@ To flexup and flexdown instances via the Myriad UI, go to the Flex button on the
 ### REST API ###
 
 To scale a cluster up or down, use the Myriad Cluster API. The [Cluster API](API.md) provides flexup and flexdown capability that changes the size of one or more instances in a cluster. These predefined values are specified in the Myriad configuration file (**myriad-config-default.yml**). To retrieve the Myriad configuration and the Myriad Scheduler state, use the Configuration API and State API.
- 
+
 The HTTP method and URIs for flexing up and down are:
 
 ```
@@ -168,8 +168,8 @@ PUT /api/cluster/flexup
 
 PUT /api/cluster/flexdown
 ```
- 
 
 
 
- 
+
+

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/ha-config.md
----------------------------------------------------------------------
diff --git a/docs/ha-config.md b/docs/ha-config.md
index f4fc109..3e0e237 100644
--- a/docs/ha-config.md
+++ b/docs/ha-config.md
@@ -14,7 +14,7 @@ On failover, the following occurs:
 
 ## Prerequisites ##
    * Deploy mesos-master, mesos-slave (per node), zookeeper, marathon, and mesos-dns on your cluster.
-  
+
 ## Setting Up Mesos-DNS ##
 
 **Step 1:** Create a directory for Mesos-DNS. For example, /etc/mesos-dns.
@@ -44,7 +44,7 @@ On failover, the following occurs:
 **Note:** Add the entries at the top (in the beginning) of the /etc/resolv.conf file. If the entries are not at the top, Mesos-DNS may not work correctly.
 
 ## Configuring HA ##
-Configuring Myriad for HA involves adding HA configuration properties to the $YARN_HOME/etc/hadoop/yarn-site.xml file and the $YARN_HOME/etc/hadoop/myriad-config-default.yml file. 
+Configuring Myriad for HA involves adding HA configuration properties to the $YARN_HOME/etc/hadoop/yarn-site.xml file and the $YARN_HOME/etc/hadoop/myriad-config-default.yml file.
 
 ### Modify yarn-site.xml ###
 
@@ -72,7 +72,7 @@ To the $YARN_HOME/etc/hadoop/yarn-site.xml file, add the following properties:
  &lt;/property> -->
 </pre>
 
- 
+
 ### Modify myriad-config-default.yml ###
 
 To the $YARN_HOME/etc/hadoop/myriad-config-default.yml file, modify the following values:
@@ -84,5 +84,5 @@ haEnabled: true
 
 **Note:** The Myriad Mesos frameworkFailoverTimeout parameter is specified in milliseconds. This parameter indicates to Mesos that Myriad will failover within this time interval.
 
- 
+
 

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/how-it-works.md
----------------------------------------------------------------------
diff --git a/docs/how-it-works.md b/docs/how-it-works.md
index 45f3ed6..be2346f 100644
--- a/docs/how-it-works.md
+++ b/docs/how-it-works.md
@@ -10,10 +10,10 @@ The following diagram shows two resource managers running independently which re
 
 ## Advertising Resources: Mesos Slave and YARN Node Manager
 
-The Mesos Slave and YARN’s Node Manager are processes that run on the host OS. Both processes advertise available resources to the Mesos Master and YARN Resource Manager respectively. Each process can be configured to advertise a subset of resources. This ability is leveraged, in conjunction with Cgroups, to allow the Mesos Slave and YARN Node Manager to co-exist on a node. 
+The Mesos Slave and YARN’s Node Manager are processes that run on the host OS. Both processes advertise available resources to the Mesos Master and YARN Resource Manager respectively. Each process can be configured to advertise a subset of resources. This ability is leveraged, in conjunction with Cgroups, to allow the Mesos Slave and YARN Node Manager to co-exist on a node.
 
-* The Mesos Slave process advertises all of a node’s resources (8 CPUs, 16 GB RAM) to the Mesos Master. 
-* The YARN Node Manager is started as a Mesos Task. This task is allotted (4 CPUs and 8 GB RAM) and the Node Manager is configured to only advertise 3 CPUs and 7 GB RAM. 
+* The Mesos Slave process advertises all of a node’s resources (8 CPUs, 16 GB RAM) to the Mesos Master.
+* The YARN Node Manager is started as a Mesos Task. This task is allotted (4 CPUs and 8 GB RAM) and the Node Manager is configured to only advertise 3 CPUs and 7 GB RAM.
 * The Node Manager is also configured to mount the YARN containers under the [cgroup hierarchy](cgroups.md) which stems from a Mesos task. For example:
 
 ```bash
@@ -27,7 +27,7 @@ The following diagram shows a node running YARN NodeManager as a Mesos Slave task
 
 ## High Level Design
 
-One way to avoid static partitioning and to enable resource sharing when running two resource managers is to let one resource manager be in absolute control of the datacenter’s resources. The other resource manager then manages a subset of resources, allocated to it through the primary resource manager. 
+One way to avoid static partitioning and to enable resource sharing when running two resource managers is to let one resource manager be in absolute control of the datacenter’s resources. The other resource manager then manages a subset of resources, allocated to it through the primary resource manager.
 
 The following diagram shows a scenario where Mesos is used as the resource manager for the datacenter, which allows both Mesos and YARN to schedule tasks on any node.
 
@@ -37,7 +37,7 @@ Each node in the cluster has both daemons, Mesos Slave and YARN Node Manager, in
 
 The following diagram shows how Myriad launches a YARN Node Manager as a task under Mesos Slave:
 
-1. Myriad makes a decision to launch a new NodeManager.  
+1. Myriad makes a decision to launch a new NodeManager.
        * Myriad passes the required configuration and task launch information to the Mesos Master which forwards that to the Mesos Slave(s).
        * Mesos Slave launches Myriad Executor which manages the lifecycle of the NodeManager.
        * Myriad Executor, upon launch, configures the Node Manager (for example, specifying CPU and memory to advertise, Cgroups hierarchy, and so on) and then launches it. For example: In the following diagram, Node Manager is allotted 2.5 CPU and 2.5 GB RAM.

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/install-overview.md
----------------------------------------------------------------------
diff --git a/docs/install-overview.md b/docs/install-overview.md
index 32fae6f..f01c94d 100644
--- a/docs/install-overview.md
+++ b/docs/install-overview.md
@@ -25,10 +25,10 @@
 
 * Marathon -- 8080
 * Mesos -- 5050
-* Myriad -- 8192 
+* Myriad -- 8192
 
 **Note:** If your environment has both Marathon and Spark installed on the same node, a conflict occurs because the default port for both is 8080. To resolve this conflict, change the port for one of the applications.
- 
+
 ## General Tasks ##
 
 The following is an overview of the general installation and configuration tasks needed for setting up and configuring Myriad:

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-configuration.md
----------------------------------------------------------------------
diff --git a/docs/myriad-configuration.md b/docs/myriad-configuration.md
index 5bdfb4a..1aa214a 100644
--- a/docs/myriad-configuration.md
+++ b/docs/myriad-configuration.md
@@ -1,6 +1,6 @@
 # Sample: myriad-config-default.yml
 
-Myriad Scheduler (the component that plugs into the Resource Manager process) exposes configuration properties that administrators can modify. It expects a file **myriad-config-default.yml** to be present on the Resource Manager's Java classpath. 
+Myriad Scheduler (the component that plugs into the Resource Manager process) exposes configuration properties that administrators can modify. It expects a file **myriad-config-default.yml** to be present on the Resource Manager's Java classpath.
 
 Currently, this file is built into the Myriad Scheduler jar. So, if you need to modify some of the properties in this file, modify them **before** building Myriad Scheduler. This sample **myriad-config-default.yml** is a standard configuration
 
@@ -45,7 +45,7 @@ profiles:
 nmInstances:
     medium: 1
 # Whether to turn on myriad's auto-rebalancer feature.
-# Currently it's work-in-progress and should be set to 'false'.   
+# Currently it's work-in-progress and should be set to 'false'.
 rebalancer: false
 haEnabled: false
 # Properties for the Node Manager process that's launched by myriad as a result of 'flex up' REST call.

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-dashboard.md
----------------------------------------------------------------------
diff --git a/docs/myriad-dashboard.md b/docs/myriad-dashboard.md
index 2dc8e81..14acfa7 100644
--- a/docs/myriad-dashboard.md
+++ b/docs/myriad-dashboard.md
@@ -4,8 +4,8 @@ The Myriad webapp is a [React](http://facebook.github.io/react/) single page app
 
 ## Building
 
-The app uses [NPM](https://www.npmjs.com/) to manage dependencies and [Gulp](http://gulpjs.com/) to assemble the distribution. 
-The app is served from the webapp/public directory. 
+The app uses [NPM](https://www.npmjs.com/) to manage dependencies and [Gulp](http://gulpjs.com/) to assemble the distribution.
+The app is served from the webapp/public directory.
 To get set up, install `npm` and `gulp`, and from the webapp directory execute
 
 ```
@@ -22,7 +22,7 @@ files change. To launch simply run
 gulp dev
 ```
 
-A browser window should open with the site loaded. If not, it uses [port 8888](http://localhost:8888) 
+A browser window should open with the site loaded. If not, it uses [port 8888](http://localhost:8888)
 It is helpful to have Myriad set up in Vagrant locally so the API is available. Default values are coded into
 the dashboard if the Myriad API isn't available.
 

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-dev.md
----------------------------------------------------------------------
diff --git a/docs/myriad-dev.md b/docs/myriad-dev.md
index 33c0d8b..a69b266 100644
--- a/docs/myriad-dev.md
+++ b/docs/myriad-dev.md
@@ -43,7 +43,7 @@ To build Myriad Scheduler, from $PROJECT_HOME/myriad-scheduler run:
 
 ### Building Myriad Executor Only
 
-The `./gradlew build` command builds the **myriad-executor-runnable-xxx.jar** and places it inside the **$PROJECT_HOME/myriad-executor/build/libs/** directory. 
+The `./gradlew build` command builds the **myriad-executor-runnable-xxx.jar** and places it inside the **$PROJECT_HOME/myriad-executor/build/libs/** directory.
 
 To build Myriad Executor individually as a self-contained executor jar, from $PROJECT_HOME/myriad-executor, run:
 
@@ -56,7 +56,7 @@ To build Myriad Executor individually as a self-contained executor jar, from $PR
 
 To deploy Myriad Scheduler and Executor files:
 
-1. Copy the Myriad Scheduler jar files from the $PROJECT_HOME/myriad-scheduler/build/libs/ directory to the $YARN_HOME/share/hadoop/yarn/lib/ directory on all nodes in your cluster. 
+1. Copy the Myriad Scheduler jar files from the $PROJECT_HOME/myriad-scheduler/build/libs/ directory to the $YARN_HOME/share/hadoop/yarn/lib/ directory on all nodes in your cluster.
 2. Copy the Myriad Executor myriad-executor-xxx.jar file from the $PROJECT_HOME/myriad-executor/build/libs/ directory to each mesos slave's $YARN_HOME/share/hadoop/yarn/lib/ directory.
 3. Copy the myriad-config-default.yml file from $PROJECT_HOME/myriad-scheduler/build/src/main/resources/ directory to the $YARN_HOME/etc/hadoop directory.
 
@@ -68,7 +68,7 @@ cp myriad-executor/build/libs/myriad-executor-0.1.0.jar /opt/hadoop-2.7.0/share/
 cp myriad-scheduler/build/resources/main/myriad-config-default.yml /opt/hadoop-2.7.0/etc/hadoop/
 ```
 
-**NOTE:** For advanced users, you can also copy myriad-executor-xxx.jar to any other directory on a slave filesystem or it can be copied to HDFS as well. In either case, you need to update the executor's path property in the myriad-config-default.yml file and prepend the path with either file:// or hdfs://, as appropriate. 
+**NOTE:** For advanced users, you can also copy myriad-executor-xxx.jar to any other directory on a slave filesystem or it can be copied to HDFS as well. In either case, you need to update the executor's path property in the myriad-config-default.yml file and prepend the path with either file:// or hdfs://, as appropriate.
 
 
 ## Step 3: Configure the Myriad Defaults
@@ -81,7 +81,7 @@ As a minimum, the following Myriad configuration parameters must be set:
 
 Enabling Cgroups involves modifying the yarn-site.xml and **myriad-config-default.yml** files. If you plan on using Cgroups, you could set that property at this time. See [Configuring Cgroup](cgroups.md) for more information.
 
-**NOTE:** By copying the **myriad-config-default.yml** file to the **/etc/hadoop** directory, you can make changes to the configuration file without having to rebuild Myriad. If you specify the Myriad configuration parameters before building Myriad, you must rebuild Myriad and redeploy the jar files. This is required because the **myriad-config-default.yml** file is embedded into the Myriad Scheduler jar. 
+**NOTE:** By copying the **myriad-config-default.yml** file to the **/etc/hadoop** directory, you can make changes to the configuration file without having to rebuild Myriad. If you specify the Myriad configuration parameters before building Myriad, you must rebuild Myriad and redeploy the jar files. This is required because the **myriad-config-default.yml** file is embedded into the Myriad Scheduler jar.
 
 
 ## Step 4: Configure YARN to Use Myriad
@@ -136,7 +136,7 @@ export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
 
        1. On each node, change directory to $YARN_HOME/etc/hadoop.
        2. Copy mapred-site.xml.template to mapred-site.xml.
-       3. Edit and add the following property to the mapred-site.xml file. 
+       3. Edit and add the following property to the mapred-site.xml file.
 
 ```
 // Add following to $YARN_HOME/etc/hadoop/mapred-site.xml:

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-fine-grained-scaling.md
----------------------------------------------------------------------
diff --git a/docs/myriad-fine-grained-scaling.md b/docs/myriad-fine-grained-scaling.md
index 6f3bbea..f9177b7 100644
--- a/docs/myriad-fine-grained-scaling.md
+++ b/docs/myriad-fine-grained-scaling.md
@@ -2,7 +2,7 @@
 
 The objective of fine-grained scaling is to bring elasticity of resources between YARN and other Mesos frameworks. With fine-grained scaling, YARN takes resource offers from Mesos, runs as many containers (YARN tasks) as the offers can hold, and releases the resources back to Mesos once the containers finish.
 
-* Node Managers that register with the Resource Manager with (0 memory, 0 CPU) are eligible for fine-grained scaling; that is, Myriad expands and shrinks the capacity of the Node Managers with the resources offered by Mesos. Further, Myriad ensures that YARN containers are launched on the Node Managers only if Mesos offers enough resources on the slave nodes running those Node Managers. 
+* Node Managers that register with the Resource Manager with (0 memory, 0 CPU) are eligible for fine-grained scaling; that is, Myriad expands and shrinks the capacity of the Node Managers with the resources offered by Mesos. Further, Myriad ensures that YARN containers are launched on the Node Managers only if Mesos offers enough resources on the slave nodes running those Node Managers.
 * A zero profile, as well as small, medium, and large profiles, are defined in the Myriad configuration file, myriad-config-default.yml. A zero profile allows administrators to launch Node Managers with (0 memory, 0 CPU) capacities. To modify the profile, use the Cluster REST /api/cluster/flexup command.
 * Node Managers that register with the Resource Manager with more than (0 memory, 0 CPU) are not eligible for fine-grained scaling; that is, Myriad does not expand and shrink the capacity of these Node Managers. Node Managers are typically launched with a low, medium, or high profile.
 
@@ -12,7 +12,7 @@ The administrator launches Node Managers with zero capacity (via the REST /api/c
 
 When a user submits an application to YARN (for example, a MapReduce job), the following occurs:
 
-1. The application is added to the Resource Manager's scheduling pipeline. 
+1. The application is added to the Resource Manager's scheduling pipeline.
        * If a Node Manager has a zero profile, the YARN scheduler (for example, FairShareScheduler) does not allocate any application containers.
        * If a Node Manager has a non-zero capacity (low, medium, or high profiles), containers might be allocated for those Node Managers depending on their free capacity.
 2. Myriad receives resource offers from Mesos for slave nodes running zero profile Node Managers.

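The eligibility rule in the fine-grained-scaling doc above reduces to a check on the registered profile's capacity. A minimal sketch, assuming hypothetical helper names; the `large` values come from the config sample earlier in this commit (cpu 10, mem 12288), while the `small` and `medium` numbers are placeholders:

```python
# Sketch of the rule above: only Node Managers registered with a zero
# profile (0 memory, 0 CPU) take part in fine-grained scaling.
# "large" matches the doc's sample config; "small"/"medium" are
# illustrative placeholders, and the function name is hypothetical.

PROFILES = {
    "zero":   {"cpu": 0,  "mem": 0},
    "small":  {"cpu": 1,  "mem": 1024},   # placeholder values
    "medium": {"cpu": 2,  "mem": 2048},   # placeholder values
    "large":  {"cpu": 10, "mem": 12288},  # from the sample config response
}

def eligible_for_fine_grained(profile_name):
    """A Node Manager is eligible only if it advertises no static capacity."""
    p = PROFILES[profile_name]
    return p["cpu"] == 0 and p["mem"] == 0
```

Under this rule a zero-profile Node Manager grows and shrinks with Mesos offers, while any non-zero profile keeps its fixed capacity.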
http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-overview.md
----------------------------------------------------------------------
diff --git a/docs/myriad-overview.md b/docs/myriad-overview.md
index 75f1755..0f474eb 100644
--- a/docs/myriad-overview.md
+++ b/docs/myriad-overview.md
@@ -1,6 +1,6 @@
 # Myriad Overview #
 
-Apache Myriad enables the co-existence of Apache Hadoop and Apache Mesos on the physical infrastructure. By running Hadoop YARN as a Mesos framework, YARN applications and Mesos frameworks can run side-by-side, dynamically sharing cluster resources. 
+Apache Myriad enables the co-existence of Apache Hadoop and Apache Mesos on the physical infrastructure. By running Hadoop YARN as a Mesos framework, YARN applications and Mesos frameworks can run side-by-side, dynamically sharing cluster resources.
 
 With Apache Myriad, you can:
 
@@ -19,8 +19,8 @@ Key features include:
        * Fine-grained - Administrators can provision thin node managers that are dynamically resized based on application demand.
 * High Availability (HA) and graceful restart of YARN daemons.
 * Ability to launch multiple YARN clusters on the same set of nodes.
-* Support for YARN FairScheduler and all functionality such as hierarchical queues with weights. 
-* Ability to deploy YARN Resource Manager using Marathon. This feature leverages Marathon's dynamic scheduling, process supervision, and integration with service discovery (Mesos-DNS). 
+* Support for YARN FairScheduler and all functionality such as hierarchical queues with weights.
+* Ability to deploy YARN Resource Manager using Marathon. This feature leverages Marathon's dynamic scheduling, process supervision, and integration with service discovery (Mesos-DNS).
 * Ability to run MapReduce v2 and associated libraries such as Hive, Pig, and Mahout.
 
 ## Use Cases ##
@@ -38,4 +38,3 @@ As organizations become more reliant on data processing 
technologies like Hadoop
 Using Myriad, these organizations can save money and increase agility by 
provisioning multiple logical Hadoop clusters on a single physical Mesos 
cluster with either shared or dedicated data services. Each logical cluster can 
be tailored to the end user, with a custom configuration and security policy, 
while running a specific version, and with either static or dynamic resources 
allocated to it.
 In a multi-tenant environment, this model means that a single pool of 
resources can be shared among many data processing frameworks, each 
capable of allocating additional resources when needed and releasing them when 
not. The top-level Mesos scheduler ensures fairness when multiple 
frameworks compete for resources.
 In case of a version migration (for example, upgrading only one of two Hadoop 
clusters), this model means that logical Hadoop clusters of different versions 
can be deployed side by side on top of the same shared data. Users can migrate 
workloads from old versions to new versions gradually, add resources to the new 
cluster, and take resources away from the old cluster. After all workloads are 
moved over, the old cluster can be decommissioned.
- 
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-remote-distribution-configuration.md
----------------------------------------------------------------------
diff --git a/docs/myriad-remote-distribution-configuration.md 
b/docs/myriad-remote-distribution-configuration.md
index 021c92a..a6ea9e8 100644
--- a/docs/myriad-remote-distribution-configuration.md
+++ b/docs/myriad-remote-distribution-configuration.md
@@ -1,6 +1,6 @@
 # Installing for Administrators
 
-The Myriad Scheduler can be configured to automatically download and run the 
Hadoop YARN binaries and get the Hadoop configuration from the resource 
manager. This means you won't have to install and configure Hadoop YARN on each 
machine. 
+The Myriad Scheduler can be configured to automatically download and run the 
Hadoop YARN binaries and get the Hadoop configuration from the resource 
manager. This means you won't have to install and configure Hadoop YARN on each 
machine.
 This process involves bundling Myriad and creating a tarball.
 
 * [Assumptions](#assumptions)
@@ -16,7 +16,7 @@ This information involves bundling Myriad and creating a 
tarball.
 
 The following are assumptions about your environment:
 
-* You are using hadoop-2.7.0 downloaded from 
[hadoop.apache.org](http://hadoop.apache.org).  Specific vendor versions should 
work but may require additional steps. 
+* You are using hadoop-2.7.0 downloaded from 
[hadoop.apache.org](http://hadoop.apache.org).  Specific vendor versions should 
work but may require additional steps.
 
 **NOTE:** The default location for $YARN_HOME is **/opt/hadoop-2.7.0**.
 
@@ -27,7 +27,7 @@ Before building Myriad, configure the Resource Manager as you 
normally would.
 From the project root you build Myriad with the commands
 
 ```
-./gradlew build  
+./gradlew build
 ```
 
 ### Step 2: Deploy the Myriad Files
@@ -42,7 +42,7 @@ cp 
myriad-scheduler/build/resources/main/myriad-config-default.yml /opt/hadoop-2
 
 ### Step 3: Configure the Myriad Defaults
 
-Edit the **$YARN_HOME/etc/hadoop/myriad-config-default.yml** file to configure 
the default parameters. See the sample [Myriad configuration 
file](myriad-configuration.md) for more information. To enable remote binary 
distribution, you must set the following options: 
+Edit the **$YARN_HOME/etc/hadoop/myriad-config-default.yml** file to configure 
the default parameters. See the sample [Myriad configuration 
file](myriad-configuration.md) for more information. To enable remote binary 
distribution, you must set the following options:
 
 
 ```YAML
@@ -52,7 +52,7 @@ frameworkUser: hduser                  # Should be the same 
user running the res
 executor:
   nodeManagerUri: hdfs://namenode:port/dist/hadoop-2.7.0.tar.gz
 yarnEnvironment:
-YARN_HOME: hadoop-2.7.0                # This should be relative if 
nodeManagerUri is set  
+YARN_HOME: hadoop-2.7.0                # This should be relative if 
nodeManagerUri is set
 ```
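
Read on its own, the YAML above is easy to misapply: in the diff, `YARN_HOME` appears flush-left, but YAML requires it to be nested under `yarnEnvironment` to take effect as part of that mapping. A hedged sketch of the complete remote-distribution block follows; the host name and user are placeholders carried over from the example, not prescribed values:

```YAML
frameworkUser: hduser                  # should match the user running the resource manager
executor:
  nodeManagerUri: hdfs://namenode:port/dist/hadoop-2.7.0.tar.gz
yarnEnvironment:
  YARN_HOME: hadoop-2.7.0              # relative path, since nodeManagerUri is set
```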
 
 

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/myriad-scheduler-architecture.md
----------------------------------------------------------------------
diff --git a/docs/myriad-scheduler-architecture.md 
b/docs/myriad-scheduler-architecture.md
index 3ee5733..ea2ce53 100644
--- a/docs/myriad-scheduler-architecture.md
+++ b/docs/myriad-scheduler-architecture.md
@@ -1,6 +1,6 @@
 # Fine-grained Scaling Architecture
 Myriad scheduler is comprised of components that interact with YARN and Mesos
-services. 
+services.
 
 ## Mesos Master Interactions
 

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/node-manager-profiles.md
----------------------------------------------------------------------
diff --git a/docs/node-manager-profiles.md b/docs/node-manager-profiles.md
index 6930c5d..591106d 100644
--- a/docs/node-manager-profiles.md
+++ b/docs/node-manager-profiles.md
@@ -10,7 +10,7 @@ The Node Manager profile is an abstraction for the amount of 
resources a Node Ma
   },
 ```
 
-The following default profiles are configurable. To change a profile, modify 
the Myriad configuration file, `myriad-config-default.yml`. 
+The following default profiles are configurable. To change a profile, modify 
the Myriad configuration file, `myriad-config-default.yml`.
 
 **Note:** If you modify the Myriad configuration file after the initial build, 
you must build and deploy again for the changes to take effect.
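
As an illustration of what such profiles look like, here is a hedged sketch of a `profiles` section in `myriad-config-default.yml`; the profile names and sizes are illustrative assumptions, not the shipped defaults:

```YAML
profiles:
  zero:          # thin Node Manager; capacity is granted via fine-grained scaling
    cpu: 0
    mem: 0
  small:
    cpu: 2
    mem: 2048
  large:
    cpu: 4
    mem: 4096
```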
 

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/sample-yarn-site.md
----------------------------------------------------------------------
diff --git a/docs/sample-yarn-site.md b/docs/sample-yarn-site.md
index f3c6fdf..51517ee 100644
--- a/docs/sample-yarn-site.md
+++ b/docs/sample-yarn-site.md
@@ -7,7 +7,7 @@ The following is a sample yarn-site.xml file.
 <pre>
 &lt;?xml version="1.0" encoding="UTF-8"?>
 &lt;configuration>
- 
+
 &lt;!-- Site-specific YARN configuration properties -->
   &lt;property>
         &lt;name>yarn.nodemanager.aux-services&lt;/name>
@@ -61,7 +61,7 @@ The following is a sample yarn-site.xml file.
         &lt;name>yarn.nodemanager.localizer.address&lt;/name>
         &lt;value>${myriad.yarn.nodemanager.localizer.address}&lt;/value>
     &lt;/property>
- 
+
 &lt;!-- Myriad Scheduler configuration -->
     &lt;property>
         &lt;name>yarn.resourcemanager.scheduler.class&lt;/name>
@@ -71,7 +71,7 @@ The following is a sample yarn-site.xml file.
     &lt;property>
   &lt;name>yarn.scheduler.minimum-allocation-vcores&lt;/name>
         &lt;value>0&lt;/value>
-    &lt;/property>    
+    &lt;/property>
     &lt;property>
         &lt;name>yarn.scheduler.minimum-allocation-vcores&lt;/name>
         &lt;value>0&lt;/value>

http://git-wip-us.apache.org/repos/asf/incubator-myriad/blob/ea3d5ede/docs/vagrant.md
----------------------------------------------------------------------
diff --git a/docs/vagrant.md b/docs/vagrant.md
index b30731d..2762ef9 100644
--- a/docs/vagrant.md
+++ b/docs/vagrant.md
@@ -29,7 +29,7 @@ The password for the vagrant user is **vagrant**
 
 To set up YARN/Hadoop inside the VM, run the following YARN setup shell scripts:
 
-1. Run the first YARN setup shell command from the vagrant directory to create 
a user hduser in group hadoop. Be sure to remember the password that you 
provide for this user. 
+1. Run the first YARN setup shell command from the vagrant directory to create 
a user hduser in group hadoop. Be sure to remember the password that you 
provide for this user.
 ```
 cd /vagrant
 ./setup-yarn-1.sh
@@ -77,13 +77,13 @@ cd /vagrant
 ./gradlew build
 ```
 
-**NOTE:** If a build failure occurs, the issue is not with the build 
itself, but a failure to write to disk. This can happen when you built outside 
the vagrant instance first. Exit the user `hduser` by typing `exit` and build 
again as the `vagrant` user. 
+**NOTE:** If a build failure occurs, the issue is not with the build 
itself, but a failure to write to disk. This can happen when you built outside 
the vagrant instance first. Exit the user `hduser` by typing `exit` and build 
again as the `vagrant` user.
 
 ### Step 2: Deploy the Myriad Files ###
 
 The Myriad Scheduler and Executor jar files, all the runtime dependencies, and 
the Myriad configuration file must be copied to $YARN_HOME.
 
-* The Myriad Scheduler jar and all the runtime dependencies are located at: 
+* The Myriad Scheduler jar and all the runtime dependencies are located at:
 
 ```
 /vagrant/myriad-scheduler/build/libs/*

