Author: tomwhite
Date: Wed Apr  3 18:03:21 2013
New Revision: 1464131

URL: http://svn.apache.org/r1464131
Log:
Merge -r 1464129:1464130 from trunk to branch-2. Fixes: YARN-381. Improve fair scheduler docs. Contributed by Sandy Ryza.

Modified:
    hadoop/common/branches/branch-2/hadoop-yarn-project/CHANGES.txt
    hadoop/common/branches/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm

Modified: hadoop/common/branches/branch-2/hadoop-yarn-project/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-yarn-project/CHANGES.txt?rev=1464131&r1=1464130&r2=1464131&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-yarn-project/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-yarn-project/CHANGES.txt Wed Apr  3 18:03:21 2013
@@ -57,6 +57,8 @@ Release 2.0.5-beta - UNRELEASED
     YARN-447. Move ApplicationComparator in CapacityScheduler to use comparator
     in ApplicationId. (Nemon Lou via vinodkv)
 
+    YARN-381. Improve fair scheduler docs. (Sandy Ryza via tomwhite)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: hadoop/common/branches/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm?rev=1464131&r1=1464130&r2=1464131&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm (original)
+++ hadoop/common/branches/branch-2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm Wed Apr  3 18:03:21 2013
@@ -85,8 +85,9 @@ Hadoop MapReduce Next Generation - Fair 
   cause too much intermediate data to be created or too much context-switching.
   Limiting the apps does not cause any subsequently submitted apps to fail, 
   only to wait in the scheduler's queue until some of the user's earlier apps 
-  finish. apps to run from each user/queue are chosen in order of priority and 
-  then submit time, as in the default FIFO scheduler in Hadoop.
+  finish. Apps to run from each user/queue are chosen in the same fair sharing
+  manner, but can alternatively be configured to be chosen in order of submit
+  time, as in the default FIFO scheduler in Hadoop.
 
   Certain add-ons are not yet supported which existed in the original (MR1) 
   Fair Scheduler. Among them, is the use of a custom policies governing 
@@ -142,7 +143,9 @@ Hadoop MapReduce Next Generation - Fair 
  * <<<yarn.scheduler.fair.sizebasedweight>>>
   
     * Whether to assign shares to individual apps based on their size, rather than
-      providing an equal share to all apps regardless of size. Defaults to false.
+      providing an equal share to all apps regardless of size. When set to true,
+      apps are weighted by the natural logarithm of one plus the app's total
+      requested memory, divided by the natural logarithm of 2. Defaults to false.
 
  * <<<yarn.scheduler.fair.assignmultiple>>>
 
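As a rough aside on the sizebasedweight formula documented in the hunk above (this worked example is editorial, not part of the patch): assuming requested memory is expressed in MB, an app requesting 8192 MB would get a weight of ln(1 + 8192) / ln(2), about 13.0, while an app requesting 1024 MB would get about 10.0, so larger apps receive moderately larger, but not proportionally larger, shares. Below is a minimal sketch of turning the option on, assuming the standard yarn-site.xml property format; the property name is the one documented above.

  <!-- yarn-site.xml sketch: enable size-based app weighting in the Fair Scheduler. -->
  <property>
    <name>yarn.scheduler.fair.sizebasedweight</name>
    <value>true</value>
  </property>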
@@ -180,16 +183,29 @@ Allocation file format
  * <<Queue elements>>, which represent queues. Each may contain the following
      properties:
 
-   * minResources: minimum amount of aggregate memory
+   * minResources: minimum MB of aggregate memory the queue expects. If a queue
+     demands resources, and its current allocation is below its configured minimum,
+     it will be assigned available resources before any queue that is not in this
+     situation.  If multiple queues are in this situation, resources go to the
+     queue with the smallest ratio between allocation and minimum. Note that it is
+     possible that a queue that is below its minimum may not immediately get up to
+     its minimum when it submits an application, because already-running jobs may
+     be using those resources.
 
-   * maxResources: maximum amount of aggregate memory
+   * maxResources: maximum MB of aggregate memory a queue is allowed.  A queue
+     will never be assigned a container that would put it over this limit.
 
    * maxRunningApps: limit the number of apps from the queue to run at once
 
-   * weight: to share the cluster non-proportionally with other queues
+   * weight: to share the cluster non-proportionally with other queues. Weights
+     default to 1, and a queue with weight 2 should receive approximately twice
+     as many resources as a queue with the default weight.
 
   * schedulingMode: either "fifo" or "fair" depending on the in-queue scheduling
-     policy desired
+     policy desired. Defaults to "fair". If "fifo", apps with earlier submit
+     times are given preference for containers, but apps submitted later may
+     run concurrently if there is leftover space on the cluster after satisfying
+     the earlier app's requests.
 
   * aclSubmitApps: a list of users that can submit apps to the queue. A (default)
     value of "*" means that any users can submit apps. A queue inherits the ACL of

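To make the allocation-file properties documented above concrete, here is a hedged sketch of an allocations file that uses them. Element names mirror the property names listed in the patch; the queue name, user names, and numeric values are invented for illustration, and the exact syntax (for example the aclSubmitApps list format) should be checked against the full FairScheduler documentation.

  <?xml version="1.0"?>
  <!-- Illustrative Fair Scheduler allocation file; all values are examples only. -->
  <allocations>
    <queue name="sample_queue">
      <minResources>10000</minResources>        <!-- minimum MB of aggregate memory -->
      <maxResources>90000</maxResources>        <!-- maximum MB of aggregate memory -->
      <maxRunningApps>50</maxRunningApps>       <!-- cap on concurrently running apps -->
      <weight>2.0</weight>                      <!-- about twice the share of a weight-1 queue -->
      <schedulingMode>fair</schedulingMode>     <!-- or "fifo" for submit-time ordering -->
      <aclSubmitApps>alice,bob</aclSubmitApps>  <!-- "*" (the default) lets any user submit -->
    </queue>
  </allocations>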
