YARN-3170. YARN architecture document needs updating. Contributed by Brahma 
Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edcaae44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edcaae44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edcaae44

Branch: refs/heads/YARN-1197
Commit: edcaae44c10b7e88e68fa97afd32e4da4a9d8df7
Parents: cec1d43
Author: Tsuyoshi Ozawa <oz...@apache.org>
Authored: Wed Jul 15 15:42:41 2015 +0900
Committer: Tsuyoshi Ozawa <oz...@apache.org>
Committed: Wed Jul 15 15:42:41 2015 +0900

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                 |  3 +++
 .../hadoop-yarn-site/src/site/markdown/YARN.md  | 22 +++++++-------------
 2 files changed, 10 insertions(+), 15 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcaae44/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 780c667..0a6f871 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -639,6 +639,9 @@ Release 2.7.2 - UNRELEASED
 
   IMPROVEMENTS
 
+    YARN-3170. YARN architecture document needs updating. (Brahma Reddy Battula
+    via ozawa)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcaae44/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
----------------------------------------------------------------------
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
index f79272c..f8e8154 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
@@ -12,14 +12,12 @@
   limitations under the License. See accompanying LICENSE file.
 -->
 
-Apache Hadoop NextGen MapReduce (YARN)
+Apache Hadoop YARN
 ==================
 
-MapReduce has undergone a complete overhaul in hadoop-0.23 and we now have, 
what we call, MapReduce 2.0 (MRv2) or YARN.
+The fundamental idea of YARN is to split up the functionalities of resource 
management and job scheduling/monitoring into separate daemons. The idea is to 
have a global ResourceManager (*RM*) and per-application ApplicationMaster 
(*AM*). An application is either a single job or a DAG of jobs.
 
-The fundamental idea of MRv2 is to split up the two major functionalities of 
the JobTracker, resource management and job scheduling/monitoring, into 
separate daemons. The idea is to have a global ResourceManager (*RM*) and 
per-application ApplicationMaster (*AM*). An application is either a single job 
in the classical sense of Map-Reduce jobs or a DAG of jobs.
-
-The ResourceManager and per-node slave, the NodeManager (*NM*), form the 
data-computation framework. The ResourceManager is the ultimate authority that 
arbitrates resources among all the applications in the system.
+The ResourceManager and the NodeManager form the data-computation framework. 
The ResourceManager is the ultimate authority that arbitrates resources among 
all the applications in the system. The NodeManager is the per-machine 
framework agent that is responsible for containers, monitoring their resource 
usage (cpu, memory, disk, network) and reporting the same to the 
ResourceManager/Scheduler.
 
 The per-application ApplicationMaster is, in effect, a framework specific 
library and is tasked with negotiating resources from the ResourceManager and 
working with the NodeManager(s) to execute and monitor the tasks.
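
The paragraph above describes the ApplicationMaster negotiating resources from the ResourceManager and working with the NodeManager(s) to run tasks. As a rough, non-authoritative sketch of how that surfaces in code (not part of this patch), the snippet below uses the public `AMRMClient` and `NMClient` classes from `org.apache.hadoop.yarn.client.api`; the class name `SketchAppMaster`, the empty host/tracking-URL arguments, and the omitted allocation loop are illustrative assumptions.

```java
// Hedged sketch of an ApplicationMaster's interaction with the RM and the NMs.
// Error handling and the real allocation loop are omitted for brevity.
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SketchAppMaster {
  public static void main(String[] args) throws Exception {
    YarnConfiguration conf = new YarnConfiguration();

    // Negotiate resources from the ResourceManager.
    AMRMClient<AMRMClient.ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(conf);
    rmClient.start();
    // Host, RPC port and tracking URL left empty purely for this sketch.
    rmClient.registerApplicationMaster("", 0, "");

    // Work with NodeManagers to execute and monitor tasks.
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(conf);
    nmClient.start();

    // ... add container requests, poll rmClient.allocate(progress) until the
    // Scheduler grants containers, then launch each granted container with
    // nmClient.startContainer(container, containerLaunchContext) ...

    rmClient.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "done", "");
    nmClient.stop();
    rmClient.stop();
  }
}
```

A production AM drives a real request/allocate/complete loop; the distributed-shell application shipped in the Hadoop source is the usual worked reference for that pattern.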
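The added text above also notes that the NodeManager monitors container resource usage and reports it to the ResourceManager/Scheduler. A small hedged illustration, again not from the patch: those per-node reports are visible to any client through `YarnClient.getNodeReports`; the class name `SketchNodeReports` and the printed fields are illustrative choices.

```java
// Hedged sketch: print each running node's total capability and current usage,
// as reported by the NodeManagers to the ResourceManager.
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SketchNodeReports {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();
    for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
      System.out.println(node.getNodeId()
          + " capability=" + node.getCapability()   // e.g. <memory:8192, vCores:8>
          + " used=" + node.getUsed());
    }
    yarnClient.stop();
  }
}
```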
 
@@ -27,16 +25,10 @@ The per-application ApplicationMaster is, in effect, a 
framework specific librar
 
 The ResourceManager has two main components: Scheduler and ApplicationsManager.
 
-The Scheduler is responsible for allocating resources to the various running 
applications subject to familiar constraints of capacities, queues etc. The 
Scheduler is pure scheduler in the sense that it performs no monitoring or 
tracking of status for the application. Also, it offers no guarantees about 
restarting failed tasks either due to application failure or hardware failures. 
The Scheduler performs its scheduling function based the resource requirements 
of the applications; it does so based on the abstract notion of a resource 
*Container* which incorporates elements such as memory, cpu, disk, network etc. 
In the first version, only `memory` is supported.
-
-The Scheduler has a pluggable policy plug-in, which is responsible for 
partitioning the cluster resources among the various queues, applications etc. 
The current Map-Reduce schedulers such as the CapacityScheduler and the 
FairScheduler would be some examples of the plug-in.
-
-The CapacityScheduler supports `hierarchical queues` to allow for more 
predictable sharing of cluster resources
-
-The ApplicationsManager is responsible for accepting job-submissions, 
negotiating the first container for executing the application specific 
ApplicationMaster and provides the service for restarting the ApplicationMaster 
container on failure.
+The Scheduler is responsible for allocating resources to the various running 
applications subject to familiar constraints of capacities, queues etc. The 
Scheduler is a pure scheduler in the sense that it performs no monitoring or 
tracking of status for the application. Also, it offers no guarantees about 
restarting failed tasks either due to application failure or hardware failures. 
The Scheduler performs its scheduling function based on the resource 
requirements of the applications; it does so based on the abstract notion of a 
resource *Container* which incorporates elements such as memory, cpu, disk, 
network etc.
 
-The NodeManager is the per-machine framework agent who is responsible for 
containers, monitoring their resource usage (cpu, memory, disk, network) and 
reporting the same to the ResourceManager/Scheduler.
+The Scheduler has a pluggable policy which is responsible for partitioning the 
cluster resources among the various queues, applications etc. The current 
schedulers such as the 
[CapacityScheduler](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html)
 and the 
[FairScheduler](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html)
 are some examples of plug-ins.
 
-The per-application ApplicationMaster has the responsibility of negotiating 
appropriate resource containers from the Scheduler, tracking their status and 
monitoring for progress.
+The ApplicationsManager is responsible for accepting job-submissions, 
negotiating the first container for executing the application-specific 
ApplicationMaster, and providing the service for restarting the 
ApplicationMaster container on failure. The per-application ApplicationMaster 
has the responsibility of negotiating appropriate resource containers from the 
Scheduler, tracking their status and monitoring for progress.
 
-MRV2 maintains **API compatibility** with previous stable release 
(hadoop-1.x). This means that all Map-Reduce jobs should still run unchanged on 
top of MRv2 with just a recompile.
+MapReduce in hadoop-2.x maintains **API compatibility** with the previous 
stable release (hadoop-1.x). This means that all MapReduce jobs should still 
run unchanged on top of YARN with just a recompile.
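
To make the *Container* abstraction in the Scheduler paragraph of this hunk concrete, here is a minimal hedged sketch of a resource request built with `Resource.newInstance` and `AMRMClient.ContainerRequest`. The 1024 MB / 2 vCores values, the class name, and the `amrmClient` variable mentioned in the comment are assumptions for illustration, not anything this patch prescribes.

```java
// Hedged sketch of a resource request handed to the Scheduler by an AM.
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class SketchContainerRequest {
  public static void main(String[] args) {
    // A resource "capability": 1024 MB of memory and 2 virtual cores.
    Resource capability = Resource.newInstance(1024, 2);
    Priority priority = Priority.newInstance(0);

    // No node/rack constraints (null), so the Scheduler is free to place it.
    AMRMClient.ContainerRequest request =
        new AMRMClient.ContainerRequest(capability, null, null, priority);

    // A registered AM would then call: amrmClient.addContainerRequest(request);
    System.out.println("Would request: " + request.getCapability());
  }
}
```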
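For the pluggable-policy paragraph that links to the CapacityScheduler and FairScheduler: the plug-in is normally selected through the `yarn.resourcemanager.scheduler.class` property in `yarn-site.xml`. The tiny hedged sketch below only prints what a local configuration resolves that property to; the class name `SketchSchedulerConfig` is illustrative.

```java
// Hedged sketch: show which scheduler plug-in the local configuration selects.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SketchSchedulerConfig {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // Typical values:
    //   org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
    //   org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
    System.out.println("Configured scheduler: "
        + conf.get(YarnConfiguration.RM_SCHEDULER,
                   YarnConfiguration.DEFAULT_RM_SCHEDULER));
  }
}
```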
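Finally, for the ApplicationsManager paragraph: a hedged sketch (not part of the patch) of a job submission through `YarnClient`, where the ResourceManager's ApplicationsManager side accepts the submission and negotiates the first container for the ApplicationMaster. The application name, queue, `/bin/true` command, and resource sizes are placeholders.

```java
// Hedged sketch of submitting an application to the ResourceManager.
// A real submission would also set up local resources, environment and tokens.
import java.util.Collections;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SketchSubmit {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();

    // Ask the ApplicationsManager for a new application id and submission context.
    YarnClientApplication app = yarnClient.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("sketch-app");   // placeholder name
    ctx.setQueue("default");

    // Describe the first container, which will run the ApplicationMaster.
    ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
        null, null, Collections.singletonList("/bin/true"), null, null, null);
    ctx.setAMContainerSpec(amContainer);
    ctx.setResource(Resource.newInstance(512, 1));

    yarnClient.submitApplication(ctx);
    yarnClient.stop();
  }
}
```

After `submitApplication` returns, it is the ResourceManager, not the client, that launches the AM container and restarts it on failure, which is exactly the responsibility the paragraph assigns to the ApplicationsManager.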
