[5/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
deleted file mode 100644
index 57a47fd..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
+++ /dev/null
@@ -1,757 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop MapReduce Next Generation-${project.version} - Writing YARN
-  Applications
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop MapReduce Next Generation - Writing YARN Applications
-
-%{toc|section=1|fromDepth=0}
-
-* Purpose
-
-  This document describes, at a high-level, the way to implement new
-  Applications for YARN.
-
-* Concepts and Flow
-
-  The general concept is that an application submission client submits an
-  application to the YARN ResourceManager (RM). This can be done through
-  setting up a YarnClient object. After YarnClient is started, the client
-  can then set up application context, prepare the very first container of
-  the application that contains the ApplicationMaster (AM), and then submit
-  the application. You need to provide information such as the details about
-  the local files/jars that need to be available for your application to
-  run, the actual command that needs to be executed (with the necessary
-  command line arguments), any OS environment settings (optional), etc.
-  Effectively, you need to describe the Unix process(es) that needs to be
-  launched for your ApplicationMaster.
-
-  The YARN ResourceManager will then launch the ApplicationMaster (as
-  specified) on an allocated container. The ApplicationMaster communicates
-  with the YARN cluster, and handles application execution. It performs
-  operations in an asynchronous fashion. During application launch time, the
-  main tasks of the ApplicationMaster are: a) communicating with the
-  ResourceManager to negotiate and allocate resources for future containers,
-  and b) after container allocation, communicating with YARN NodeManagers
-  (NMs) to launch application containers on them. Task a) can be performed
-  asynchronously through an AMRMClientAsync object, with event handling
-  methods specified in an AMRMClientAsync.CallbackHandler type of event
-  handler. The event handler needs to be set on the client explicitly.
-  Task b) can be performed by launching a runnable object that then launches
-  containers when there are containers allocated. As part of launching a
-  container, the AM has to specify the ContainerLaunchContext that has the
-  launch information such as command line specification, environment, etc.
-
-  During the execution of an application, the ApplicationMaster communicates
-  with NodeManagers through an NMClientAsync object. All container events
-  are handled by NMClientAsync.CallbackHandler, associated with
-  NMClientAsync. A typical callback handler handles container start, stop,
-  status update and error. The ApplicationMaster also reports execution
-  progress to the ResourceManager by implementing the getProgress() method
-  of AMRMClientAsync.CallbackHandler.
-  
-  Other than the asynchronous clients, there are synchronous versions for
-  certain workflows (AMRMClient and NMClient). The asynchronous clients are
-  recommended because of their (subjectively) simpler usage, and this
-  article will mainly cover the asynchronous clients. Please refer to
-  AMRMClient and NMClient for more information on synchronous clients.
-
-* Interfaces
-
-  The interfaces you'd most likely be concerned with are:
-
-  * Client<-->ResourceManager\
-    By using YarnClient objects.
-
-  * ApplicationMaster<-->ResourceManager\
-    By using AMRMClientAsync objects, handling events asynchronously by
-    AMRMClientAsync.CallbackHandler
-
-  * ApplicationMaster<-->NodeManager\
-    Launch containers. Communicate with NodeManagers
-    by using NMClientAsync objects, handling container events by
-    NMClientAsync.CallbackHandler
-
-  []
-
-  Note
-  
-* The three main protocols for YARN applications (ApplicationClientProtocol,
-  ApplicationMasterProtocol and ContainerManagementProtocol) are still
-  preserved. The 3 

[2/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
new file mode 100644
index 000..e516afb
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
@@ -0,0 +1,181 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+ResourceManager Restart
+==
+
+* [Overview](#Overview)
+* [Feature](#Feature)
+* [Configurations](#Configurations)
+* [Enable RM Restart](#Enable_RM_Restart)
+* [Configure the state-store for persisting the RM 
state](#Configure_the_state-store_for_persisting_the_RM_state)
+* [How to choose the state-store 
implementation](#How_to_choose_the_state-store_implementation)
+* [Configurations for Hadoop FileSystem based state-store 
implementation](#Configurations_for_Hadoop_FileSystem_based_state-store_implementation)
+* [Configurations for ZooKeeper based state-store 
implementation](#Configurations_for_ZooKeeper_based_state-store_implementation)
+* [Configurations for LevelDB based state-store 
implementation](#Configurations_for_LevelDB_based_state-store_implementation)
+* [Configurations for work-preserving RM 
recovery](#Configurations_for_work-preserving_RM_recovery)
+* [Notes](#Notes)
+* [Sample Configurations](#Sample_Configurations)
+
+Overview
+--------
+
+ResourceManager is the central authority that manages resources and schedules applications running on top of YARN. Hence, it is potentially a single point of failure in an Apache YARN cluster.
+
+This document gives an overview of ResourceManager Restart, a feature that 
enhances ResourceManager to keep functioning across restarts and also makes 
ResourceManager down-time invisible to end-users.
+
+ResourceManager Restart feature is divided into two phases: 
+
+* **ResourceManager Restart Phase 1 (Non-work-preserving RM restart)**: 
Enhance RM to persist application/attempt state and other credentials 
information in a pluggable state-store. RM will reload this information from 
state-store upon restart and re-kick the previously running applications. Users 
are not required to re-submit the applications.
+
+* **ResourceManager Restart Phase 2 (Work-preserving RM restart)**: Focus on re-constructing the running state of the ResourceManager by combining the container statuses from NodeManagers and container requests from ApplicationMasters upon restart. The key difference from phase 1 is that previously running applications will not be killed after the RM restarts, so applications won't lose their work because of an RM outage.
+
+Feature
+---
+
+* **Phase 1: Non-work-preserving RM restart** 
+
+ As of the Hadoop 2.4.0 release, only ResourceManager Restart Phase 1 is implemented, which is described below.
+
+ The overall concept is that the RM persists the application metadata (i.e. ApplicationSubmissionContext) in a pluggable state-store when a client submits an application, and also saves the final status of the application, such as the completion state (failed, killed, finished) and diagnostics, when the application completes. Besides, the RM also saves credentials such as security keys and tokens in order to work in a secure environment. Any time the RM shuts down, as long as the required information (i.e. application metadata and the alongside credentials if running in a secure environment) is available in the state-store, then when the RM restarts it can pick up the application metadata from the state-store and re-submit the application. The RM won't re-submit applications that had already completed (i.e. failed, killed, finished) before it went down.
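The persist-and-recover contract described above can be sketched as a toy model (plain Python with invented names, not Hadoop's actual state-store API): metadata is saved at submission time, the final status at completion, and only applications that had not completed are re-submitted after a restart.

```python
# Toy model (not Hadoop code) of phase-1 RM restart: metadata persisted at
# submission survives a restart; only non-completed apps are re-submitted.
COMPLETED = {"FAILED", "KILLED", "FINISHED"}

class StateStore:                      # stands in for the pluggable state-store
    def __init__(self):
        self.apps = {}                 # app_id -> {"context": ..., "state": ...}
    def save_app(self, app_id, context):
        self.apps[app_id] = {"context": context, "state": "RUNNING"}
    def save_final_status(self, app_id, state):
        self.apps[app_id]["state"] = state

def recover(store):
    """On RM restart, re-submit only the apps that had not completed."""
    return [a for a, rec in store.apps.items() if rec["state"] not in COMPLETED]

store = StateStore()
store.save_app("app_1", {"name": "pi"})
store.save_app("app_2", {"name": "sort"})
store.save_final_status("app_2", "FINISHED")   # completed before RM went down
print(recover(store))                          # ['app_1']
```

The real state-store additionally persists credentials and application-attempt records; this sketch only captures the re-submission rule.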
+
+ NodeManagers and clients during the down-time of RM will keep polling RM 
until RM comes up. When RM becomes alive, it will send a re-sync command to all 
the NodeManagers and ApplicationMasters it was talking to via heartbeats. As of 
Hadoop 2.4.0 release, the behaviors for NodeManagers and ApplicationMasters to 
handle this command are: NMs will kill all their managed containers and 
re-register with RM. From the RM's perspective, these re-registered 
NodeManagers are similar to the 

[4/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
new file mode 100644
index 000..1812a44
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
@@ -0,0 +1,233 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop: Fair Scheduler
+==
+
+* [Purpose](#Purpose)
+* [Introduction](#Introduction)
+* [Hierarchical queues with pluggable 
policies](#Hierarchical_queues_with_pluggable_policies)
+* [Automatically placing applications in 
queues](#Automatically_placing_applications_in_queues)
+* [Installation](#Installation)
+* [Configuration](#Configuration)
+* [Properties that can be placed in 
yarn-site.xml](#Properties_that_can_be_placed_in_yarn-site.xml)
+* [Allocation file format](#Allocation_file_format)
+* [Queue Access Control Lists](#Queue_Access_Control_Lists)
+* [Administration](#Administration)
+* [Modifying configuration at runtime](#Modifying_configuration_at_runtime)
+* [Monitoring through web UI](#Monitoring_through_web_UI)
+* [Moving applications between queues](#Moving_applications_between_queues)
+
+## Purpose
+
+This document describes the `FairScheduler`, a pluggable scheduler for Hadoop 
that allows YARN applications to share resources in large clusters fairly.
+
+## Introduction
+
+Fair scheduling is a method of assigning resources to applications such that 
all apps get, on average, an equal share of resources over time. Hadoop NextGen 
is capable of scheduling multiple resource types. By default, the Fair 
Scheduler bases scheduling fairness decisions only on memory. It can be 
configured to schedule with both memory and CPU, using the notion of Dominant 
Resource Fairness developed by Ghodsi et al. When there is a single app 
running, that app uses the entire cluster. When other apps are submitted, 
resources that free up are assigned to the new apps, so that each app 
eventually gets roughly the same amount of resources. Unlike the default 
Hadoop scheduler, which forms a queue of apps, this lets short apps finish in 
reasonable time while not starving long-lived apps. It is also a reasonable way 
to share a cluster between a number of users. Finally, fair sharing can also 
work with app priorities - the priorities are used as weights to determine the 
fraction of total resources that each app should get.
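The Dominant Resource Fairness idea mentioned above can be illustrated with a small sketch (Python with invented capacities and names; the real scheduler is Java inside the ResourceManager): an app's dominant share is its largest per-resource fraction of the cluster, and the scheduler offers the next container to the app with the smallest dominant share.

```python
# Illustrative sketch (not Hadoop code) of Dominant Resource Fairness
# (Ghodsi et al.): serve the app whose dominant share is smallest.
CLUSTER = {"memory_mb": 10240, "vcores": 8}   # hypothetical cluster capacity

def dominant_share(usage):
    """An app's dominant share is its largest per-resource fraction."""
    return max(usage[r] / CLUSTER[r] for r in CLUSTER)

def next_app_to_serve(apps):
    """Pick the app that is furthest below its fair share."""
    return min(apps, key=lambda name: dominant_share(apps[name]))

apps = {
    "appA": {"memory_mb": 4096, "vcores": 1},  # dominant resource: memory (0.4)
    "appB": {"memory_mb": 1024, "vcores": 4},  # dominant resource: vcores (0.5)
}
print(next_app_to_serve(apps))                 # appA
```

Note that appA uses more memory in absolute terms, yet is served next because its dominant share (0.4) is below appB's (0.5); this is exactly the behavior memory-only fairness would miss.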
+
+The scheduler organizes apps further into queues, and shares resources 
fairly between these queues. By default, all users share a single queue, named 
"default". If an app specifically lists a queue in a container resource 
request, the request is submitted to that queue. It is also possible to assign 
queues based on the user name included with the request through configuration. 
Within each queue, a scheduling policy is used to share resources between the 
running apps. The default is memory-based fair sharing, but FIFO and 
multi-resource with Dominant Resource Fairness can also be configured. Queues 
can be arranged in a hierarchy to divide resources and configured with weights 
to share the cluster in specific proportions.
+
+In addition to providing fair sharing, the Fair Scheduler allows assigning 
guaranteed minimum shares to queues, which is useful for ensuring that certain 
users, groups or production applications always get sufficient resources. When 
a queue contains apps, it gets at least its minimum share, but when the queue 
does not need its full guaranteed share, the excess is split between other 
running apps. This lets the scheduler guarantee capacity for queues while 
utilizing resources efficiently when these queues don't contain applications.
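The minimum-share behavior described above can be sketched as follows (illustrative Python, not FairScheduler code; weights and queue hierarchies are omitted): each queue receives at most its demand, at least its configured minimum share when demand allows, and capacity a queue does not need flows back to the queues that still want more.

```python
# Illustrative sketch (not Hadoop code) of minimum-share redistribution:
# unused guaranteed capacity is split among queues with unmet demand.
def allocate(capacity, queues):
    """queues: name -> {'min': guaranteed share, 'demand': current need}."""
    alloc = {q: min(cfg["min"], cfg["demand"]) for q, cfg in queues.items()}
    spare = capacity - sum(alloc.values())
    needy = [q for q in queues if queues[q]["demand"] > alloc[q]]
    while spare > 1e-9 and needy:
        share = spare / len(needy)            # even split of the excess
        for q in list(needy):
            extra = min(share, queues[q]["demand"] - alloc[q])
            alloc[q] += extra
            spare -= extra
        needy = [q for q in needy if queues[q]["demand"] > alloc[q] + 1e-9]
    return alloc

alloc = allocate(100, {
    "A": {"min": 40, "demand": 80},   # needs more than its guarantee
    "B": {"min": 20, "demand": 10},   # under-demands: its excess flows away
    "C": {"min": 0,  "demand": 30},
})
print(alloc)                          # {'A': 65.0, 'B': 10, 'C': 25.0}
```

Queue B only gets the 10 units it needs despite a 20-unit guarantee; the freed capacity is split between A and C, which is the "excess is split between other running apps" behavior in the text.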
+
+The Fair Scheduler lets all apps run by default, but it is also possible to 
limit the number of running apps per user and per queue through the config 
file. This can be useful when a user must submit hundreds of apps at once, or 
in general to improve performance if running too many apps at once would cause 
too much intermediate data to be created or too much context-switching. 
Limiting the apps does not cause any subsequently submitted apps 

[3/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
new file mode 100644
index 000..b1591bb
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
@@ -0,0 +1,2640 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+ResourceManager REST APIs
+===
+
+* [Overview](#Overview)
+* [Cluster Information API](#Cluster_Information_API)
+* [Cluster Metrics API](#Cluster_Metrics_API)
+* [Cluster Scheduler API](#Cluster_Scheduler_API)
+* [Cluster Applications API](#Cluster_Applications_API)
+* [Cluster Application Statistics API](#Cluster_Application_Statistics_API)
+* [Cluster Application API](#Cluster_Application_API)
+* [Cluster Application Attempts API](#Cluster_Application_Attempts_API)
+* [Cluster Nodes API](#Cluster_Nodes_API)
+* [Cluster Node API](#Cluster_Node_API)
+* [Cluster Writeable APIs](#Cluster_Writeable_APIs)
+* [Cluster New Application API](#Cluster_New_Application_API)
+* [Cluster Applications API(Submit 
Application)](#Cluster_Applications_APISubmit_Application)
+* [Cluster Application State API](#Cluster_Application_State_API)
+* [Cluster Application Queue API](#Cluster_Application_Queue_API)
+* [Cluster Delegation Tokens API](#Cluster_Delegation_Tokens_API)
+
+Overview
+--------
+
+The ResourceManager REST APIs allow the user to get information about the cluster: status of the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.
+
+Cluster Information API
+---
+
+The cluster information resource provides overall information about the 
cluster.
+
+### URI
+
+Both of the following URIs give you the cluster information.
+
+  * http://<rm http address:port>/ws/v1/cluster
+  * http://<rm http address:port>/ws/v1/cluster/info
+
+### HTTP Operations Supported
+
+  * GET
+
+### Query Parameters Supported
+
+  None
+
+### Elements of the *clusterInfo* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| id | long | The cluster id |
+| startedOn | long | The time the cluster started (in ms since epoch) |
+| state | string | The ResourceManager state - valid values are: NOTINITED, 
INITED, STARTED, STOPPED |
+| haState | string | The ResourceManager HA state - valid values are: 
INITIALIZING, ACTIVE, STANDBY, STOPPED |
+| resourceManagerVersion | string | Version of the ResourceManager |
+| resourceManagerBuildVersion | string | ResourceManager build string with 
build version, user, and checksum |
+| resourceManagerVersionBuiltOn | string | Timestamp when ResourceManager was 
built (in ms since epoch) |
+| hadoopVersion | string | Version of hadoop common |
+| hadoopBuildVersion | string | Hadoop common build string with build version, 
user, and checksum |
+| hadoopVersionBuiltOn | string | Timestamp when hadoop common was built (in ms since epoch) |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+  GET http://<rm http address:port>/ws/v1/cluster/info
+
+Response Header:
+
+  HTTP/1.1 200 OK
+  Content-Type: application/json
+  Transfer-Encoding: chunked
+  Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "clusterInfo":
+  {
+    "id": 1324053971963,
+    "startedOn": 1324053971963,
+    "state": "STARTED",
+    "resourceManagerVersion": "0.23.1-SNAPSHOT",
+    "resourceManagerBuildVersion": "0.23.1-SNAPSHOT from 1214049 by user1 source checksum 050cd664439d931c8743a6428fd6a693",
+    "resourceManagerVersionBuiltOn": "Tue Dec 13 22:12:48 CST 2011",
+    "hadoopVersion": "0.23.1-SNAPSHOT",
+    "hadoopBuildVersion": "0.23.1-SNAPSHOT from 1214049 by user1 source checksum 11458df3bb77342dca5f917198fad328",
+    "hadoopVersionBuiltOn": "Tue Dec 13 22:12:26 CST 2011"
+  }
+}
+```
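A quick, illustrative sketch of consuming this response from a client (Python; the sample payload is inlined here instead of issuing a live request, and a real client would fetch `/ws/v1/cluster/info` over HTTP first): all fields sit inside the wrapping `clusterInfo` object, and the millisecond timestamps convert with a divide-by-1000.

```python
# Sketch: read fields out of the clusterInfo wrapper and convert the
# ms-since-epoch timestamp. Payload abbreviated from the sample above.
import json
from datetime import datetime, timezone

sample = '''{"clusterInfo": {
    "id": 1324053971963,
    "startedOn": 1324053971963,
    "state": "STARTED",
    "resourceManagerVersion": "0.23.1-SNAPSHOT",
    "hadoopVersion": "0.23.1-SNAPSHOT"}}'''

info = json.loads(sample)["clusterInfo"]           # unwrap the outer object
started = datetime.fromtimestamp(info["startedOn"] / 1000, tz=timezone.utc)
print(info["state"], started.year)                 # STARTED 2011
```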
+
+**XML response**
+
+HTTP Request:
+
+  Accept: application/xml
+  GET http://<rm http address:port>/ws/v1/cluster/info
+
+Response Header:
+
+  HTTP/1.1 200 OK
+  Content-Type: application/xml
+  Content-Length: 712
+  Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml 

[1/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/trunk edcecedc1 -> 2e44b75f7


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
new file mode 100644
index 000..5e4df9f
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
@@ -0,0 +1,591 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop: Writing YARN Applications
+=
+
+* [Purpose](#Purpose)
+* [Concepts and Flow](#Concepts_and_Flow)
+* [Interfaces](#Interfaces)
+* [Writing a Simple Yarn Application](#Writing_a_Simple_Yarn_Application)
+* [Writing a simple Client](#Writing_a_simple_Client)
+* [Writing an ApplicationMaster (AM)](#Writing_an_ApplicationMaster_AM)
+* [FAQ](#FAQ)
+* [How can I distribute my application's jars to all of the nodes in the 
YARN cluster that need 
it?](#How_can_I_distribute_my_applications_jars_to_all_of_the_nodes_in_the_YARN_cluster_that_need_it)
+* [How do I get the ApplicationMaster's 
ApplicationAttemptId?](#How_do_I_get_the_ApplicationMasters_ApplicationAttemptId)
+* [Why is my container killed by the NodeManager?](#Why_my_container_is_killed_by_the_NodeManager)
+* [How do I include native libraries?](#How_do_I_include_native_libraries)
+* [Useful Links](#Useful_Links)
+* [Sample Code](#Sample_Code)
+
+Purpose
+---
+
+This document describes, at a high-level, the way to implement new 
Applications for YARN.
+
+Concepts and Flow
+-
+
+The general concept is that an *application submission client* submits an *application* to the YARN *ResourceManager* (RM). This can be done by setting up a `YarnClient` object. After `YarnClient` is started, the client can then set up the application context, prepare the very first container of the application that contains the *ApplicationMaster* (AM), and then submit the application. You need to provide information such as the details about the local files/jars that need to be available for your application to run, the actual command that needs to be executed (with the necessary command line arguments), any OS environment settings (optional), etc. Effectively, you need to describe the Unix process(es) that needs to be launched for your ApplicationMaster.
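The client-side steps above can be condensed into a pseudocode-style Python sketch with a mock client and hypothetical names; the real flow uses the Java `YarnClient` API, and the application id below is invented.

```python
# Pseudocode-style sketch (hypothetical names, not the real Java YarnClient
# API) of the client-side submission flow described above.
class MockYarnClient:
    """Stands in for YARN; records nothing, just completes the handshake."""
    def start(self):
        self.started = True
    def submit(self, ctx):
        return "application_1425087584000_0001"   # hypothetical id from the RM

def submit_application(client, jar_path, am_command):
    client.start()                                # 1. start the YarnClient
    ctx = {                                       # 2. set up application context:
        "local_resources": {"AppMaster.jar": jar_path},  # files/jars the AM needs
        "environment": {"CLASSPATH": "./*"},             # optional OS env settings
        "commands": [am_command],                        # the AM's launch command
    }
    return client.submit(ctx)                     # 3. submit; RM returns an app id

app_id = submit_application(MockYarnClient(), "/tmp/AppMaster.jar",
                            "java -Xmx256m com.example.MyAppMaster")
print(app_id)
```

The three fields in `ctx` correspond directly to the local files/jars, OS environment settings, and launch command the paragraph asks you to describe.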
+
+The YARN ResourceManager will then launch the ApplicationMaster (as specified) on an allocated container. The ApplicationMaster communicates with the YARN cluster, and handles application execution. It performs operations in an asynchronous fashion. During application launch time, the main tasks of the ApplicationMaster are: a) communicating with the ResourceManager to negotiate and allocate resources for future containers, and b) after container allocation, communicating with YARN *NodeManager*s (NMs) to launch application containers on them. Task a) can be performed asynchronously through an `AMRMClientAsync` object, with event handling methods specified in an `AMRMClientAsync.CallbackHandler` type of event handler. The event handler needs to be set on the client explicitly. Task b) can be performed by launching a runnable object that then launches containers when there are containers allocated. As part of launching a container, the AM has to specify the `ContainerLaunchContext` that has the launch information such as command line specification, environment, etc.
+
+During the execution of an application, the ApplicationMaster communicates with NodeManagers through an `NMClientAsync` object. All container events are handled by an `NMClientAsync.CallbackHandler`, associated with the `NMClientAsync`. A typical callback handler handles container start, stop, status update and error. The ApplicationMaster also reports execution progress to the ResourceManager by implementing the `getProgress()` method of `AMRMClientAsync.CallbackHandler`.
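The callback pattern above can be sketched in a language-neutral way (Python here with invented method names; the real handlers are Java classes with methods such as `onContainersAllocated`): the AM registers a handler object, the async client invokes it once per event, and a progress accessor is polled for heartbeats.

```python
# Sketch (not Hadoop code) of the AM callback-handler pattern: events are
# delivered to a registered handler; progress is polled for RM heartbeats.
class CallbackHandler:
    def __init__(self):
        self.events, self.done = [], 0
    def on_containers_allocated(self, containers):
        self.events.append(("allocated", len(containers)))
    def on_containers_completed(self, statuses):
        self.done += len(statuses)
        self.events.append(("completed", len(statuses)))
    def on_error(self, exc):
        self.events.append(("error", str(exc)))
    def get_progress(self, total):
        return self.done / total        # reported to the RM on each heartbeat

handler = CallbackHandler()             # the handler must be set explicitly
handler.on_containers_allocated(["c1", "c2"])
handler.on_containers_completed(["c1"])
print(handler.get_progress(total=2))    # 0.5
```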
+
+Other than the asynchronous clients, there are synchronous versions for certain workflows (`AMRMClient` and `NMClient`). The asynchronous clients are recommended because of their (subjectively) simpler usage, and this article will mainly cover the asynchronous clients. Please refer to `AMRMClient` and `NMClient` for more information on synchronous clients.

[7/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
deleted file mode 100644
index 69728fb..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
+++ /dev/null
@@ -1,3104 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  ResourceManager REST APIs
-  ---
-  ---
-  ${maven.build.timestamp}
-
-ResourceManager REST APIs
-
-%{toc|section=1|fromDepth=0|toDepth=2}
-
-* Overview
-
-  The ResourceManager REST APIs allow the user to get information about the cluster: status of the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.
-  
-* Cluster Information API
-
-  The cluster information resource provides overall information about the 
cluster. 
-
-** URI
-
-  Both of the following URIs give you the cluster information.
-
---
-  * http://<rm http address:port>/ws/v1/cluster
-  * http://<rm http address:port>/ws/v1/cluster/info
---
-
-** HTTP Operations Supported
-
---
-  * GET
---
-
-** Query Parameters Supported
-
---
-  None
---
-
-** Elements of the clusterInfo object
-
-*---+--+---+
-|| Item || Data Type   || Description   |
-*---+--+---+
-| id| long | The cluster id |
-*---+--+---+
-| startedOn | long | The time the cluster started (in ms since 
epoch)|
-*---+--+---+
-| state | string | The ResourceManager state - valid values are: 
NOTINITED, INITED, STARTED, STOPPED|
-*---+--+---+
-| haState   | string | The ResourceManager HA state - valid values are: 
INITIALIZING, ACTIVE, STANDBY, STOPPED|
-*---+--+---+
-| resourceManagerVersion | string  | Version of the ResourceManager |
-*---+--+---+
-| resourceManagerBuildVersion | string  | ResourceManager build string with 
build version, user, and checksum |
-*---+--+---+
-| resourceManagerVersionBuiltOn | string  | Timestamp when ResourceManager was 
built (in ms since epoch)|
-*---+--+---+
-| hadoopVersion | string  | Version of hadoop common |
-*---+--+---+
-| hadoopBuildVersion | string  | Hadoop common build string with build 
version, user, and checksum |
-*---+--+---+
-| hadoopVersionBuiltOn | string  | Timestamp when hadoop common was built(in 
ms since epoch)|
-*---+--+---+
-
-** Response Examples
-
-  JSON response
-
-  HTTP Request:
-
---
-  GET http://<rm http address:port>/ws/v1/cluster/info
---
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "clusterInfo":
-  {
-    "id": 1324053971963,
-    "startedOn": 1324053971963,
-    "state": "STARTED",
-    "resourceManagerVersion": "0.23.1-SNAPSHOT",
-    "resourceManagerBuildVersion": "0.23.1-SNAPSHOT from 1214049 by user1 source checksum 050cd664439d931c8743a6428fd6a693",
-    "resourceManagerVersionBuiltOn": "Tue Dec 13 22:12:48 CST 2011",
-    "hadoopVersion": "0.23.1-SNAPSHOT",
-    "hadoopBuildVersion": "0.23.1-SNAPSHOT from 1214049 by user1 source checksum 11458df3bb77342dca5f917198fad328",
-    "hadoopVersionBuiltOn": "Tue Dec 13 22:12:26 CST 2011"
-  }
-}
-+---+
-
-  XML response
-
-  HTTP Request:
-
--
-  Accept: application/xml
-  GET http://<rm http address:port>/ws/v1/cluster/info
--
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 712
-  Server: Jetty(6.1.26)
-+---+
-
-  Response 

[8/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
deleted file mode 100644
index 36b8621..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
+++ /dev/null
@@ -1,645 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  NodeManager REST APIs
-  ---
-  ---
-  ${maven.build.timestamp}
-
-NodeManager REST APIs
-
-%{toc|section=1|fromDepth=0|toDepth=2}
-
-* Overview
-
-  The NodeManager REST APIs allow the user to get status on the node and information about applications and containers running on that node.
-  
-* NodeManager Information API
-
-  The node information resource provides overall information about that 
particular node.
-
-** URI
-
-  Both of the following URIs give you the node information.
-
---
-  * http://<nm http address:port>/ws/v1/node
-  * http://<nm http address:port>/ws/v1/node/info
---
-
-** HTTP Operations Supported
-
---
-  * GET
---
-
-** Query Parameters Supported
-
---
-  None
---
-
-** Elements of the nodeInfo object
-
-*---+--+---+
-|| Item || Data Type   || Description   |
-*---+--+---+
-| id| long | The NodeManager id |
-*---+--+---+
-| nodeHostName | string  | The host name of the NodeManager |
-*---+--+---+
-| totalPmemAllocatedContainersMB | long | The amount of physical 
memory allocated for use by containers in MB |
-*---+--+---+
-| totalVmemAllocatedContainersMB | long | The amount of virtual memory 
allocated for use by containers in MB |
-*---+--+---+
-| totalVCoresAllocatedContainers | long | The number of virtual cores 
allocated for use by containers |
-*---+--+---+
-| lastNodeUpdateTime | long | The last timestamp at which the health 
report was received (in ms since epoch)|
-*---+--+---+
-| healthReport | string  | The diagnostic health report of the node |
-*---+--+---+
-| nodeHealthy | boolean | true/false indicator of whether the node is healthy|
-*---+--+---+
-| nodeManagerVersion | string  | Version of the NodeManager |
-*---+--+---+
-| nodeManagerBuildVersion | string  | NodeManager build string with build 
version, user, and checksum |
-*---+--+---+
-| nodeManagerVersionBuiltOn | string  | Timestamp when NodeManager was 
built(in ms since epoch) |
-*---+--+---+
-| hadoopVersion | string  | Version of hadoop common |
-*---+--+---+
-| hadoopBuildVersion | string  | Hadoop common build string with build 
version, user, and checksum |
-*---+--+---+
-| hadoopVersionBuiltOn | string  | Timestamp when hadoop common was built(in 
ms since epoch) |
-*---+--+---+
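For readers consuming this endpoint, a minimal client-side sketch of reading the nodeInfo object. The helper name is hypothetical and only a subset of the fields above is extracted; in practice the body would come from an HTTP GET against the URI above (the sample values below are taken from the response example in this document).

```python
import json

# Hypothetical helper: pull a few health-related fields out of a
# /ws/v1/node/info response body. Field names follow the table above.
def summarize_node_info(body):
    info = json.loads(body)["nodeInfo"]
    return {
        "healthy": info["nodeHealthy"],
        "vmem_mb": info["totalVmemAllocatedContainersMB"],
        "vcores": info["totalVCoresAllocatedContainers"],
    }

# Sample payload shaped like the documented response.
sample = json.dumps({"nodeInfo": {
    "nodeHealthy": True,
    "totalVmemAllocatedContainersMB": 17203,
    "totalVCoresAllocatedContainers": 8,
}})

print(summarize_node_info(sample))
```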
-
-** Response Examples
-
-  JSON response
-
-  HTTP Request:
-
---
-  GET http://<nm http address:port>/ws/v1/node/info
---
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "nodeInfo" : {
-      "hadoopVersionBuiltOn" : "Mon Jan  9 14:58:42 UTC 2012",
-      "nodeManagerBuildVersion" : "0.23.1-SNAPSHOT from 1228355 by user1 source checksum 20647f76c36430e888cc7204826a445c",
-      "lastNodeUpdateTime" : 132666126,
-      "totalVmemAllocatedContainersMB" : 17203,
-      "totalVCoresAllocatedContainers" : 8,
-      "nodeHealthy" 

[9/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via 
aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e44b75f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e44b75f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e44b75f

Branch: refs/heads/trunk
Commit: 2e44b75f729009d33e309d1366bf86746443db81
Parents: edceced
Author: Allen Wittenauer a...@apache.org
Authored: Fri Feb 27 20:39:44 2015 -0800
Committer: Allen Wittenauer a...@apache.org
Committed: Fri Feb 27 20:39:44 2015 -0800

--
 hadoop-yarn-project/CHANGES.txt |3 +
 .../src/site/apt/CapacityScheduler.apt.vm   |  368 ---
 .../src/site/apt/DockerContainerExecutor.apt.vm |  204 --
 .../src/site/apt/FairScheduler.apt.vm   |  483 ---
 .../src/site/apt/NodeManager.apt.vm |   64 -
 .../src/site/apt/NodeManagerCgroups.apt.vm  |   77 -
 .../src/site/apt/NodeManagerRest.apt.vm |  645 
 .../src/site/apt/NodeManagerRestart.apt.vm  |   86 -
 .../src/site/apt/ResourceManagerHA.apt.vm   |  233 --
 .../src/site/apt/ResourceManagerRest.apt.vm | 3104 --
 .../src/site/apt/ResourceManagerRestart.apt.vm  |  298 --
 .../src/site/apt/SecureContainer.apt.vm |  176 -
 .../src/site/apt/TimelineServer.apt.vm  |  260 --
 .../src/site/apt/WebApplicationProxy.apt.vm |   49 -
 .../src/site/apt/WebServicesIntro.apt.vm|  593 
 .../src/site/apt/WritingYarnApplications.apt.vm |  757 -
 .../hadoop-yarn-site/src/site/apt/YARN.apt.vm   |   77 -
 .../src/site/apt/YarnCommands.apt.vm|  369 ---
 .../hadoop-yarn-site/src/site/apt/index.apt.vm  |   82 -
 .../src/site/markdown/CapacityScheduler.md  |  186 ++
 .../site/markdown/DockerContainerExecutor.md.vm |  154 +
 .../src/site/markdown/FairScheduler.md  |  233 ++
 .../src/site/markdown/NodeManager.md|   57 +
 .../src/site/markdown/NodeManagerCgroups.md |   57 +
 .../src/site/markdown/NodeManagerRest.md|  543 +++
 .../src/site/markdown/NodeManagerRestart.md |   53 +
 .../src/site/markdown/ResourceManagerHA.md  |  140 +
 .../src/site/markdown/ResourceManagerRest.md| 2640 +++
 .../src/site/markdown/ResourceManagerRestart.md |  181 +
 .../src/site/markdown/SecureContainer.md|  135 +
 .../src/site/markdown/TimelineServer.md |  231 ++
 .../src/site/markdown/WebApplicationProxy.md|   24 +
 .../src/site/markdown/WebServicesIntro.md   |  569 
 .../site/markdown/WritingYarnApplications.md|  591 
 .../hadoop-yarn-site/src/site/markdown/YARN.md  |   42 +
 .../src/site/markdown/YarnCommands.md   |  272 ++
 .../hadoop-yarn-site/src/site/markdown/index.md |   75 +
 37 files changed, 6186 insertions(+), 7925 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e7af84b..02b1831 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -20,6 +20,9 @@ Trunk - Unreleased
 YARN-2980. Move health check script related functionality to hadoop-common
 (Varun Saxena via aw)
 
+YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty
+via aw)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
deleted file mode 100644
index 8528c1a..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
+++ /dev/null
@@ -1,368 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Map Reduce Next Generation-${project.version} - Capacity Scheduler
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop MapReduce Next Generation - Capacity 

[6/9] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

2015-02-27 Thread aw
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
deleted file mode 100644
index a08c19d..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
+++ /dev/null
@@ -1,298 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  ResourceManager Restart
-  ---
-  ---
-  ${maven.build.timestamp}
-
-ResourceManager Restart
-
-%{toc|section=1|fromDepth=0}
-
-* {Overview}
-
-  ResourceManager is the central authority that manages resources and schedules
-  applications running atop YARN. Hence, it is potentially a single point of
-  failure in an Apache YARN cluster.
-
-  This document gives an overview of ResourceManager Restart, a feature that
-  enhances ResourceManager to keep functioning across restarts and also makes
-  ResourceManager down-time invisible to end-users.
-
-  ResourceManager Restart feature is divided into two phases:
-
-  ResourceManager Restart Phase 1 (Non-work-preserving RM restart):
-  Enhance RM to persist application/attempt state
-  and other credentials information in a pluggable state-store. RM will reload
-  this information from the state-store upon restart and re-kick the previously
-  running applications. Users are not required to re-submit the applications.
-
-  ResourceManager Restart Phase 2 (Work-preserving RM restart):
-  Focus on re-constructing the running state of ResourceManager by combining
-  the container statuses from NodeManagers and container requests from
-  ApplicationMasters upon restart. The key difference from phase 1 is that
-  previously running applications will not be killed after RM restarts, and so
-  applications won't lose their work because of an RM outage.
-
-* {Feature}
-
-** Phase 1: Non-work-preserving RM restart
-
-  As of the Hadoop 2.4.0 release, only ResourceManager Restart Phase 1 is
-  implemented, which is described below.
-
-  The overall concept is that RM will persist the application metadata
-  (i.e. ApplicationSubmissionContext) in a pluggable state-store when the
-  client submits an application, and also saves the final status of the
-  application such as the completion state (failed, killed, finished) and
-  diagnostics when the application completes. Besides, RM also saves
-  credentials such as security keys and tokens to work in a secure
-  environment. Any time RM shuts down, as long as the required information
-  (i.e. application metadata and the accompanying credentials if running in a
-  secure environment) is available in the state-store, then when RM restarts
-  it can pick up the application metadata from the state-store and re-submit
-  the application. RM won't re-submit the applications if they were already
-  completed (i.e. failed, killed, finished) before RM went down.
-
-  During the down-time of RM, NodeManagers and clients will keep polling RM
-  until RM comes up. When RM becomes alive, it will send a re-sync command to
-  all the NodeManagers and ApplicationMasters it was talking to via heartbeats.
-  As of the Hadoop 2.4.0 release, the behaviors for NodeManagers and
-  ApplicationMasters to handle this command are: NMs will kill all their
-  managed containers and re-register with RM. From the RM's perspective, these
-  re-registered NodeManagers are similar to newly joining NMs.
-  AMs (e.g. the MapReduce AM) are expected to shut down when they receive the
-  re-sync command. After RM restarts and loads all the application metadata
-  and credentials from the state-store and populates them into memory, it
-  will create a new attempt (i.e. ApplicationMaster) for each application
-  that was not yet completed and re-kick that application as usual. As
-  described before, the previously running applications' work is lost in this
-  manner since they are essentially killed by RM via the re-sync command on
-  restart.
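For operators, a minimal yarn-site.xml sketch for enabling state-store-backed RM restart. The property names are the standard ones for this era of Hadoop; the HDFS URI is a placeholder for your cluster, and other store implementations (e.g. ZKRMStateStore) can be substituted for the value of yarn.resourcemanager.store.class.

```xml
<!-- Enable RM recovery and back it with FileSystemRMStateStore.
     The fs.state-store.uri value below is a placeholder. -->
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.fs.state-store.uri</name>
  <value>hdfs://namenode:8020/rmstore</value>
</property>
```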
-
-** Phase 2: Work-preserving RM restart
-
-  As of Hadoop 2.6.0, we further enhanced the RM restart feature so that
-  applications running on the YARN cluster are not killed if RM restarts.
-
-  Beyond all the groundwork that has been done in Phase 1 

[8/8] hadoop git commit: YARN-3125. Made the distributed shell use timeline service next gen and add an integration test for it. Contributed by Junping Du and Li Lu.

2015-02-27 Thread zjshen
YARN-3125. Made the distributed shell use timeline service next gen and add an 
integration test for it. Contributed by Junping Du and Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf08f7f0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf08f7f0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf08f7f0

Branch: refs/heads/YARN-2928
Commit: bf08f7f0ed4900ce52f98137297dd1a47ba2a536
Parents: 667f69c
Author: Zhijie Shen zjs...@apache.org
Authored: Fri Feb 27 08:46:42 2015 -0800
Committer: Zhijie Shen zjs...@apache.org
Committed: Fri Feb 27 08:46:42 2015 -0800

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../distributedshell/ApplicationMaster.java | 163 +--
 .../applications/distributedshell/Client.java   |  20 ++-
 .../distributedshell/TestDistributedShell.java  |  79 +++--
 .../aggregator/BaseAggregatorService.java   |   6 +
 .../aggregator/PerNodeAggregatorServer.java |   4 +-
 6 files changed, 250 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf08f7f0/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index c659126..1f4a68d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -20,6 +20,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3087. Made the REST server of per-node aggregator work alone in NM
 daemon. (Li Lu via zjshen)
 
+YARN-3125. Made the distributed shell use timeline service next gen and
+add an integration test for it. (Junping Du and Li Lu via zjshen)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf08f7f0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
index a9a7091..db49166 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
@@ -207,6 +207,8 @@ public class ApplicationMaster {
   private int appMasterRpcPort = -1;
   // Tracking url to which app master publishes info for clients to monitor
   private String appMasterTrackingUrl = ;
+  
+  private boolean newTimelineService = false;
 
   // App Master configuration
   // No. of containers to run shell command on
@@ -360,7 +362,8 @@ public class ApplicationMaster {
         "No. of containers on which the shell command needs to be executed");
     opts.addOption("priority", true, "Application Priority. Default 0");
     opts.addOption("debug", false, "Dump out debug information");
-
+    opts.addOption("timeline_service_version", true,
+        "Version for timeline service");
     opts.addOption("help", false, "Print usage");
     CommandLine cliParser = new GnuParser().parse(opts, args);
 
@@ -499,13 +502,30 @@ public class ApplicationMaster {
 
     if (conf.getBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED,
       YarnConfiguration.DEFAULT_TIMELINE_SERVICE_ENABLED)) {
+      if (cliParser.hasOption("timeline_service_version")) {
+        String timelineServiceVersion =
+            cliParser.getOptionValue("timeline_service_version", "v1");
+        if (timelineServiceVersion.trim().equalsIgnoreCase("v1")) {
+          newTimelineService = false;
+        } else if (timelineServiceVersion.trim().equalsIgnoreCase("v2")) {
+          newTimelineService = true;
+        } else {
+          throw new IllegalArgumentException(
+              "timeline_service_version is not set properly, should be 'v1' or 'v2'");
+        }
+      }
       // Creating the Timeline Client
-      timelineClient = TimelineClient.createTimelineClient();
+      timelineClient = TimelineClient.createTimelineClient(
+          appAttemptID.getApplicationId());
       timelineClient.init(conf);
       timelineClient.start();
     } else {
       timelineClient = null;
       LOG.warn("Timeline 

[4/8] hadoop git commit: HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get files list for scanning (Contributed by J.Andreina)

2015-02-27 Thread zjshen
HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get files 
list for scanning (Contributed by J.Andreina)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4f75b156
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4f75b156
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4f75b156

Branch: refs/heads/YARN-2928
Commit: 4f75b15628a76881efc39054612dc128e23d27be
Parents: 2954e65
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Feb 27 16:36:28 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Feb 27 16:36:28 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../apache/hadoop/hdfs/server/datanode/DataNode.java|  2 +-
 .../hadoop/hdfs/server/datanode/DirectoryScanner.java   | 12 +---
 .../hdfs/server/datanode/TestDirectoryScanner.java  |  9 ++---
 4 files changed, 19 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f75b156/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ba553dc..8556afd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1040,6 +1040,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7774. Unresolved symbols error while compiling HDFS on Windows 7/32 
bit.
 (Kiran Kumar M R via cnauroth)
 
+HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get
+files list for scanning (J.Andreina via vinayakumarb)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f75b156/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index f233e02..92ddb7b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -815,7 +815,7 @@ public class DataNode extends ReconfigurableBase
       reason = "verifcation is not supported by SimulatedFSDataset";
     } 
     if (reason == null) {
-      directoryScanner = new DirectoryScanner(data, conf);
+      directoryScanner = new DirectoryScanner(this, data, conf);
       directoryScanner.start();
     } else {
       LOG.info("Periodic Directory Tree Verification scan is disabled because " +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f75b156/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 09c2914..c7ee21e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -63,6 +63,7 @@ public class DirectoryScanner implements Runnable {
   private final long scanPeriodMsecs;
   private volatile boolean shouldRun = false;
   private boolean retainDiffs = false;
+  private final DataNode datanode;
 
   final ScanInfoPerBlockPool diffs = new ScanInfoPerBlockPool();
   final Map<String, Stats> stats = new HashMap<String, Stats>();
@@ -308,7 +309,8 @@ public class DirectoryScanner implements Runnable {
 }
   }
 
-  DirectoryScanner(FsDatasetSpi<?> dataset, Configuration conf) {
+  DirectoryScanner(DataNode datanode, FsDatasetSpi<?> dataset, Configuration conf) {
+    this.datanode = datanode;
     this.dataset = dataset;
     int interval = conf.getInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_KEY,
         DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_DEFAULT);
@@ -547,7 +549,7 @@ public class DirectoryScanner implements Runnable {
     for (int i = 0; i < volumes.size(); i++) {
       if (isValid(dataset, volumes.get(i))) {
         ReportCompiler reportCompiler =
-          new ReportCompiler(volumes.get(i));
+          new ReportCompiler(datanode,volumes.get(i));
         Future<ScanInfoPerBlockPool> result = 
 

[1/8] hadoop git commit: YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main() should support generic options. Contributed by Konstantin Shvachko.

2015-02-27 Thread zjshen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 41a08ad40 - bf08f7f0e


YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main() should 
support generic options. Contributed by Konstantin Shvachko.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8ca0d957
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8ca0d957
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8ca0d957

Branch: refs/heads/YARN-2928
Commit: 8ca0d957c4b1076e801e1cdce5b44aa805de889c
Parents: bfbf076
Author: Konstantin V Shvachko s...@apache.org
Authored: Thu Feb 26 17:12:19 2015 -0800
Committer: Konstantin V Shvachko s...@apache.org
Committed: Thu Feb 26 17:12:19 2015 -0800

--
 .../java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java | 2 ++
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/yarn/server/nodemanager/NodeManager.java   | 4 +++-
 .../hadoop/yarn/server/resourcemanager/ResourceManager.java  | 3 +++
 .../apache/hadoop/yarn/server/webproxy/WebAppProxyServer.java| 2 ++
 5 files changed, 13 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8ca0d957/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
index 6d58040..252ac55 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistoryServer.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.service.AbstractService;
 import org.apache.hadoop.service.CompositeService;
 import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.GenericOptionsParser;
 import org.apache.hadoop.util.ShutdownHookManager;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
@@ -216,6 +217,7 @@ public class JobHistoryServer extends CompositeService {
   new CompositeServiceShutdownHook(jobHistoryServer),
   SHUTDOWN_HOOK_PRIORITY);
   YarnConfiguration conf = new YarnConfiguration(new JobConf());
+  new GenericOptionsParser(conf, args);
   jobHistoryServer.init(conf);
   jobHistoryServer.start();
 } catch (Throwable t) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8ca0d957/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a635592..40f187b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -330,6 +330,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3217. Remove httpclient dependency from hadoop-yarn-server-web-proxy.
 (Brahma Reddy Battula via ozawa).
 
+YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main()
+should support generic options. (shv)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8ca0d957/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
index 7584138..a4be120 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.service.CompositeService;
+import 

[7/8] hadoop git commit: Merge remote-tracking branch 'apache/trunk' into YARN-2928

2015-02-27 Thread zjshen
Merge remote-tracking branch 'apache/trunk' into YARN-2928


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/667f69c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/667f69c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/667f69c3

Branch: refs/heads/YARN-2928
Commit: 667f69c3627f12b45ee398e70a9055b5e31a8a86
Parents: 41a08ad 01a1621
Author: Zhijie Shen zjs...@apache.org
Authored: Fri Feb 27 08:42:37 2015 -0800
Committer: Zhijie Shen zjs...@apache.org
Committed: Fri Feb 27 08:42:37 2015 -0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../main/java/org/apache/hadoop/io/MapFile.java | 143 +
 .../java/org/apache/hadoop/io/TestMapFile.java  |  56 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   9 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |   3 +-
 .../hadoop/hdfs/server/datanode/DataNode.java   |   2 +-
 .../hdfs/server/datanode/DirectoryScanner.java  |  12 +-
 .../apache/hadoop/hdfs/TestDFSOutputStream.java |  31 ++
 .../server/datanode/TestDirectoryScanner.java   |   9 +-
 .../src/test/resources/testHDFSConf.xml |   4 +-
 .../mapreduce/v2/hs/JobHistoryServer.java   |   2 +
 hadoop-yarn-project/CHANGES.txt |   6 +
 .../hadoop/yarn/conf/YarnConfiguration.java |   9 +
 .../src/main/resources/yarn-default.xml |  15 +
 .../yarn/server/nodemanager/NodeManager.java|   4 +-
 .../server/resourcemanager/ResourceManager.java |   3 +
 .../recovery/FileSystemRMStateStore.java| 303 ++-
 .../recovery/TestFSRMStateStore.java|   5 +
 .../yarn/server/webproxy/WebAppProxyServer.java |   2 +
 19 files changed, 537 insertions(+), 84 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/667f69c3/hadoop-yarn-project/CHANGES.txt
--



[3/8] hadoop git commit: Revert HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.

2015-02-27 Thread zjshen
Revert HDFS-7769. TestHDFSCLI should not create files in hdfs project root 
dir.

This reverts commit 7c6b6547eeed110e1a842e503bfd33afe04fa814.

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2954e654
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2954e654
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2954e654

Branch: refs/heads/YARN-2928
Commit: 2954e654677bd1807d22fae7becc4464d9eff00b
Parents: 48c7ee7
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Fri Feb 27 18:25:32 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Fri Feb 27 18:25:32 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 ---
 .../hadoop-hdfs/src/test/resources/testHDFSConf.xml  | 4 ++--
 2 files changed, 2 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2954e654/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ae83898..ba553dc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -975,9 +975,6 @@ Release 2.7.0 - UNRELEASED
 HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause
 DataNode to register successfully with only one NameNode.(vinayakumarb)
 
-HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
-(szetszwo)
-
 HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage.
 (Rakesh R and shv)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2954e654/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
index 2d3de1f..e59b05a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
@@ -16483,8 +16483,8 @@
         <command>-fs NAMENODE -mkdir -p /user/USERNAME/dir1</command>
         <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data15bytes /user/USERNAME/dir1</command>
         <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data30bytes /user/USERNAME/dir1</command>
-        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 CLITEST_DATA/file</command>
-        <command>-cat CLITEST_DATA/file</command>
+        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 data</command>
+        <command>-cat data</command>
       </test-commands>
       <cleanup-commands>
         <command>-fs NAMENODE -rm -r /user/USERNAME</command>



[6/8] hadoop git commit: YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail due to IOException. Contributed by Zhihai Xu.

2015-02-27 Thread zjshen
YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail due to 
IOException. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/01a16219
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/01a16219
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/01a16219

Branch: refs/heads/YARN-2928
Commit: 01a1621930df17a745dd37892996c68fca3447d1
Parents: a979f3b
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Feb 28 00:56:44 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Feb 28 00:56:44 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |   9 +
 .../src/main/resources/yarn-default.xml |  15 +
 .../recovery/FileSystemRMStateStore.java| 303 ++-
 .../recovery/TestFSRMStateStore.java|   5 +
 5 files changed, 265 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 40f187b..38dd9fa 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -333,6 +333,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main()
 should support generic options. (shv)
 
+YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail 
+due to IOException. (Zhihai Xu via ozawa)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 05c6cbf..ff06eea 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -508,6 +508,15 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_FS_RM_STATE_STORE_RETRY_POLICY_SPEC =
   "2000, 500";
 
+  public static final String FS_RM_STATE_STORE_NUM_RETRIES =
+  RM_PREFIX + "fs.state-store.num-retries";
+  public static final int DEFAULT_FS_RM_STATE_STORE_NUM_RETRIES = 0;
+
+  public static final String FS_RM_STATE_STORE_RETRY_INTERVAL_MS =
+  RM_PREFIX + "fs.state-store.retry-interval-ms";
+  public static final long DEFAULT_FS_RM_STATE_STORE_RETRY_INTERVAL_MS =
+  1000L;
+
   public static final String RM_LEVELDB_STORE_PATH = RM_PREFIX
   + "leveldb-state-store.path";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index a7958a5..df730d5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -420,6 +420,21 @@
  </property>
 
  <property>
+    <description>the number of retries to recover from IOException in
+    FileSystemRMStateStore.
+    </description>
+    <name>yarn.resourcemanager.fs.state-store.num-retries</name>
+    <value>0</value>
+  </property>
+
+  <property>
+    <description>Retry interval in milliseconds in FileSystemRMStateStore.
+    </description>
+    <name>yarn.resourcemanager.fs.state-store.retry-interval-ms</name>
+    <value>1000</value>
+  </property>
+
+  <property>
    <description>Local path where the RM state will be stored when using
    org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore
    as the value for yarn.resourcemanager.store.class</description>

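For operators, the two settings introduced by YARN-2820 would be set in yarn-site.xml; a hypothetical fragment with illustrative values (the shipped defaults are 0 retries and a 1000 ms interval):

```xml
<!-- Illustrative yarn-site.xml fragment: retry FileSystemRMStateStore
     operations up to 5 times, waiting 2 seconds between attempts.
     The values here are examples, not defaults. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.fs.state-store.num-retries</name>
    <value>5</value>
  </property>
  <property>
    <name>yarn.resourcemanager.fs.state-store.retry-interval-ms</name>
    <value>2000</value>
  </property>
</configuration>
```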
http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
--
diff --git 

[2/8] hadoop git commit: HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile. Contributed by Vinayakumar B.

2015-02-27 Thread zjshen
HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48c7ee75
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48c7ee75
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48c7ee75

Branch: refs/heads/YARN-2928
Commit: 48c7ee7553af94a57952bca03b49c04b9bbfab45
Parents: 8ca0d95
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Fri Feb 27 17:46:07 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Fri Feb 27 17:46:07 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../main/java/org/apache/hadoop/io/MapFile.java | 143 +++
 .../java/org/apache/hadoop/io/TestMapFile.java  |  56 
 3 files changed, 202 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c7ee75/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1d9a6d4..6d4da77 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -445,6 +445,9 @@ Release 2.7.0 - UNRELEASED
 
 HADOOP-11510. Expose truncate API via FileContext. (yliu)
 
+HADOOP-11569. Provide Merge API for MapFile to merge multiple similar 
MapFiles
+to one MapFile. (Vinayakumar B via ozawa)
+
   IMPROVEMENTS
 
 HADOOP-11483. HardLink.java should use the jdk7 createLink method 
(aajisaka)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c7ee75/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
index 84c9dcc..ee76458 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
@@ -25,6 +25,7 @@ import java.util.Arrays;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -824,6 +825,148 @@ public class MapFile {
 return cnt;
   }
 
+  /**
+   * Class to merge multiple MapFiles of same Key and Value types to one 
MapFile
+   */
+  public static class Merger {
+private Configuration conf;
+private WritableComparator comparator = null;
+private Reader[] inReaders;
+private Writer outWriter;
+private Class<Writable> valueClass = null;
+private Class<WritableComparable> keyClass = null;
+
+public Merger(Configuration conf) throws IOException {
+  this.conf = conf;
+}
+
+/**
+ * Merge multiple MapFiles to one MapFile
+ *
+ * @param inMapFiles
+ * @param outMapFile
+ * @throws IOException
+ */
+public void merge(Path[] inMapFiles, boolean deleteInputs,
+Path outMapFile) throws IOException {
+  try {
+open(inMapFiles, outMapFile);
+mergePass();
+  } finally {
+close();
+  }
+  if (deleteInputs) {
+for (int i = 0; i < inMapFiles.length; i++) {
+  Path path = inMapFiles[i];
+  delete(path.getFileSystem(conf), path.toString());
+}
+  }
+}
+
+/*
+ * Open all input files for reading and verify the key and value types. And
+ * open Output file for writing
+ */
+@SuppressWarnings("unchecked")
+private void open(Path[] inMapFiles, Path outMapFile) throws IOException {
+  inReaders = new Reader[inMapFiles.length];
+  for (int i = 0; i < inMapFiles.length; i++) {
+    Reader reader = new Reader(inMapFiles[i], conf);
+    if (keyClass == null || valueClass == null) {
+      keyClass = (Class<WritableComparable>) reader.getKeyClass();
+      valueClass = (Class<Writable>) reader.getValueClass();
+    } else if (keyClass != reader.getKeyClass()
+        || valueClass != reader.getValueClass()) {
+      throw new HadoopIllegalArgumentException(
+          "Input files cannot be merged as they"
+          + " have different Key and Value classes");
+    }
+    inReaders[i] = reader;
+  }
+
+  if (comparator == null) {
+    Class<? extends WritableComparable> cls;
+    cls = keyClass.asSubclass(WritableComparable.class);
+

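The mergePass() step (truncated above) walks the sorted Readers with a WritableComparator, repeatedly emitting the smallest current key. A minimal pure-Java sketch of that k-way merge pattern, using plain integers instead of Hadoop's Writable types (all names here are illustrative, not part of the patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMergeSketch {
    // Merge several individually sorted lists into one sorted output by
    // always emitting the smallest current head -- the same pattern
    // MapFile.Merger applies over its sorted Readers.
    static List<Integer> merge(List<List<Integer>> inputs) {
        // Each queue entry is {value, listIndex, offsetInList}.
        PriorityQueue<int[]> pq =
            new PriorityQueue<>(Comparator.comparingInt(a -> a[0]));
        for (int i = 0; i < inputs.size(); i++) {
            if (!inputs.get(i).isEmpty()) {
                pq.add(new int[]{inputs.get(i).get(0), i, 0});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!pq.isEmpty()) {
            int[] head = pq.poll();
            out.add(head[0]);
            int next = head[2] + 1;
            List<Integer> src = inputs.get(head[1]);
            if (next < src.size()) {
                // Advance the reader this entry came from.
                pq.add(new int[]{src.get(next), head[1], next});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
            Arrays.asList(1, 4, 9), Arrays.asList(2, 3), Arrays.asList(5))));
    }
}
```

The real Merger additionally validates that all inputs share the same key and value classes before merging, as shown in the open() method above.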
[5/8] hadoop git commit: HDFS-7308. Change the packet chunk size computation in DFSOutputStream in order to enforce packet size <= 64kB. Contributed by Takuya Fukudome

2015-02-27 Thread zjshen
HDFS-7308. Change the packet chunk size computation in DFSOutputStream in order 
to enforce packet size <= 64kB.  Contributed by Takuya Fukudome


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a979f3b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a979f3b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a979f3b5

Branch: refs/heads/YARN-2928
Commit: a979f3b58fafebbd6118ec1f861cf3f62c59c9cb
Parents: 4f75b15
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Fri Feb 27 23:45:37 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Fri Feb 27 23:45:37 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  3 +-
 .../apache/hadoop/hdfs/TestDFSOutputStream.java | 31 
 3 files changed, 36 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a979f3b5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8556afd..b2422d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -682,6 +682,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7819. Log WARN message for the blocks which are not in Block ID based
 layout (Rakesh R via Colin P. McCabe)
 
+HDFS-7308. Change the packet chunk size computation in DFSOutputStream in
+order to enforce packet size <= 64kB.  (Takuya Fukudome via szetszwo)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a979f3b5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 9d7dca9..b3e8c97 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -1851,8 +1851,9 @@ public class DFSOutputStream extends FSOutputSummer
   }
 
   private void computePacketChunkSize(int psize, int csize) {
+final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN;
 final int chunkSize = csize + getChecksumSize();
-chunksPerPacket = Math.max(psize/chunkSize, 1);
+chunksPerPacket = Math.max(bodySize/chunkSize, 1);
 packetSize = chunkSize*chunksPerPacket;
 if (DFSClient.LOG.isDebugEnabled()) {
  DFSClient.LOG.debug("computePacketChunkSize: src=" + src +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a979f3b5/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
index 678a3b8..7269e39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
@@ -18,6 +18,8 @@
 package org.apache.hadoop.hdfs;
 
 import java.io.IOException;
+import java.lang.reflect.Field;
+import java.lang.reflect.Method;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.hadoop.conf.Configuration;
@@ -66,6 +68,35 @@ public class TestDFSOutputStream {
 dos.close();
   }
 
+  /**
+   * The computePacketChunkSize() method of DFSOutputStream should set the actual
+   * packet size < 64kB. See HDFS-7308 for details.
+   */
+  @Test
+  public void testComputePacketChunkSize()
+  throws Exception {
+DistributedFileSystem fs = cluster.getFileSystem();
+FSDataOutputStream os = fs.create(new Path("/test"));
+DFSOutputStream dos = (DFSOutputStream) Whitebox.getInternalState(os,
+"wrappedStream");
+
+final int packetSize = 64*1024;
+final int bytesPerChecksum = 512;
+
+Method method = dos.getClass().getDeclaredMethod("computePacketChunkSize",
+int.class, int.class);
+method.setAccessible(true);
+method.invoke(dos, packetSize, bytesPerChecksum);
+
+Field field = dos.getClass().getDeclaredField("packetSize");
+field.setAccessible(true);
+
+Assert.assertTrue((Integer) field.get(dos) + 33 < packetSize);
+// If PKT_MAX_HEADER_LEN is 257, actual 

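The effect of the one-line fix above can be checked with plain arithmetic. A hedged sketch, assuming a 33-byte maximum packet header as the "+ 33" in the test implies and a 4-byte CRC32 checksum per chunk (constants here are illustrative stand-ins for PacketHeader.PKT_MAX_HEADER_LEN and getChecksumSize()):

```java
public class PacketChunkSizeDemo {
    // Assumed constants mirroring the patch context.
    static final int PKT_MAX_HEADER_LEN = 33; // max packet header bytes
    static final int CHECKSUM_SIZE = 4;       // CRC32 checksum per chunk

    // Post-patch computation: reserve header room before dividing.
    static int packetBodySize(int psize, int csize) {
        final int bodySize = psize - PKT_MAX_HEADER_LEN;
        final int chunkSize = csize + CHECKSUM_SIZE;
        final int chunksPerPacket = Math.max(bodySize / chunkSize, 1);
        return chunkSize * chunksPerPacket;
    }

    public static void main(String[] args) {
        final int psize = 64 * 1024; // 64 kB target packet size
        final int csize = 512;       // bytes per checksum chunk
        final int chunkSize = csize + CHECKSUM_SIZE; // 516

        // Old computation filled the body up to psize, so body + header
        // could exceed 64 kB: 127 chunks * 516 = 65532, and 65532 + 33 > 65536.
        int oldBody = chunkSize * Math.max(psize / chunkSize, 1);
        // New computation divides the header-adjusted body size instead,
        // yielding 126 chunks * 516 = 65016, and 65016 + 33 <= 65536.
        int newBody = packetBodySize(psize, csize);
        System.out.println(oldBody + " " + newBody);
    }
}
```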
hadoop git commit: HDFS-7308. Change the packet chunk size computation in DFSOutputStream in order to enforce packet size <= 64kB. Contributed by Takuya Fukudome

2015-02-27 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4f75b1562 -> a979f3b58


HDFS-7308. Change the packet chunk size computation in DFSOutputStream in order 
to enforce packet size <= 64kB.  Contributed by Takuya Fukudome


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a979f3b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a979f3b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a979f3b5

Branch: refs/heads/trunk
Commit: a979f3b58fafebbd6118ec1f861cf3f62c59c9cb
Parents: 4f75b15
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Fri Feb 27 23:45:37 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Fri Feb 27 23:45:37 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  3 +-
 .../apache/hadoop/hdfs/TestDFSOutputStream.java | 31 
 3 files changed, 36 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a979f3b5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8556afd..b2422d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -682,6 +682,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7819. Log WARN message for the blocks which are not in Block ID based
 layout (Rakesh R via Colin P. McCabe)
 
+HDFS-7308. Change the packet chunk size computation in DFSOutputStream in
+order to enforce packet size <= 64kB.  (Takuya Fukudome via szetszwo)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a979f3b5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 9d7dca9..b3e8c97 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -1851,8 +1851,9 @@ public class DFSOutputStream extends FSOutputSummer
   }
 
   private void computePacketChunkSize(int psize, int csize) {
+final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN;
 final int chunkSize = csize + getChecksumSize();
-chunksPerPacket = Math.max(psize/chunkSize, 1);
+chunksPerPacket = Math.max(bodySize/chunkSize, 1);
 packetSize = chunkSize*chunksPerPacket;
 if (DFSClient.LOG.isDebugEnabled()) {
  DFSClient.LOG.debug("computePacketChunkSize: src=" + src +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a979f3b5/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
index 678a3b8..7269e39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
@@ -18,6 +18,8 @@
 package org.apache.hadoop.hdfs;
 
 import java.io.IOException;
+import java.lang.reflect.Field;
+import java.lang.reflect.Method;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.hadoop.conf.Configuration;
@@ -66,6 +68,35 @@ public class TestDFSOutputStream {
 dos.close();
   }
 
+  /**
+   * The computePacketChunkSize() method of DFSOutputStream should set the actual
+   * packet size < 64kB. See HDFS-7308 for details.
+   */
+  @Test
+  public void testComputePacketChunkSize()
+  throws Exception {
+DistributedFileSystem fs = cluster.getFileSystem();
+FSDataOutputStream os = fs.create(new Path("/test"));
+DFSOutputStream dos = (DFSOutputStream) Whitebox.getInternalState(os,
+"wrappedStream");
+
+final int packetSize = 64*1024;
+final int bytesPerChecksum = 512;
+
+Method method = dos.getClass().getDeclaredMethod("computePacketChunkSize",
+int.class, int.class);
+method.setAccessible(true);
+method.invoke(dos, packetSize, bytesPerChecksum);
+
+Field field = dos.getClass().getDeclaredField("packetSize");
+field.setAccessible(true);
+
+Assert.assertTrue((Integer) 

hadoop git commit: YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail due to IOException. Contributed by Zhihai Xu.

2015-02-27 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 657b027bb -> 79f73f461


YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail due to 
IOException. Contributed by Zhihai Xu.

(cherry picked from commit 01a1621930df17a745dd37892996c68fca3447d1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79f73f46
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79f73f46
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79f73f46

Branch: refs/heads/branch-2
Commit: 79f73f461362d6d574e248f65d1e0dc6e895524a
Parents: 657b027
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Feb 28 00:56:44 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Feb 28 00:57:01 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |   9 +
 .../src/main/resources/yarn-default.xml |  15 +
 .../recovery/FileSystemRMStateStore.java| 303 ++-
 .../recovery/TestFSRMStateStore.java|   5 +
 5 files changed, 265 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79f73f46/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 801192a..b016cbb 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -294,6 +294,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main()
 should support generic options. (shv)
 
+YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail 
+due to IOException. (Zhihai Xu via ozawa)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79f73f46/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 544ae1b..8cc7ad7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -508,6 +508,15 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_FS_RM_STATE_STORE_RETRY_POLICY_SPEC =
   "2000, 500";
 
+  public static final String FS_RM_STATE_STORE_NUM_RETRIES =
+  RM_PREFIX + "fs.state-store.num-retries";
+  public static final int DEFAULT_FS_RM_STATE_STORE_NUM_RETRIES = 0;
+
+  public static final String FS_RM_STATE_STORE_RETRY_INTERVAL_MS =
+  RM_PREFIX + "fs.state-store.retry-interval-ms";
+  public static final long DEFAULT_FS_RM_STATE_STORE_RETRY_INTERVAL_MS =
+  1000L;
+
   public static final String RM_LEVELDB_STORE_PATH = RM_PREFIX
   + "leveldb-state-store.path";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79f73f46/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 0a1d3db..f311f16 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -420,6 +420,21 @@
  </property>
 
  <property>
+    <description>the number of retries to recover from IOException in
+    FileSystemRMStateStore.
+    </description>
+    <name>yarn.resourcemanager.fs.state-store.num-retries</name>
+    <value>0</value>
+  </property>
+
+  <property>
+    <description>Retry interval in milliseconds in FileSystemRMStateStore.
+    </description>
+    <name>yarn.resourcemanager.fs.state-store.retry-interval-ms</name>
+    <value>1000</value>
+  </property>
+
+  <property>
    <description>Local path where the RM state will be stored when using
    org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore
    as the value for yarn.resourcemanager.store.class</description>


hadoop git commit: YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail due to IOException. Contributed by Zhihai Xu.

2015-02-27 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk a979f3b58 -> 01a162193


YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail due to 
IOException. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/01a16219
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/01a16219
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/01a16219

Branch: refs/heads/trunk
Commit: 01a1621930df17a745dd37892996c68fca3447d1
Parents: a979f3b
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Feb 28 00:56:44 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Feb 28 00:56:44 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |   9 +
 .../src/main/resources/yarn-default.xml |  15 +
 .../recovery/FileSystemRMStateStore.java| 303 ++-
 .../recovery/TestFSRMStateStore.java|   5 +
 5 files changed, 265 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 40f187b..38dd9fa 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -333,6 +333,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3255. RM, NM, JobHistoryServer, and WebAppProxyServer's main()
 should support generic options. (shv)
 
+YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail 
+due to IOException. (Zhihai Xu via ozawa)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 05c6cbf..ff06eea 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -508,6 +508,15 @@ public class YarnConfiguration extends Configuration {
   public static final String DEFAULT_FS_RM_STATE_STORE_RETRY_POLICY_SPEC =
   "2000, 500";
 
+  public static final String FS_RM_STATE_STORE_NUM_RETRIES =
+  RM_PREFIX + "fs.state-store.num-retries";
+  public static final int DEFAULT_FS_RM_STATE_STORE_NUM_RETRIES = 0;
+
+  public static final String FS_RM_STATE_STORE_RETRY_INTERVAL_MS =
+  RM_PREFIX + "fs.state-store.retry-interval-ms";
+  public static final long DEFAULT_FS_RM_STATE_STORE_RETRY_INTERVAL_MS =
+  1000L;
+
   public static final String RM_LEVELDB_STORE_PATH = RM_PREFIX
   + "leveldb-state-store.path";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index a7958a5..df730d5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -420,6 +420,21 @@
  </property>
 
  <property>
+    <description>the number of retries to recover from IOException in
+    FileSystemRMStateStore.
+    </description>
+    <name>yarn.resourcemanager.fs.state-store.num-retries</name>
+    <value>0</value>
+  </property>
+
+  <property>
+    <description>Retry interval in milliseconds in FileSystemRMStateStore.
+    </description>
+    <name>yarn.resourcemanager.fs.state-store.retry-interval-ms</name>
+    <value>1000</value>
+  </property>
+
+  <property>
    <description>Local path where the RM state will be stored when using
    org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore
    as the value for yarn.resourcemanager.store.class</description>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01a16219/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java

hadoop git commit: HDFS-7308. Change the packet chunk size computation in DFSOutputStream in order to enforce packet size <= 64kB. Contributed by Takuya Fukudome

2015-02-27 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 bc60404ea -> 657b027bb


HDFS-7308. Change the packet chunk size computation in DFSOutputStream in order 
to enforce packet size <= 64kB.  Contributed by Takuya Fukudome


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/657b027b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/657b027b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/657b027b

Branch: refs/heads/branch-2
Commit: 657b027bb2be3ae80c2eb7bc5272a33b8f029f08
Parents: bc60404
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Fri Feb 27 23:45:37 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Fri Feb 27 23:46:42 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  3 +-
 .../apache/hadoop/hdfs/TestDFSOutputStream.java | 31 
 3 files changed, 36 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/657b027b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 998715e..e347aba 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -384,6 +384,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7819. Log WARN message for the blocks which are not in Block ID based
 layout (Rakesh R via Colin P. McCabe)
 
+HDFS-7308. Change the packet chunk size computation in DFSOutputStream in
+order to enforce packet size <= 64kB.  (Takuya Fukudome via szetszwo)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/657b027b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 4d86c43..14d39c6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -1851,8 +1851,9 @@ public class DFSOutputStream extends FSOutputSummer
   }
 
   private void computePacketChunkSize(int psize, int csize) {
+final int bodySize = psize - PacketHeader.PKT_MAX_HEADER_LEN;
 final int chunkSize = csize + getChecksumSize();
-chunksPerPacket = Math.max(psize/chunkSize, 1);
+chunksPerPacket = Math.max(bodySize/chunkSize, 1);
 packetSize = chunkSize*chunksPerPacket;
 if (DFSClient.LOG.isDebugEnabled()) {
  DFSClient.LOG.debug("computePacketChunkSize: src=" + src +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/657b027b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
index 678a3b8..7269e39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
@@ -18,6 +18,8 @@
 package org.apache.hadoop.hdfs;
 
 import java.io.IOException;
+import java.lang.reflect.Field;
+import java.lang.reflect.Method;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.hadoop.conf.Configuration;
@@ -66,6 +68,35 @@ public class TestDFSOutputStream {
 dos.close();
   }
 
+  /**
+   * The computePacketChunkSize() method of DFSOutputStream should set the actual
+   * packet size < 64kB. See HDFS-7308 for details.
+   */
+  @Test
+  public void testComputePacketChunkSize()
+  throws Exception {
+DistributedFileSystem fs = cluster.getFileSystem();
+FSDataOutputStream os = fs.create(new Path("/test"));
+DFSOutputStream dos = (DFSOutputStream) Whitebox.getInternalState(os,
+"wrappedStream");
+
+final int packetSize = 64*1024;
+final int bytesPerChecksum = 512;
+
+Method method = dos.getClass().getDeclaredMethod("computePacketChunkSize",
+int.class, int.class);
+method.setAccessible(true);
+method.invoke(dos, packetSize, bytesPerChecksum);
+
+Field field = dos.getClass().getDeclaredField("packetSize");
+field.setAccessible(true);
+
+

hadoop git commit: HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in hdfs-default.xml. Contributed by Kai Sasaki.

2015-02-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 01a162193 -> 8719cdd4f


HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in 
hdfs-default.xml. Contributed by Kai Sasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8719cdd4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8719cdd4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8719cdd4

Branch: refs/heads/trunk
Commit: 8719cdd4f68abb91bf9459bca2a5467dafb6b5ae
Parents: 01a1621
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Feb 27 12:17:34 2015 -0800
Committer: Akira Ajisaka aajis...@apache.org
Committed: Fri Feb 27 12:17:34 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml  | 11 +++
 2 files changed, 14 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8719cdd4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b2422d6..b4b0087 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -685,6 +685,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7308. Change the packet chunk size computation in DFSOutputStream in
order to enforce packet size <= 64kB.  (Takuya Fukudome via szetszwo)
 
+HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in
+hdfs-default.xml. (Kai Sasaki via aajisaka)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8719cdd4/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 85d2273..66fe86c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -145,6 +145,17 @@
 </property>
 
 <property>
+  <name>dfs.namenode.heartbeat.recheck-interval</name>
+  <value>300000</value>
+  <description>
+    This time decides the interval to check for expired datanodes.
+    With this value and dfs.heartbeat.interval, the interval of
+    deciding the datanode is stale or not is also calculated.
+    The unit of this configuration is millisecond.
+  </description>
+</property>
+
+<property>
   <name>dfs.http.policy</name>
   <value>HTTP_ONLY</value>
   <description>Decide if HTTPS(SSL) is supported on HDFS

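As the new description notes, the dead-node window is derived from both dfs.namenode.heartbeat.recheck-interval and dfs.heartbeat.interval. A minimal sketch of that derivation, mirroring the NameNode's `2 * recheck + 10 * heartbeat` formula (treat the exact constants as an assumption when targeting a specific release):

```java
// Sketch of how the NameNode computes the dead-node expiry window from the
// two settings documented above. HeartbeatExpiry is an illustrative class,
// not Hadoop API.
public class HeartbeatExpiry {
    // dfs.namenode.heartbeat.recheck-interval is in milliseconds;
    // dfs.heartbeat.interval is in seconds.
    static long expiryMillis(long recheckIntervalMs, long heartbeatIntervalSec) {
        return 2 * recheckIntervalMs + 10 * 1000 * heartbeatIntervalSec;
    }

    public static void main(String[] args) {
        // Defaults: 300000 ms recheck, 3 s heartbeat -> 630000 ms (10.5 min).
        System.out.println(expiryMillis(300_000L, 3L));
    }
}
```

With both defaults in place, a datanode is declared dead only after roughly ten and a half minutes of missed heartbeats.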


hadoop git commit: HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in hdfs-default.xml. Contributed by Kai Sasaki.

2015-02-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 79f73f461 -> 0f9289e84


HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in 
hdfs-default.xml. Contributed by Kai Sasaki.

(cherry picked from commit 8719cdd4f68abb91bf9459bca2a5467dafb6b5ae)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0f9289e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0f9289e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0f9289e8

Branch: refs/heads/branch-2
Commit: 0f9289e848ea0ea448c32534ca0c105654218c18
Parents: 79f73f4
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Feb 27 12:17:34 2015 -0800
Committer: Akira Ajisaka aajis...@apache.org
Committed: Fri Feb 27 12:18:46 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml  | 11 +++
 2 files changed, 14 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f9289e8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e347aba..e1e7dcd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -387,6 +387,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7308. Change the packet chunk size computation in DFSOutputStream in
order to enforce packet size <= 64kB.  (Takuya Fukudome via szetszwo)
 
+HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in
+hdfs-default.xml. (Kai Sasaki via aajisaka)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f9289e8/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 2981db2..16976dd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -145,6 +145,17 @@
 </property>
 
 <property>
+  <name>dfs.namenode.heartbeat.recheck-interval</name>
+  <value>300000</value>
+  <description>
+    This time decides the interval to check for expired datanodes.
+    With this value and dfs.heartbeat.interval, the interval of
+    deciding the datanode is stale or not is also calculated.
+    The unit of this configuration is millisecond.
+  </description>
+</property>
+
+<property>
   <name>dfs.http.policy</name>
   <value>HTTP_ONLY</value>
   <description>Decide if HTTPS(SSL) is supported on HDFS



hadoop git commit: HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile. Contributed by Vinayakumar B.

2015-02-27 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9e67f2cb0 -> 02df51497


HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B.

(cherry picked from commit 48c7ee7553af94a57952bca03b49c04b9bbfab45)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02df5149
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02df5149
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02df5149

Branch: refs/heads/branch-2
Commit: 02df51497fdb60953c58e355aeab9106c6e78203
Parents: 9e67f2c
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Fri Feb 27 17:46:07 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Fri Feb 27 17:46:29 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../main/java/org/apache/hadoop/io/MapFile.java | 143 +++
 .../java/org/apache/hadoop/io/TestMapFile.java  |  56 
 3 files changed, 202 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02df5149/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e81ab67..c3a54df 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -33,6 +33,9 @@ Release 2.7.0 - UNRELEASED
 
 HADOOP-11510. Expose truncate API via FileContext. (yliu)
 
+HADOOP-11569. Provide Merge API for MapFile to merge multiple similar 
MapFiles
+to one MapFile. (Vinayakumar B via ozawa)
+
   IMPROVEMENTS
 
 HADOOP-11483. HardLink.java should use the jdk7 createLink method 
(aajisaka)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02df5149/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
index 84c9dcc..ee76458 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
@@ -25,6 +25,7 @@ import java.util.Arrays;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -824,6 +825,148 @@ public class MapFile {
 return cnt;
   }
 
+  /**
+   * Class to merge multiple MapFiles of same Key and Value types to one 
MapFile
+   */
+  public static class Merger {
+private Configuration conf;
+private WritableComparator comparator = null;
+private Reader[] inReaders;
+private Writer outWriter;
+private Class<Writable> valueClass = null;
+private Class<WritableComparable> keyClass = null;
+
+public Merger(Configuration conf) throws IOException {
+  this.conf = conf;
+}
+
+/**
+ * Merge multiple MapFiles to one Mapfile
+ *
+ * @param inMapFiles
+ * @param outMapFile
+ * @throws IOException
+ */
+public void merge(Path[] inMapFiles, boolean deleteInputs,
+Path outMapFile) throws IOException {
+  try {
+open(inMapFiles, outMapFile);
+mergePass();
+  } finally {
+close();
+  }
+  if (deleteInputs) {
+for (int i = 0; i < inMapFiles.length; i++) {
+  Path path = inMapFiles[i];
+  delete(path.getFileSystem(conf), path.toString());
+}
+  }
+}
+
+/*
+ * Open all input files for reading and verify the key and value types. And
+ * open Output file for writing
+ */
+@SuppressWarnings("unchecked")
+private void open(Path[] inMapFiles, Path outMapFile) throws IOException {
+  inReaders = new Reader[inMapFiles.length];
+for (int i = 0; i < inMapFiles.length; i++) {
+Reader reader = new Reader(inMapFiles[i], conf);
+if (keyClass == null || valueClass == null) {
+  keyClass = (Class<WritableComparable>) reader.getKeyClass();
+  valueClass = (Class<Writable>) reader.getValueClass();
+} else if (keyClass != reader.getKeyClass()
+|| valueClass != reader.getValueClass()) {
+  throw new HadoopIllegalArgumentException(
+  "Input files cannot be merged as they"
+  + " have different Key and Value classes");
+}
+inReaders[i] = reader;
+  }
+
+  
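The diff is truncated before `mergePass`, but the body of the Merger is a classic k-way merge: each sorted input contributes its current entry, and the smallest key is repeatedly written out. A self-contained sketch of that algorithm over plain sorted lists (integers and iterators stand in for the MapFile Readers and the WritableComparator; this is not the actual Hadoop code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative k-way merge: the pattern Merger.mergePass applies across its
// sorted input readers, shown here over in-memory lists.
public class KWayMerge {
    static List<Integer> merge(List<List<Integer>> sortedInputs) {
        // Each heap entry is {currentValue, inputIndex}, ordered by value,
        // analogous to ordering readers by their current key.
        PriorityQueue<int[]> heap =
            new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        List<Iterator<Integer>> its = new ArrayList<>();
        for (int i = 0; i < sortedInputs.size(); i++) {
            Iterator<Integer> it = sortedInputs.get(i).iterator();
            its.add(it);
            if (it.hasNext()) heap.add(new int[]{it.next(), i});
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();     // smallest current key wins
            out.add(top[0]);             // write it to the merged output
            Iterator<Integer> it = its.get(top[1]);
            if (it.hasNext()) heap.add(new int[]{it.next(), top[1]}); // advance that input
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(Arrays.asList(
            Arrays.asList(1, 4, 7), Arrays.asList(2, 5), Arrays.asList(3, 6))));
        // [1, 2, 3, 4, 5, 6, 7]
    }
}
```

The heap keeps the per-step cost at O(log k) for k inputs, which is why the Merger can combine many MapFiles in a single sequential pass.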

hadoop git commit: HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles to one MapFile. Contributed by Vinayakumar B.

2015-02-27 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8ca0d957c -> 48c7ee755


HADOOP-11569. Provide Merge API for MapFile to merge multiple similar MapFiles 
to one MapFile. Contributed by Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48c7ee75
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48c7ee75
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48c7ee75

Branch: refs/heads/trunk
Commit: 48c7ee7553af94a57952bca03b49c04b9bbfab45
Parents: 8ca0d95
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Fri Feb 27 17:46:07 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Fri Feb 27 17:46:07 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../main/java/org/apache/hadoop/io/MapFile.java | 143 +++
 .../java/org/apache/hadoop/io/TestMapFile.java  |  56 
 3 files changed, 202 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c7ee75/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1d9a6d4..6d4da77 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -445,6 +445,9 @@ Release 2.7.0 - UNRELEASED
 
 HADOOP-11510. Expose truncate API via FileContext. (yliu)
 
+HADOOP-11569. Provide Merge API for MapFile to merge multiple similar 
MapFiles
+to one MapFile. (Vinayakumar B via ozawa)
+
   IMPROVEMENTS
 
 HADOOP-11483. HardLink.java should use the jdk7 createLink method 
(aajisaka)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c7ee75/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
index 84c9dcc..ee76458 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/MapFile.java
@@ -25,6 +25,7 @@ import java.util.Arrays;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -824,6 +825,148 @@ public class MapFile {
 return cnt;
   }
 
+  /**
+   * Class to merge multiple MapFiles of same Key and Value types to one 
MapFile
+   */
+  public static class Merger {
+private Configuration conf;
+private WritableComparator comparator = null;
+private Reader[] inReaders;
+private Writer outWriter;
+private Class<Writable> valueClass = null;
+private Class<WritableComparable> keyClass = null;
+
+public Merger(Configuration conf) throws IOException {
+  this.conf = conf;
+}
+
+/**
+ * Merge multiple MapFiles to one Mapfile
+ *
+ * @param inMapFiles
+ * @param outMapFile
+ * @throws IOException
+ */
+public void merge(Path[] inMapFiles, boolean deleteInputs,
+Path outMapFile) throws IOException {
+  try {
+open(inMapFiles, outMapFile);
+mergePass();
+  } finally {
+close();
+  }
+  if (deleteInputs) {
+for (int i = 0; i < inMapFiles.length; i++) {
+  Path path = inMapFiles[i];
+  delete(path.getFileSystem(conf), path.toString());
+}
+  }
+}
+
+/*
+ * Open all input files for reading and verify the key and value types. And
+ * open Output file for writing
+ */
+@SuppressWarnings("unchecked")
+private void open(Path[] inMapFiles, Path outMapFile) throws IOException {
+  inReaders = new Reader[inMapFiles.length];
+for (int i = 0; i < inMapFiles.length; i++) {
+Reader reader = new Reader(inMapFiles[i], conf);
+if (keyClass == null || valueClass == null) {
+  keyClass = (Class<WritableComparable>) reader.getKeyClass();
+  valueClass = (Class<Writable>) reader.getValueClass();
+} else if (keyClass != reader.getKeyClass()
+|| valueClass != reader.getValueClass()) {
+  throw new HadoopIllegalArgumentException(
+  "Input files cannot be merged as they"
+  + " have different Key and Value classes");
+}
+inReaders[i] = reader;
+  }
+
+  if (comparator == null) {
+Class<? extends WritableComparable> cls;

hadoop git commit: HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get files list for scanning (Contributed by J.Andreina)

2015-02-27 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2954e6546 -> 4f75b1562


HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get files 
list for scanning (Contributed by J.Andreina)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4f75b156
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4f75b156
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4f75b156

Branch: refs/heads/trunk
Commit: 4f75b15628a76881efc39054612dc128e23d27be
Parents: 2954e65
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Feb 27 16:36:28 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Feb 27 16:36:28 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../apache/hadoop/hdfs/server/datanode/DataNode.java|  2 +-
 .../hadoop/hdfs/server/datanode/DirectoryScanner.java   | 12 +---
 .../hdfs/server/datanode/TestDirectoryScanner.java  |  9 ++---
 4 files changed, 19 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f75b156/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ba553dc..8556afd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1040,6 +1040,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7774. Unresolved symbols error while compiling HDFS on Windows 7/32 
bit.
 (Kiran Kumar M R via cnauroth)
 
+HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get
+files list for scanning (J.Andreina via vinayakumarb)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f75b156/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index f233e02..92ddb7b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -815,7 +815,7 @@ public class DataNode extends ReconfigurableBase
   reason = "verifcation is not supported by SimulatedFSDataset";
 } 
 if (reason == null) {
-  directoryScanner = new DirectoryScanner(data, conf);
+  directoryScanner = new DirectoryScanner(this, data, conf);
   directoryScanner.start();
 } else {
   LOG.info("Periodic Directory Tree Verification scan is disabled because " +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f75b156/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 09c2914..c7ee21e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -63,6 +63,7 @@ public class DirectoryScanner implements Runnable {
   private final long scanPeriodMsecs;
   private volatile boolean shouldRun = false;
   private boolean retainDiffs = false;
+  private final DataNode datanode;
 
   final ScanInfoPerBlockPool diffs = new ScanInfoPerBlockPool();
  final Map<String, Stats> stats = new HashMap<String, Stats>();
@@ -308,7 +309,8 @@ public class DirectoryScanner implements Runnable {
 }
   }
 
-  DirectoryScanner(FsDatasetSpi<?> dataset, Configuration conf) {
+  DirectoryScanner(DataNode datanode, FsDatasetSpi<?> dataset, Configuration conf) {
+this.datanode = datanode;
 this.dataset = dataset;
 int interval = 
conf.getInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_KEY,
 DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_DEFAULT);
@@ -547,7 +549,7 @@ public class DirectoryScanner implements Runnable {
 for (int i = 0; i < volumes.size(); i++) {
   if (isValid(dataset, volumes.get(i))) {
 ReportCompiler reportCompiler =
-  new ReportCompiler(volumes.get(i));
+  new 
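The essence of HDFS-6753 is visible in the constructor change above: the scanner now holds a back-reference to its owning DataNode so it can kick off a disk check when a volume listing fails. A minimal sketch of that owner-callback pattern (the `Owner`, `checkDiskError`, and `scanVolume` names are illustrative stand-ins, not the actual DataNode/DirectoryScanner API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative owner-callback pattern: a scanner that notifies its owner
// when scanning a volume fails, mirroring the DataNode reference added in
// this change.
public class ScannerCallback {
    interface Owner { void checkDiskError(); }

    static class Scanner {
        private final Owner owner;              // analogous to the new DataNode field
        Scanner(Owner owner) { this.owner = owner; }

        void scanVolume(Runnable listBlocks) {
            try {
                listBlocks.run();
            } catch (RuntimeException e) {      // stand-in for the IOException path
                owner.checkDiskError();         // initiate the disk check on failure
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger checks = new AtomicInteger();
        Scanner s = new Scanner(checks::incrementAndGet);
        s.scanVolume(() -> { throw new RuntimeException("cannot list files"); });
        System.out.println(checks.get()); // 1
    }
}
```

Routing the failure back through the owner lets one component (the DataNode) centralize disk-health handling instead of each scanner deciding on its own.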

hadoop git commit: HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get files list for scanning (Contributed by J.Andreina)

2015-02-27 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d223a4a59 -> bc60404ea


HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get files 
list for scanning (Contributed by J.Andreina)

(cherry picked from commit 4f75b15628a76881efc39054612dc128e23d27be)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bc60404e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bc60404e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bc60404e

Branch: refs/heads/branch-2
Commit: bc60404eaf6e5298c20a552685ec0b6c59b6fd0b
Parents: d223a4a
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Feb 27 16:36:28 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Feb 27 16:37:03 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../apache/hadoop/hdfs/server/datanode/DataNode.java|  2 +-
 .../hadoop/hdfs/server/datanode/DirectoryScanner.java   | 12 +---
 .../hdfs/server/datanode/TestDirectoryScanner.java  |  9 ++---
 4 files changed, 19 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc60404e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8dd26d4..998715e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -741,6 +741,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7774. Unresolved symbols error while compiling HDFS on Windows 7/32 
bit.
 (Kiran Kumar M R via cnauroth)
 
+HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get
+files list for scanning (J.Andreina via vinayakumarb)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc60404e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index d25e58b..5c516d3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -822,7 +822,7 @@ public class DataNode extends ReconfigurableBase
   reason = "verifcation is not supported by SimulatedFSDataset";
 } 
 if (reason == null) {
-  directoryScanner = new DirectoryScanner(data, conf);
+  directoryScanner = new DirectoryScanner(this, data, conf);
   directoryScanner.start();
 } else {
   LOG.info("Periodic Directory Tree Verification scan is disabled because " +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc60404e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 09c2914..c7ee21e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -63,6 +63,7 @@ public class DirectoryScanner implements Runnable {
   private final long scanPeriodMsecs;
   private volatile boolean shouldRun = false;
   private boolean retainDiffs = false;
+  private final DataNode datanode;
 
   final ScanInfoPerBlockPool diffs = new ScanInfoPerBlockPool();
  final Map<String, Stats> stats = new HashMap<String, Stats>();
@@ -308,7 +309,8 @@ public class DirectoryScanner implements Runnable {
 }
   }
 
-  DirectoryScanner(FsDatasetSpi<?> dataset, Configuration conf) {
+  DirectoryScanner(DataNode datanode, FsDatasetSpi<?> dataset, Configuration conf) {
+this.datanode = datanode;
 this.dataset = dataset;
 int interval = 
conf.getInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_KEY,
 DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_DEFAULT);
@@ -547,7 +549,7 @@ public class DirectoryScanner implements Runnable {
 for (int i = 0; i < volumes.size(); i++) {
   if (isValid(dataset, volumes.get(i))) {
 ReportCompiler reportCompiler =
-   

hadoop git commit: YARN-3262. Surface application outstanding resource requests table in RM web UI. (Jian He via wangda)

2015-02-27 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/trunk cf51ff2fe -> edcecedc1


YARN-3262. Surface application outstanding resource requests table in RM web 
UI. (Jian He via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edcecedc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edcecedc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edcecedc

Branch: refs/heads/trunk
Commit: edcecedc1c39d54db0f86a1325b4db26c38d2d4d
Parents: cf51ff2
Author: Wangda Tan wan...@apache.org
Authored: Fri Feb 27 16:13:32 2015 -0800
Committer: Wangda Tan wan...@apache.org
Committed: Fri Feb 27 16:13:32 2015 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../records/impl/pb/ResourceRequestPBImpl.java  |  4 +-
 .../scheduler/AbstractYarnScheduler.java|  9 
 .../scheduler/AppSchedulingInfo.java| 33 +++---
 .../scheduler/SchedulerApplicationAttempt.java  |  6 ++-
 .../server/resourcemanager/webapp/AppBlock.java | 46 +++-
 .../server/resourcemanager/webapp/AppPage.java  |  4 ++
 .../resourcemanager/webapp/AppsBlock.java   |  5 ++-
 .../webapp/FairSchedulerAppsBlock.java  |  5 ++-
 .../resourcemanager/webapp/RMWebServices.java   |  6 +--
 .../resourcemanager/webapp/dao/AppInfo.java | 17 +++-
 .../webapp/TestRMWebAppFairScheduler.java   | 10 -
 .../webapp/TestRMWebServicesApps.java   |  3 +-
 13 files changed, 118 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 38dd9fa..e7af84b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -336,6 +336,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail 
 due to IOException. (Zhihai Xu via ozawa)
 
+YARN-3262. Surface application outstanding resource requests table 
+in RM web UI. (Jian He via wangda)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
index 0c8491f..27fb5ae 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
@@ -140,13 +140,13 @@ public class ResourceRequestPBImpl extends  
ResourceRequest {
 this.capability = capability;
   }
   @Override
-  public int getNumContainers() {
+  public synchronized int getNumContainers() {
 ResourceRequestProtoOrBuilder p = viaProto ? proto : builder;
 return (p.getNumContainers());
   }
 
   @Override
-  public void setNumContainers(int numContainers) {
+  public synchronized void setNumContainers(int numContainers) {
 maybeInitBuilder();
 builder.setNumContainers((numContainers));
   }
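The hunk above adds `synchronized` to a getter/setter pair so that concurrent readers and writers of the protobuf builder state serialize on the instance monitor. A minimal sketch of why an unguarded pair is unsafe and how synchronizing both methods fixes it (`SyncPair` and its counter are illustrative, not YARN code):

```java
// Illustrative thread-safety fix: synchronizing both the getter and the
// setter (and any read-modify-write) serializes all access on the instance,
// as the diff does for getNumContainers/setNumContainers.
public class SyncPair {
    private int numContainers;

    public synchronized int getNumContainers() { return numContainers; }

    public synchronized void setNumContainers(int n) { numContainers = n; }

    // Without synchronized, two threads interleaving here would lose updates.
    public synchronized void increment() { numContainers++; }

    public static void main(String[] args) throws InterruptedException {
        SyncPair p = new SyncPair();
        Thread a = new Thread(() -> { for (int i = 0; i < 100_000; i++) p.increment(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 100_000; i++) p.increment(); });
        a.start(); b.start(); a.join(); b.join();
        System.out.println(p.getNumContainers()); // 200000
    }
}
```

Synchronizing only one side of the pair would still allow a reader to observe partially updated builder state, which is why the change marks both methods.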

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 04b3452..968a767 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -658,4 +658,13 @@ public abstract class 

hadoop git commit: YARN-3262. Surface application outstanding resource requests table in RM web UI. (Jian He via wangda)

2015-02-27 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c52636df3 -> 0b0be0056


YARN-3262. Surface application outstanding resource requests table in RM web 
UI. (Jian He via wangda)

(cherry picked from commit edcecedc1c39d54db0f86a1325b4db26c38d2d4d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b0be005
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b0be005
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b0be005

Branch: refs/heads/branch-2
Commit: 0b0be0056bc6b1b16341ac30c31f833cb3f908df
Parents: c52636d
Author: Wangda Tan wan...@apache.org
Authored: Fri Feb 27 16:13:32 2015 -0800
Committer: Wangda Tan wan...@apache.org
Committed: Fri Feb 27 16:14:35 2015 -0800

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../records/impl/pb/ResourceRequestPBImpl.java  |  4 +-
 .../scheduler/AbstractYarnScheduler.java|  9 
 .../scheduler/AppSchedulingInfo.java| 33 +++---
 .../scheduler/SchedulerApplicationAttempt.java  |  6 ++-
 .../server/resourcemanager/webapp/AppBlock.java | 46 +++-
 .../server/resourcemanager/webapp/AppPage.java  |  4 ++
 .../resourcemanager/webapp/AppsBlock.java   |  5 ++-
 .../webapp/FairSchedulerAppsBlock.java  |  5 ++-
 .../resourcemanager/webapp/RMWebServices.java   |  6 +--
 .../resourcemanager/webapp/dao/AppInfo.java | 17 +++-
 .../webapp/TestRMWebAppFairScheduler.java   | 10 -
 .../webapp/TestRMWebServicesApps.java   |  3 +-
 13 files changed, 118 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b0be005/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b016cbb..eaa8ed4 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -297,6 +297,9 @@ Release 2.7.0 - UNRELEASED
 YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail 
 due to IOException. (Zhihai Xu via ozawa)
 
+YARN-3262. Surface application outstanding resource requests table 
+in RM web UI. (Jian He via wangda)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b0be005/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
index 0c8491f..27fb5ae 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
@@ -140,13 +140,13 @@ public class ResourceRequestPBImpl extends  
ResourceRequest {
 this.capability = capability;
   }
   @Override
-  public int getNumContainers() {
+  public synchronized int getNumContainers() {
 ResourceRequestProtoOrBuilder p = viaProto ? proto : builder;
 return (p.getNumContainers());
   }
 
   @Override
-  public void setNumContainers(int numContainers) {
+  public synchronized void setNumContainers(int numContainers) {
 maybeInitBuilder();
 builder.setNumContainers((numContainers));
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b0be005/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 04b3452..968a767 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 

hadoop git commit: recommit HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir. (cherry picked from commit 7c6b6547eeed110e1a842e503bfd33afe04fa814)

2015-02-27 Thread shv
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8719cdd4f -> cf51ff2fe


recommit HDFS-7769. TestHDFSCLI should not create files in hdfs project root 
dir.
(cherry picked from commit 7c6b6547eeed110e1a842e503bfd33afe04fa814)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cf51ff2f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cf51ff2f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cf51ff2f

Branch: refs/heads/trunk
Commit: cf51ff2fe8f0f08060dd1a9d96dac0c032277f77
Parents: 8719cdd
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue Feb 10 17:48:57 2015 -0800
Committer: Konstantin V Shvachko s...@apache.org
Committed: Fri Feb 27 14:30:41 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../hadoop-hdfs/src/test/resources/testHDFSConf.xml  | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf51ff2f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b4b0087..2a8da43 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -981,6 +981,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause
 DataNode to register successfully with only one NameNode.(vinayakumarb)
 
+HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
+(szetszwo)
+
 HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage.
 (Rakesh R and shv)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf51ff2f/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
index e59b05a..2d3de1f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
@@ -16483,8 +16483,8 @@
        <command>-fs NAMENODE -mkdir -p /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data15bytes /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data30bytes /user/USERNAME/dir1</command>
-        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 data</command>
-        <command>-cat data</command>
+        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 CLITEST_DATA/file</command>
+        <command>-cat CLITEST_DATA/file</command>
      </test-commands>
      <cleanup-commands>
        <command>-fs NAMENODE -rm -r /user/USERNAME</command>
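The path change above is the whole fix: `getmerge`'s local destination `data` is a relative path, so it resolves against whatever directory the test JVM runs from (the hdfs project root under Maven), while the test's `CLITEST_DATA` placeholder expands to an explicit test-data directory. A small sketch of the underlying behavior, assuming nothing beyond `java.io.File`:

```java
import java.io.File;

public class CwdDemo {
    public static void main(String[] args) {
        // A relative destination such as "data" resolves against the JVM's
        // working directory -- under a Maven test run, the module root.
        File relative = new File("data");
        System.out.println(relative.getAbsolutePath());

        // Anchoring output under an explicit directory (the role played by the
        // test's CLITEST_DATA placeholder) keeps artifacts out of the tree.
        File anchored = new File(System.getProperty("java.io.tmpdir"), "file");
        System.out.println(anchored.getAbsolutePath());
    }
}
```

Both prints show absolute paths; only the second is independent of where the tests happen to be launched from.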



hadoop git commit: recommit HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir. (cherry picked from commit acc172e3718e23ff6808ddcc01543212f1334a27)

2015-02-27 Thread shv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0f9289e84 -> c52636df3


recommit HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
(cherry picked from commit acc172e3718e23ff6808ddcc01543212f1334a27)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c52636df
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c52636df
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c52636df

Branch: refs/heads/branch-2
Commit: c52636df3f3b0a0aa47401269fbd9af811aee308
Parents: 0f9289e
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue Feb 10 17:48:57 2015 -0800
Committer: Konstantin V Shvachko s...@apache.org
Committed: Fri Feb 27 14:20:26 2015 -0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../hadoop-hdfs/src/test/resources/testHDFSConf.xml  | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c52636df/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e1e7dcd..9d5197d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -679,6 +679,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause
 DataNode to register successfully with only one NameNode.(vinayakumarb)
 
+HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
+(szetszwo)
+
 HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage.
 (Rakesh R and shv)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c52636df/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
index c1cd5c8..7aac849 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
@@ -16293,8 +16293,8 @@
        <command>-fs NAMENODE -mkdir -p /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data15bytes /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data30bytes /user/USERNAME/dir1</command>
-        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 data</command>
-        <command>-cat data</command>
+        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 CLITEST_DATA/file</command>
+        <command>-cat CLITEST_DATA/file</command>
      </test-commands>
      <cleanup-commands>
        <command>-fs NAMENODE -rm -r /user/USERNAME</command>



hadoop git commit: Revert HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.

2015-02-27 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/trunk 48c7ee755 -> 2954e6546


Revert HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.

This reverts commit 7c6b6547eeed110e1a842e503bfd33afe04fa814.

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2954e654
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2954e654
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2954e654

Branch: refs/heads/trunk
Commit: 2954e654677bd1807d22fae7becc4464d9eff00b
Parents: 48c7ee7
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Fri Feb 27 18:25:32 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Fri Feb 27 18:25:32 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 ---
 .../hadoop-hdfs/src/test/resources/testHDFSConf.xml  | 4 ++--
 2 files changed, 2 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2954e654/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ae83898..ba553dc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -975,9 +975,6 @@ Release 2.7.0 - UNRELEASED
 HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause
 DataNode to register successfully with only one NameNode.(vinayakumarb)
 
-HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
-(szetszwo)
-
 HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage.
 (Rakesh R and shv)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2954e654/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
index 2d3de1f..e59b05a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
@@ -16483,8 +16483,8 @@
        <command>-fs NAMENODE -mkdir -p /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data15bytes /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data30bytes /user/USERNAME/dir1</command>
-        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 CLITEST_DATA/file</command>
-        <command>-cat CLITEST_DATA/file</command>
+        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 data</command>
+        <command>-cat data</command>
      </test-commands>
      <cleanup-commands>
        <command>-fs NAMENODE -rm -r /user/USERNAME</command>



hadoop git commit: Revert HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.

2015-02-27 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 02df51497 -> d223a4a59


Revert HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.

This reverts commit acc172e3718e23ff6808ddcc01543212f1334a27.

Conflicts:
hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d223a4a5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d223a4a5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d223a4a5

Branch: refs/heads/branch-2
Commit: d223a4a5943d67a9c9ca142d17e230e80fe578c2
Parents: 02df514
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Fri Feb 27 18:27:19 2015 +0800
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Fri Feb 27 18:27:19 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 ---
 .../hadoop-hdfs/src/test/resources/testHDFSConf.xml  | 4 ++--
 2 files changed, 2 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d223a4a5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 70aad62..8dd26d4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -673,9 +673,6 @@ Release 2.7.0 - UNRELEASED
 HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause
 DataNode to register successfully with only one NameNode.(vinayakumarb)
 
-HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
-(szetszwo)
-
 HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage.
 (Rakesh R and shv)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d223a4a5/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
index 7aac849..c1cd5c8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
@@ -16293,8 +16293,8 @@
        <command>-fs NAMENODE -mkdir -p /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data15bytes /user/USERNAME/dir1</command>
        <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data30bytes /user/USERNAME/dir1</command>
-        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 CLITEST_DATA/file</command>
-        <command>-cat CLITEST_DATA/file</command>
+        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 data</command>
+        <command>-cat data</command>
      </test-commands>
      <cleanup-commands>
        <command>-fs NAMENODE -rm -r /user/USERNAME</command>