[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905348#comment-14905348
 ] 

Hudson commented on YARN-1651:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #436 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/436/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeEventType.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/IncreaseContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/YarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeDecreaseContainerEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Allocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905314#comment-14905314
 ] 

Hudson commented on YARN-1651:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8505 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8505/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/ContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeDecreaseContainerEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/YarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainer.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerUpdatesAcquiredEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerChangeResourceEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerEventType.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905702#comment-14905702
 ] 

Hudson commented on YARN-1651:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2375 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2375/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/YarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedContainerChangeRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceCalculator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/ContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeEventType.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplication.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSAssignment.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerChangeResourceEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Allocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NodeHeartbeatResponse.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905751#comment-14905751
 ] 

Hudson commented on YARN-1651:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2348 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2348/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeEventType.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/YarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerEventType.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedContainerChangeRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905421#comment-14905421
 ] 

Hudson commented on YARN-1651:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #429 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/429/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainer.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerChangeResourceEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedContainerChangeRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/IncreaseContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/NodeHeartbeatResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplication.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905528#comment-14905528
 ] 

Hudson commented on YARN-1651:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1169 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1169/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Allocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerUpdatesAcquiredEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplication.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/YarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerChangeResourceEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905633#comment-14905633
 ] 

Hudson commented on YARN-1651:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #409 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/409/])
YARN-1651. CapacityScheduler side changes to support container resize. 
Contributed by Wangda Tan (wangda: rev 89cab1ba5f0671f8ef30dbe7432079c18362b434)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Allocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSCapacityScheduler.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerChangeResourceEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedContainerChangeRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplication.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerUpdatesAcquiredEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/NodeHeartbeatResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeEventType.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
* 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-14 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743646#comment-14743646
 ] 

Wangda Tan commented on YARN-1651:
--

The javadoc warnings are not related to this patch; the MR test failure is tracked by 
https://issues.apache.org/jira/browse/MAPREDUCE-6475. The Findbugs warnings are also not 
related to this patch: 
https://issues.apache.org/jira/browse/YARN-1644?focusedCommentId=14739800=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14739800.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-10.YARN-1197.patch, YARN-1651-2.YARN-1197.patch, 
> YARN-1651-3.YARN-1197.patch, YARN-1651-4.YARN-1197.patch, 
> YARN-1651-5.YARN-1197.patch, YARN-1651-6.YARN-1197.patch, 
> YARN-1651-7.YARN-1197.patch, YARN-1651-8.YARN-1197.patch, 
> YARN-1651-9.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-14 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14744806#comment-14744806
 ] 

Wangda Tan commented on YARN-1651:
--

Thanks [~jianhe] and [~mding] for the very thorough review!

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: YARN-1197
>
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-10.YARN-1197.patch, YARN-1651-2.YARN-1197.patch, 
> YARN-1651-3.YARN-1197.patch, YARN-1651-4.YARN-1197.patch, 
> YARN-1651-5.YARN-1197.patch, YARN-1651-6.YARN-1197.patch, 
> YARN-1651-7.YARN-1197.patch, YARN-1651-8.YARN-1197.patch, 
> YARN-1651-9.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743002#comment-14743002
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   3m 10s | YARN-1197 compilation may be broken. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 23 new or modified test files. |
| {color:green}+1{color} | javac |   8m 10s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc |  10m 16s | The applied patch generated 65 additional warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 16s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace |  37m  6s | The patch has 177 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   7m 11s | The patch appears to introduce 8 new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 26s | Tests passed in hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 54s | Tests passed in hadoop-sls. |
| {color:red}-1{color} | yarn tests |   6m 53s | Tests failed in hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   2m  3s | Tests passed in hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   0m 26s | Tests passed in hadoop-yarn-server-common. |
| {color:green}+1{color} | yarn tests |  59m 52s | Tests passed in hadoop-yarn-server-resourcemanager. |
| | | 150m 26s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| FindBugs | module:hadoop-yarn-server-resourcemanager |
| Failed unit tests | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755652/YARN-1651-9.YARN-1197.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / 78ad04d |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/diffJavadocWarnings.txt |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| hadoop-sls test log | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/testrun_hadoop-sls.txt |
| hadoop-yarn-client test log | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/testrun_hadoop-yarn-client.txt |
| hadoop-yarn-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/testrun_hadoop-yarn-common.txt |
| hadoop-yarn-server-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9114/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9114/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9114/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-10.YARN-1197.patch, YARN-1651-2.YARN-1197.patch, 
> YARN-1651-3.YARN-1197.patch, 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14743191#comment-14743191
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   3m 13s | YARN-1197 compilation may be broken. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 23 new or modified test files. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc |   9m 54s | The applied patch generated 65 additional warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 17s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace |  35m 36s | The patch has 177 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   7m  3s | The patch appears to introduce 8 new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | mapreduce tests |   9m 19s | Tests failed in hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 52s | Tests passed in hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 56s | Tests passed in hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   2m  0s | Tests passed in hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in hadoop-yarn-server-common. |
| {color:green}+1{color} | yarn tests |  55m 37s | Tests passed in hadoop-yarn-server-resourcemanager. |
| | | 143m 38s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| FindBugs | module:hadoop-yarn-server-resourcemanager |
| Failed unit tests | hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755671/YARN-1651-10.YARN-1197.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / 78ad04d |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/diffJavadocWarnings.txt |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| hadoop-sls test log | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/testrun_hadoop-sls.txt |
| hadoop-yarn-client test log | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/testrun_hadoop-yarn-client.txt |
| hadoop-yarn-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/testrun_hadoop-yarn-common.txt |
| hadoop-yarn-server-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9117/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9117/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9117/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-10.YARN-1197.patch, YARN-1651-2.YARN-1197.patch, 
> YARN-1651-3.YARN-1197.patch, 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14742978#comment-14742978
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   2m 59s | YARN-1197 compilation may be broken. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 23 new or modified test files. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc |  10m  1s | The applied patch generated 65 additional warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 16s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace |  51m 21s | The patch has 177 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   7m  0s | The patch appears to introduce 8 new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | mapreduce tests |   9m  9s | Tests failed in hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 51s | Tests passed in hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   2m  0s | Tests passed in hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in hadoop-yarn-server-common. |
| {color:green}+1{color} | yarn tests |  55m 28s | Tests passed in hadoop-yarn-server-resourcemanager. |
| | | 158m 38s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| FindBugs | module:hadoop-yarn-server-resourcemanager |
| Failed unit tests | hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755643/YARN-1651-8.YARN-1197.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / 78ad04d |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/diffJavadocWarnings.txt |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| hadoop-sls test log | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/testrun_hadoop-sls.txt |
| hadoop-yarn-client test log | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/testrun_hadoop-yarn-client.txt |
| hadoop-yarn-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/testrun_hadoop-yarn-common.txt |
| hadoop-yarn-server-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9111/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9111/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9111/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14742963#comment-14742963
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   3m 10s | YARN-1197 compilation may be broken. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 23 new or modified test files. |
| {color:green}+1{color} | javac |   8m 15s | There were no new javac warning messages. |
| {color:red}-1{color} | javadoc |  10m 17s | The applied patch generated 65 additional warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 21s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace |  37m 18s | The patch has 177 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   7m  9s | The patch appears to introduce 8 new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 25s | Tests passed in hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 58s | Tests passed in hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   2m  3s | Tests passed in hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   0m 26s | Tests passed in hadoop-yarn-server-common. |
| {color:green}+1{color} | yarn tests |  55m 46s | Tests passed in hadoop-yarn-server-resourcemanager. |
| | | 146m 45s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| FindBugs | module:hadoop-yarn-server-resourcemanager |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12755643/YARN-1651-8.YARN-1197.patch |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / 78ad04d |
| javadoc | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/diffJavadocWarnings.txt |
| whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html |
| hadoop-mapreduce-client-app test log | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt |
| hadoop-sls test log | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/testrun_hadoop-sls.txt |
| hadoop-yarn-client test log | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/testrun_hadoop-yarn-client.txt |
| hadoop-yarn-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/testrun_hadoop-yarn-common.txt |
| hadoop-yarn-server-common test log | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt |
| hadoop-yarn-server-resourcemanager test log | https://builds.apache.org/job/PreCommit-YARN-Build/9110/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9110/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9110/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch, 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741371#comment-14741371
 ] 

Wangda Tan commented on YARN-1651:
--

Also rebased the YARN-1197 branch onto the latest trunk.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch, YARN-1651-7.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740717#comment-14740717
 ] 

Jian He commented on YARN-1651:
---

A few more comments:
- schedulerNode#increaseContainer is not invoked when increasing a regular 
container? Add a test?
- remove ContainersAndNMTokensAllocation in SchedulerApplicationAttempt
- FiCaSchedulerApp#unreserve -> unreserveIncreasedContainer to avoid a name 
conflict.
{code}
  public boolean unreserve(Priority priority,
  FiCaSchedulerNode node, RMContainer rmContainer) {
  {code}
 - revert RMWebServices, AssignmentInformation changes
 - MockAM#allocateChangeContainerRequests -> resizeContainers

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738568#comment-14738568
 ] 

Jian He commented on YARN-1651:
---

bq. I think we may need add such information to AMRMProtocol to make sure AM 
will be notified. For now, we can keep them as-is. Users can still get such 
information from RM logs.
I think for now we can fail the allocate call explicitly in those very clear 
situations in checkAndNormalizeContainerChangeRequest, e.g. the situation that 
the rmContainer doesn't exist. That's more explicit to users; digging through 
logs is not an easy thing for an application writer.

Thanks for updating, Wangda! Some more comments, focusing on the decrease code 
path.

- This may not be correct, because a reserve event can happen in the RESERVED 
state too, i.e. re-reservation:
{code}
  if (container.getState() != RMContainerState.NEW) {
container.hasIncreaseReservation = true;
  }
{code}
 - RMNodeImpl#toBeDecreasedContainers - does it need to be a map? It could be a 
list, and then the NodeHeartBeatResponse and Impl changes are not needed; 
similarly, nmReportedIncreasedContainers can be a list.
 - When decreasing a container, should it send an RMNodeDecreaseContainerEvent 
too?
 - revert ContainerManagerImpl change
 - Remove SchedulerApplicationAttempt#getIncreaseRequests
 - In AbstractYarnScheduler#decreaseContainers(), move 
checkAndNormalizeContainerChangeRequests(decreaseRequests, false) to the same 
place as checkAndNormalizeContainerChangeRequests(increaseRequests, false), for 
consistency.
- this if condition is not needed.
{code}
  public boolean unreserve(Priority priority,
  FiCaSchedulerNode node, RMContainer rmContainer) {
if (rmContainer.hasIncreaseReservation()) {
  rmContainer.cancelIncreaseReservation();
}
{code}
 - It looks like when decreasing a reservedIncreasedContainer, it will unreserve 
the *whole* extra reserved resource. Should it only unreserve the extra 
resources being decreased?
 - In general, I think we should be able to decrease/increase a regular 
reserved container or an increasedReservedContainer?
- In ParentQueue, this null check is not needed.
{code}
  @Override
  public void decreaseContainer(Resource clusterResource,
  SchedContainerChangeRequest decreaseRequest,
  FiCaSchedulerApp app) {
if (app != null) {
{code}

- The allocate call is specifically marked as noLock, but now every allocate 
call holds the global scheduler lock, which is too expensive. We can move 
decreaseContainer to the application itself.
{code}   protected synchronized void decreaseContainer( {code}
It also now holds the queue lock on allocate, which is expensive as well, 
because it means a bunch of malicious AMs could effectively block the queue's 
execution.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-10 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738830#comment-14738830
 ] 

MENG DING commented on YARN-1651:
-

Hi, [~leftnoteasy]

One comment I forgot to post: we may want to synchronize the 
RMContainerImpl.getAllocatedResource() call, because the container resource may 
be updated at any time, e.g.:
{code:title=RMContainerImpl.java}
   @Override
   public Resource getAllocatedResource() {
-    return container.getResource();
+    try {
+      readLock.lock();
+      return Resources.clone(container.getResource());
+    } finally {
+      readLock.unlock();
+    }
   }
{code}
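
For reference, a minimal standalone sketch of the read-lock-plus-clone pattern 
suggested above; the class and field names are made up for illustration, only 
the locking idea comes from this comment:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class GuardedAllocation {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private Resource allocated = Resource.newInstance(2048, 1);

  // Readers take the read lock and return a defensive clone, so a concurrent
  // resize cannot mutate the Resource object a caller is still holding.
  public Resource getAllocatedResource() {
    lock.readLock().lock();
    try {
      return Resources.clone(allocated);
    } finally {
      lock.readLock().unlock();
    }
  }

  // Writers (increase/decrease) take the write lock before updating the field.
  public void setAllocatedResource(Resource newResource) {
    lock.writeLock().lock();
    try {
      allocated = Resources.clone(newResource);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}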

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-10 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14739742#comment-14739742
 ] 

Wangda Tan commented on YARN-1651:
--

Thanks for the review, [~jianhe]!
bq. I think for now we can fail the allocate call explicitly on those very 
clear situations in checkAndNormalizeContainerChangeRequest ?, e.g. the 
situation that rmContainer doesn't exist That's more explicit to users. Digging 
through logs is not an easy thing for application writer.
Done. Now we check it in both the AMS and the scheduler; the exception will be 
thrown in the AMS. We do both checks because the AMS doesn't acquire the 
scheduler lock, so it is still possible that the RMContainer state has changed 
by the time the request is added to the scheduler.

bq. RMNodeImpl#toBeDecreasedContainers - no need to be a map, it can be a list 
? and therefore NodeHeartBeatResponse and Impl change is not needed; similarly 
nmReportedIncreasedContainers can be a list.
This is to avoid the AM decreasing the same container multiple times between 
two NM heartbeats; it is a rare edge case. Similarly for NM-reported increased 
containers: if we decouple NM heartbeat and scheduler allocation, a container 
could be increased multiple times between scheduler passes over the NM.
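
A minimal sketch of the keying-by-ContainerId idea described above (class and 
method names are hypothetical, not taken from the patch): a second decrease of 
the same container before the next heartbeat simply overwrites the first entry.
{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;

public class ToBeDecreasedBuffer {
  // Keyed by ContainerId: only the latest decrease of a given container
  // survives until the next NM heartbeat.
  private final Map<ContainerId, Container> toBeDecreased = new HashMap<>();

  public synchronized void addToBeDecreasedContainer(Container container) {
    toBeDecreased.put(container.getId(), container);
  }

  // Drained when the heartbeat response for the NM is built.
  public synchronized List<Container> pullToBeDecreasedContainers() {
    List<Container> drained = new ArrayList<>(toBeDecreased.values());
    toBeDecreased.clear();
    return drained;
  }
}
{code}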

bq. When decreasing a container, should it send RMNodeDecreaseContainerEvent 
too ?
Done, and added a test to confirm this as well.

bq. looks like when decreasing reservedIncreasedContainer, it will unreserve 
the whole extra reserved resource, should it only unreserve the extra resources 
being decresed ?
Decreasing a container means lowering its resource below the last confirmed 
resource. If a container is 2G and the AM asks to increase it to 4G, the AM can 
only decrease it to less than 2G before the increase is issued. So I think we 
need to unreserve the whole extra reserved resource.

bq. In general, I think we should be able to decrease/increase a regular 
reserved container or a increasedReservedContainer ?
Container reservation is an internal state of the scheduler; the AM doesn't 
know about the reserved container at all, so for now I think we don't need to 
expose that to the user.

bq. allocate call is specifically marked as noLock, but now every allocate call 
holds the global scheduler lock which is too expensive. we can move 
decreaseContainer to application itself.
DecreaseContainer is the same as completedContainer: both acquire the scheduler 
lock and the queue lock. I think we can optimize it in the future by adding 
them to something like a "pendingReleased" list that is traversed periodically. 
I added comments to CS#allocate to explain this; the "NoLock" is not 100% 
accurate.
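
A rough sketch of the "pendingReleased"-style batching mentioned above, with 
made-up names; allocate() would only append to the buffer, and the scheduler 
would drain it later under its own lock:
{code}
import java.util.ArrayList;
import java.util.List;

public class PendingDecreaseBuffer<T> {
  private final List<T> pending = new ArrayList<>();

  // Called from allocate(): only a short synchronized append, no scheduler or
  // queue lock is taken on the AM-facing path.
  public synchronized void add(T decreaseRequest) {
    pending.add(decreaseRequest);
  }

  // Called periodically from the scheduler's own update thread, which already
  // holds the scheduler lock, so the expensive work happens there.
  public synchronized List<T> drain() {
    List<T> batch = new ArrayList<>(pending);
    pending.clear();
    return batch;
  }
}
{code}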

And addressed all other comments.

[~mding]
Comment addressed.


> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740114#comment-14740114
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 19s | Findbugs (version ) appears to 
be broken on YARN-1197. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 23 new or modified test files. |
| {color:red}-1{color} | javac |   8m 18s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |  11m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 30s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |  43m 12s | The patch has 177  line(s) 
that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 48s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 46s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   9m 13s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 52s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   1m  0s | Tests passed in 
hadoop-sls. |
| {color:red}-1{color} | yarn tests |   6m 43s | Tests failed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   1m 59s | Tests passed in 
hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   0m 26s | Tests passed in 
hadoop-yarn-server-common. |
| {color:green}+1{color} | yarn tests |  55m 30s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | | 171m 51s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| Failed unit tests | hadoop.yarn.client.api.impl.TestYarnClient |
| Timed out tests | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755257/YARN-1651-6.YARN-1197.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / f86eae1 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/diffJavacWarnings.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9079/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736779#comment-14736779
 ] 

Jian He commented on YARN-1651:
---

Thanks Wangda, some comments on my side:
- Remove ApplicationMasterService#checkDuplicatedIncreaseDecreaseRequest
- revert ClientRMService change
- AbstractYarnScheduler#checkAndNormalizeContainerChangeRequests
 -- attempt parameter is not used
 -- this will never be null ?
{code}
if (null == sr) {
  continue;
}
{code}
 -- pass the exception as a parameter, instead of string concatenation (see the 
sketch at the end of this comment)?
{code}
LOG.warn("Error happens when checking increase request:" + e
    + ". Ignoring this request");
{code}
 -- the invalid resize requests are simply ignored. How will the AM know that 
its resize requests were ignored and thus lost? Similarly for the 
toBeRemovedRequests inside IncreaseContainerAllocator.
- updateIncreaseRequests: originalResource-> prevChangeRequest; similarly for 
the logging {{original capacity = }}
- use decreaseRequest#getSchedulerNode instead of 
csContext.getNode(decreaseRequest.getNodeId())
{code}
unreserveContainerIncreaseRequest(clusterResource, app,
    csContext.getNode(decreaseRequest.getNodeId()), rmContainer);
csContext.getNode(decreaseRequest.getNodeId())
    .decreaseContainer(decreaseRequest.getContainerId(), absDelta);
{code}
- remove SchedulerApplicationAttempt#hasIncreaseRequestForContainer
- newlyIncreasedContainers will not be null
{code}
List newlyIncreasedContainers =
  nm.pullNewlyIncreasedContainers();
  
if (null != newlyIncreasedContainers) {
{code}
- Why is the decision made to give increase requests the highest priority? I 
think we need to augment the ContainerResourceChangeRequest API to carry the 
priority too, so that it can be compared against regular resource requests.
{code}
/*
 * Try to allocate increase container first, and if we failed to allocate
 * anything, we will try to allocate regular container
 */
{code}

- SchedulerContainerResourceChangeRequest: this name is so long that it hurts 
code readability, as it's used in so many places. How about 
SchedContainerChangeRequest, or any other concise name you can think of?
- It is true that the node resource could change. But changing node resource is 
such a rare event that the probability of hitting it here is much lower than 
hitting it in the allocate call. I suggest we also fail the allocate call in 
this case.
{code}
// The reason of doing check here instead of adding increase request
// to scheduler because node's resource could be updated after
// request added.
{code}
- Why equal?
{code}
if (null == request
    || reservedContainer.getContainerState() != ContainerState.RUNNING
    || Resources.equals(reservedContainer.getReservedResource(),
        request.getDeltaCapacity())) {
{code}
- allocateIncreaseRequestFromReservedContainer- remove the unused parameter
- I think killing a container in this case is too harsh. Nothing is wrong with 
the container itself. The container can run fine. I think we should revert the 
increased resource and have some way to signal back that the increaseRequest 
failed, maybe in the allocate response ?
{code}
// When the container expired, and it has a pending increased request, we
// will kill the container.
new KillTransition().transition(container, event);
{code}
- I think the container.containerIncreasedAndAcquired boolean flag is not 
needed?
- unreserveContainerIncreaseRequest-> unreserveIncreasedContainer
- internalReleaseResource: remove unused application parameter
- fix the format 
{code}
 RMAppAttempt appAttempt =
rmContext
.getRMApps()
.get(
container.getId().getApplicationAttemptId()
.getApplicationId()).getCurrentAppAttempt();
{code}
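
On the earlier point about passing the exception as a parameter instead of 
concatenating it (flagged above with "see the sketch at the end of this 
comment"), a minimal illustration assuming the commons-logging Log API; the 
class and method names are hypothetical:
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class IncreaseRequestLogging {
  private static final Log LOG =
      LogFactory.getLog(IncreaseRequestLogging.class);

  void warnAndIgnore(Exception e) {
    // Pass the exception as the second argument so the stack trace is kept,
    // instead of concatenating it into the message string.
    LOG.warn("Error when checking increase request; ignoring this request", e);
  }
}
{code}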

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14737947#comment-14737947
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m 30s | Findbugs (version ) appears to 
be broken on YARN-1197. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 22 new or modified test files. |
| {color:red}-1{color} | javac |   8m  1s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m 18s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 29s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 40s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |  34m 45s | The patch has 163  line(s) 
that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   8m 14s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 18s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in 
hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 53s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   2m  2s | Tests passed in 
hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in 
hadoop-yarn-server-common. |
| {color:green}+1{color} | yarn tests |   8m 21s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| {color:red}-1{color} | yarn tests |  63m  9s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 176m 17s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754992/YARN-1651-5.YARN-1197.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / f86eae1 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/diffJavacWarnings.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9069/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
>   

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-09 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14737174#comment-14737174
 ] 

MENG DING commented on YARN-1651:
-

bq. I think killing a container in this case is too harsh. Nothing is wrong 
with the container itself. The container can run fine. I think we should revert 
the increased resource and have some way to signal back that the 
increaseRequest failed, maybe in the allocate response ?

Hi, [~jianhe], a sub-task (YARN-4138) has been logged for this issue. Based on 
our current discussion, when an increase token expires, the RM will revert the 
increased resource and also send a decrease event to the NM (the NM can ignore 
the decrease event if the target size of the container equals the existing 
container size). As a bonus, the decrease can be pulled by the AM from 
AllocateResponse.decreased_containers, which acts as a signal for the expired 
increase action.
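
To illustrate, a hypothetical AM-side sketch of consuming that signal; it 
assumes only that the AM can obtain the list of decreased containers from the 
allocate response (the decreased_containers field above), and all class and 
method names here are made up:
{code}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.Resource;

public class AmContainerSizeBookkeeping {
  private final Map<ContainerId, Resource> knownSizes = new ConcurrentHashMap<>();

  // Called with whatever the allocate response reports as decreased
  // containers: the reported size is what the RM now believes in, which also
  // covers the rollback-after-expiry case described above.
  public void onDecreasedContainers(List<Container> decreasedContainers) {
    for (Container c : decreasedContainers) {
      knownSizes.put(c.getId(), c.getResource());
    }
  }
}
{code}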

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-08 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735676#comment-14735676
 ] 

MENG DING commented on YARN-1651:
-

Hi, [~leftnoteasy]
bq. I agree the general idea, and we should do the similar thing. However, I'm 
not sure caching in RM is a good idea, potentially a malicious AM can send 
millions of unknown-to-be-decreased-containers to RM when RM started. Maybe 
it's better to cache in AMRMClient side. I think we can do this in a separated 
JIRA? Could you file a new JIRA for this if you agree?

Your proposal makes sense. I will file a JIRA for this.

Thanks for addressing my comments. I don't have more comments for now. As per 
our discussion, I will come up with an end-to-end test based on 
distributedshell and post it to this JIRA for review.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735859#comment-14735859
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  18m  2s | Findbugs (version ) appears to 
be broken on YARN-1197. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 20 new or modified test files. |
| {color:red}-1{color} | javac |   8m 10s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m 17s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 55s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |  31m  2s | The patch has 163  line(s) 
that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 29s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 26s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 53s | Tests passed in 
hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 58s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   0m 26s | Tests passed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |  59m 24s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 154m 43s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754736/YARN-1651-4.YARN-1197.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / f86eae1 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/diffJavacWarnings.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9045/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-08 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14735070#comment-14735070
 ] 

MENG DING commented on YARN-1651:
-

Hi, [~leftnoteasy]

I am ok with most of the reply comments. Thanks.

bq. It seems no need to do the null check here. When it becomes null? I prefer 
to keep it as-is and it will throw NPE if any fatal issue happens.
The {{updateContainerAndNMToken}} may return null:
{code}
  Container updatedContainer =
  updateContainerAndNMToken(rmContainer, false, increase);
  returnContainerList.add(updatedContainer);
{code}

I only mention this because {{pullNewlyAllocatedContainers()}} has a null check 
for the same logic, so I think we may want to make this consistent?
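
A sketch of the consistency being asked for, reusing the snippet quoted above; 
the surrounding names are as in that snippet:
{code}
Container updatedContainer =
    updateContainerAndNMToken(rmContainer, false, increase);
// Mirror pullNewlyAllocatedContainers(): skip the entry if no container/token
// could be built, instead of adding null to the returned list.
if (updatedContainer != null) {
  returnContainerList.add(updatedContainer);
}
{code}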

Some remaining comments:
* As you mentioned in the code, a reserved resource increase request currently 
does not participate in the continuous reservation-looking logic. So, based on 
my understanding, the resource an application has reserved on a node for a 
container resource increase request should never be unreserved in order for the 
application to allocate a regular container on some other node. But that 
doesn't seem to be the case right now? Can you confirm?
If so, I am thinking a simple solution would be to *exclude* resources reserved 
for increased containers when trying to find an unreserved container for 
regular container allocation.
{code:title=RegularContainerAllocator.assignContainer}
  ...
  ...
  unreservedContainer =
  application.findNodeToUnreserve(clusterResource, node, priority,  
<= Don't consider resources reserved for container increase request
  resourceNeedToUnReserve);
  ...
{code}
* I think it will be desirable to implement a {{pendingDecrease}} set in 
{{SchedulerApplicationAttempt}}, with corresponding logic, just like 
{{SchedulerApplicationAttempt.pendingRelease}} (see the sketch after this 
list). This is to guard against the situation *when decrease requests are 
received while the RM is in the middle of recovery and has not yet received all 
container statuses from the NMs*.

* Some nits
** Comments in {{NMReportedContainerChangeIsDoneTransition}} don't seem right.
** IncreaseContainerAllocator: {{LOG.debug("  Headroom is satisifed, 
skip..");}} --> {{LOG.debug("  Headroom is not satisfied, skip..");}}

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731505#comment-14731505
 ] 

Wangda Tan commented on YARN-1651:
--

Hi Meng,
Thanks for the comments:

bq. We probably need to address this properly in the JIRA that tracks container 
resource increase roll back. (I think Container resource increase expiration 
should be tracked as a Scheduler Event, e.g., 
SchedulerEventType.CONTAINER_INCREASE_EXPIRE)
I think we can do that either by adding CONTAINER_INCREASE_EXPIRE or by 
directly calling the scheduler's decrease from RMContainer. I'm not sure which 
one is better; let's figure it out when we do it.

bq. It seems that this function throws exception whenever there is a duplicated 
id. Shall we handle the case where if there are both increase and decrease 
requests for the same id, we can ignore the increase but keep the decrease 
request?
I thought about this before. When two requests with the same containerId but 
different target resources exist, it's hard to decide which one should be 
chosen, and it doesn't look like an intended allocate request either. So I 
prefer to reject both.
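
A rough sketch of that "reject both" policy, assuming the getContainerId() 
accessor on the ContainerResourceChangeRequest record discussed in this thread; 
the class name and the IllegalArgumentException are stand-ins for illustration:
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerResourceChangeRequest;

public class ChangeRequestSanityChecker {
  static void checkNoDuplicateIds(
      List<ContainerResourceChangeRequest> increaseRequests,
      List<ContainerResourceChangeRequest> decreaseRequests) {
    Set<ContainerId> seen = new HashSet<>();
    for (List<ContainerResourceChangeRequest> requests
        : Arrays.asList(increaseRequests, decreaseRequests)) {
      for (ContainerResourceChangeRequest r : requests) {
        // Any ContainerId appearing twice, whether increase+increase or
        // increase+decrease, fails the whole allocate call.
        if (!seen.add(r.getContainerId())) {
          throw new IllegalArgumentException(
              "Duplicate containerId in change requests: " + r.getContainerId());
        }
      }
    }
  }
}
{code}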

bq. Will it be better to combine all sanity checks into one function
Done

bq. For validateIncreaseDecreaseRequest, we don't check minimum allocation now, 
is it intended?
Yes, it's intended: we will normalize it later, so there is no need to throw an 
exception.

bq. This function is used by both pullNewlyIncreasedContainers(), and 
pullNewlyDecreasedContainers(). Why do we need to call 
updateContainerAndNMToken for decreased containers? It also unnecessarily send 
a ACQUIRE_UPDATED_CONTAINER event for every decreased container?
This is mainly to make the logic correct and consistent. We don't use the 
container token for now, but I think we should update it before returning to 
the app. Unless we hit a performance issue doing this, I prefer to keep the 
existing behavior.

bq. We should probably check null before adding updatedContainer?
It seems there is no need for the null check here. When would it become null? I 
prefer to keep it as-is; it will throw an NPE if any fatal issue happens.

bq. RMNodeImpl.pullNewlyIncreasedContainers()
Implemented.

bq. AppSchedulingInfo#notifyContainerStopped not being used.
Removed, we handled this in LeafQueue#completedContainer.

bq. I think the following is a typo, should be if (cannotAllocateAnything), 
right?
Correct, fixed.

bq. Not sure if I understand the logic. Why only break when 
node.getReservedContainer() == null? Shouldn't we break out of the loop here no 
matter what?
Nice catch! I fixed this; we should break when we have allocated or reserved 
anything.

bq. I think earlier in the allocateIncreaseRequest() function, if a new 
increase is successfully allocated, 
application.increaseContainer(increaseRequest) will have removed the increase 
request already?
Another nice catch! Yes, this is already handled in 
application.increaseContainer.

bq. RMContainerImpl...Shouldn't it be changed to...
Yes, it should work as you said; updated.

bq. Also, is container.containerIncreased really needed?
It was needed when we couldn't tell whether an acquired event was for an 
increased or a decreased container. I added isIncreaseContainer to the acquire 
event (now it's RMContainerUpdatesAcquiredEvent) and removed 
RMContainerImpl.containerIncreased.
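
A sketch of the event described above; the committed class may differ, and the 
constructor/field layout here is illustrative only (ACQUIRE_UPDATED_CONTAINER 
is the event type mentioned earlier in this thread):
{code}
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;

public class RMContainerUpdatesAcquiredEvent extends RMContainerEvent {
  private final boolean increasedContainer;

  public RMContainerUpdatesAcquiredEvent(ContainerId containerId,
      boolean increasedContainer) {
    // The acquire event type referenced earlier in this thread.
    super(containerId, RMContainerEventType.ACQUIRE_UPDATED_CONTAINER);
    this.increasedContainer = increasedContainer;
  }

  // True when the acquired update was an increase (so the allocation expirer
  // should start ticking); false for a decrease.
  public boolean isIncreasedContainer() {
    return increasedContainer;
  }
}
{code}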

Typos: fixed.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, YARN-1651-2.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731696#comment-14731696
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 31s | Findbugs (version ) appears to 
be broken on YARN-1197. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 20 new or modified test files. |
| {color:red}-1{color} | javac |   7m 57s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m  5s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 52s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |  30m  2s | The patch has 162  line(s) 
that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 25s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 16s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 51s | Tests passed in 
hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 59s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   0m 24s | Tests passed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |  58m 15s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 151m 13s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754278/YARN-1651-3.YARN-1197.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / f86eae1 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/diffJavacWarnings.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9013/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-03 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14729706#comment-14729706
 ] 

MENG DING commented on YARN-1651:
-

Hi, [~leftnoteasy]

I think it is fine to reuse Expire for container increase expiration for now. 
We probably need to address this properly in the JIRA that tracks container 
resource increase rollback. (I think container resource increase expiration 
should be tracked as a scheduler event, e.g., 
SchedulerEventType.CONTAINER_INCREASE_EXPIRE.)

I have a few more comments or questions regarding the patch:

* Regarding sanity checks:
** The following function can be removed: 
{{ApplicationMaster.checkDuplicatedIncreaseDecreaseRequest()}}?
** About {{RMServerUtils.checkDuplicatedIncreaseDecreaseRequest()}}: it seems 
that this function throws an exception whenever there is a duplicated id. Shall 
we handle the case where there are both increase and decrease requests for the 
same id by ignoring the increase but keeping the decrease request?
** Would it be better to combine all sanity checks into one function, e.g., 
{{validateIncreaseDecreaseRequest(List incRequests, List decRequests)}}, such 
that it checks both duplicated IDs and the resource validity of increase and 
decrease requests?
** For {{validateIncreaseDecreaseRequest}}, we don't check minimum allocation 
now; is that intended? I see that later on you normalize the request so that it 
is at least the minimum allocation. Just want to confirm.

* For {{SchedulerApplicationAttempt.pullNewlyUpdatedContainers}}. 
** This function is used by both pullNewlyIncreasedContainers() and 
pullNewlyDecreasedContainers(). Why do we need to call 
{{updateContainerAndNMToken}} for decreased containers? It also unnecessarily 
sends an ACQUIRE_UPDATED_CONTAINER event for every decreased container.
** We should probably check null before adding updatedContainer?
{code:title=pullNewlyUpdatedContainers}
  Container updatedContainer =
      updateContainerAndNMToken(rmContainer, false);
  returnContainerList.add(updatedContainer);
{code}

* It seems {{RMNodeImpl.pullNewlyIncreasedContainers()}} is empty?

* The following function doesn't seem to be used?
{code:title=AppSchedulingInfo}
  public synchronized void notifyContainerStopped(RMContainer rmContainer) {
// remove from pending increase request map if it exists
removeIncreaseRequest(rmContainer.getAllocatedNode(),
rmContainer.getAllocatedPriority(), rmContainer.getContainerId());
  }
{code}

* In {{IncreaseContainerAllocator.assignContainers}}:
** I think the following is a typo, should be {{if (cannotAllocateAnything)}}, 
right?
{code}
  if (shouldUnreserve) {
LOG.debug("We cannot allocate anything because of low headroom, "
+ "headroom=" + resourceLimits.getHeadroom());
  }
{code}
** Not sure if I understand the logic. Why only break when 
node.getReservedContainer() == null? Shouldn't we break out of the loop here no 
matter what?
{code}
   while (iter.hasNext()) {
  ...
  ...
  // Try to allocate the increase request
  assigned = allocateIncreaseRequest(node, increaseRequest);
  if (node.getReservedContainer() == null) {
// if it's not a reserved increase request, we will record
// priority/containerId so that we can remove the request later
increasedContainerPriority = priority;
increasedContainerId = rmContainer.getContainerId();
break;
  }
   }  
{code}
** Is the following needed? 
 {code}
  if (increasedContainerId != null) {
// If we increased (not reserved) a new increase request, we should
// remove it from request map.
application.removeIncreaseRequest(nodeId, increasedContainerPriority,
increasedContainerId);
  }
{code}
I think earlier in the {{allocateIncreaseRequest()}} function, if a new 
increase is successfully allocated, 
{{application.increaseContainer(increaseRequest)}} will have removed the 
increase request already?
* In {{RMContainerImpl.java}}
IIUC, {{containerIncreased}} indicates that an increase is done in the 
scheduler, and {{containerIncreasedAndAcquired}} indicates that an increase has 
been acquired by the AM.
If so, then in {{NMReportedContainerChangeIsDoneTransition}}
{code}
public void transition(RMContainerImpl container, RMContainerEvent event) {
  if (container.containerIncreased) {
// If container is increased but not acquired by AM, we will start
// containerAllocationExpirer for this container in this transition.
container.containerAllocationExpirer.unregister(event.getContainerId());
container.containerIncreasedAndAcquired = false;
  }
}
{code}
Shouldn't it be changed to:
{code}
public void transition(RMContainerImpl container, RMContainerEvent event) {
  if 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-02 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727683#comment-14727683
 ] 

MENG DING commented on YARN-1651:
-

I think that should work.
We probably need to make sure to log proper warning messages so that the user 
can find out what is going on and why under these circumstances. I know that 
both of my examples are edge cases (mostly caused by incorrect logic in the 
AM), but it could be frustrating for users to think their container has been 
granted a certain resource on the NM, only to find out later that it has not.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727686#comment-14727686
 ] 

Wangda Tan commented on YARN-1651:
--

[~mding], that makes sense; maybe we need to put these rolled-back containers 
into the next AM-RM heartbeat.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727613#comment-14727613
 ] 

Wangda Tan commented on YARN-1651:
--

[~mding], I think for your last example, if the RM decides to roll back 
resources, it should send a container decrease event to the NM as well; the NM 
can ignore the decrease event if the target size of the container equals the 
existing container size. Sounds like a plan?
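
A tiny sketch of the NM-side guard implied by that plan; the class and method 
names are hypothetical:
{code}
import org.apache.hadoop.yarn.api.records.Resource;

public class DecreaseEventGuard {
  // A "decrease" whose target equals the size the NM already enforces is just
  // the rollback signal from the RM and can be ignored.
  static boolean shouldApplyDecrease(Resource currentSize, Resource targetSize) {
    return !currentSize.equals(targetSize);
  }
}
{code}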

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728495#comment-14728495
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 32s | Findbugs (version ) appears to 
be broken on YARN-1197. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 20 new or modified test files. |
| {color:red}-1{color} | javac |   8m  1s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 47s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |  29m 39s | The patch has 159  line(s) 
that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 30s | The patch appears to introduce 7 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   9m 19s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:red}-1{color} | tools/hadoop tests |   0m 27s | Tests failed in 
hadoop-sls. |
| {color:green}+1{color} | yarn tests |   6m 58s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |  57m 14s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 149m 40s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| Failed unit tests | hadoop.yarn.sls.TestSLSRunner |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753880/YARN-1651-2.YARN-1197.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / f35a945 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/diffJavacWarnings.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8991/console |


This message was automatically generated.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, YARN-1651-2.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-02 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727355#comment-14727355
 ] 

MENG DING commented on YARN-1651:
-

bq. Do you have any use case in mind where the AM should know when/if a 
container is decreased by the NM?
You are right that there doesn't seem to be a compelling use case where the AM 
needs to know whether a container's resource has been decreased in the NM. We 
should keep it as is.

bq. I think we only need to remember the latest confirmed capacity (including 
decreases confirmed by the NM). In your example, we only need to remember 4GB; 
token2 will overwrite token1, and the expiration timeout will be reset when 
the second increase request is approved.

So, IIUC, if we only remember the last confirmed capacity, then using my 
example the sequence of events will be:

1. Initially a container uses 4GB
2. The first increase request increases it to 6GB (token1)
3. The second increase request increases it to 8GB (token2). The rollback 
resource is still 4GB, and the expiration timeout is reset in the RM.
4. The AM uses the first token to increase the container resource to 6GB in 
the NM
5. The NM reports 6GB to the RM during heartbeat, and the RM will do nothing 
because of the following:
{code}
else if (Resources.fitsIn(nmContainerResource, rmContainerResource)) {
  // when rmContainerResource >= nmContainerResource, we won't do anything,
  // it is possible a container increased is issued by RM, but AM hasn't
  // told NM.
}
{code}
6. token2 is never used by the AM, and eventually expires.
7. The expirer now rolls back the container resource to 4GB in the RM.

Now the RM and NM have inconsistent views of the resource allocation ... Did I 
miss anything?



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725861#comment-14725861
 ] 

Wangda Tan commented on YARN-1651:
--

Removed the WIP patch since it is very close to the .1 patch and may cause 
people to look at the wrong version.



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725859#comment-14725859
 ] 

Wangda Tan commented on YARN-1651:
--

Hi [~mding],
Thanks for reviewing the patch, it's very helpful!

*1)*
Regarding:
bq. Recall that during the design discussion, we agreed that as long as an 
increase has not yet completed for a container, we should not process any 
other increase/decrease requests for the same container. It seems that this 
patch will still process decrease/increase requests even when an increase 
action is ongoing?

For example 1, I don't think it's very confusing. IIRC, we have discussed this 
before: we should track the latest container increase/decrease status in 
AMRMClient. If the AM sends the increase token to the NM after it has asked to 
decrease the same container, it's the AM's fault.

For example 2, actually this is handled by existing code: 
{{AbstractYarnScheduler#containerIncreasedOnNode}}
{code}
else if (Resources.fitsIn(nmContainerResource, rmContainerResource)) {
  // when rmContainerResource >= nmContainerResource, we won't do anything,
  // it is possible a container increased is issued by RM, but AM hasn't
  // told NM.
}
{code}
It seems fine to me. Killing the container after the increase expires is just 
a temporary solution; I did this only because the existing patch is already 
too big, and I don't want to add more confusion to it :).

We should fix it as soon as we can, and it should be in the same release as 
YARN-1197. Would it still be confusing to you if we fix the 
kill-container-after-increase-expires issue?

*2)*
bq. For the following, I was wondering why not handle duplicate containers in 
ApplicationMasterService? That way, the list passed to CapacityScheduler will 
be unique, and we can avoid unnecessary insert/remove operations.
Good suggestion, will fix.

*3)*
bq. In RMContainerImpl.java, I think reusing the ACQUIRED and EXPIRE events 
for container increase could be a little confusing. Can we make it very clear 
by using a separate event, like INCREASE_ACQUIRED?
Agree, will fix.

*4)*
bq. For the following, I think the writeLock is not needed, as the transition 
is already guarded by the writeLock in handle(RMNodeEvent event).
You're correct; I was under the wrong impression that handle uses a 
synchronized lock. Will fix.

*5)*
bq. How about renaming CombinedContainerAllocator to GeneralContainerAllocator, 
and renaming RegularContainerAllocator to NewContainerAllocator?
I discussed this with [~jianhe] a long time ago. NewContainerAllocator was 
used in early patches of YARN-3983. Actually, NewContainerAllocator is just as 
confusing; it implies that some "OldContainerAllocator" exists. 
GeneralContainerAllocator does not reflect that it is the parent container 
allocator for "regular" and "increase" allocation. Frankly, I don't like the 
existing naming either, but I haven't come up with a better name.

Please let me know if you have any other thoughts.




[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-01 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726075#comment-14726075
 ] 

MENG DING commented on YARN-1651:
-

Thanks [~leftnoteasy] for the explanation.

bq. For example 1, I don't think it's very confusing. IIRC, we have discussed 
this before: we should track the latest container increase/decrease status in 
AMRMClient. If the AM sends the increase token to the NM after it has asked to 
decrease the same container, it's the AM's fault.
This makes sense to me now.

bq. For example 2, actually this is handled by existing code: 
AbstractYarnScheduler#containerIncreasedOnNode. Killing the container after 
the increase expires is just a temporary solution; I did this only because the 
existing patch is already too big, and I don't want to add more confusion to 
it.
bq. We should fix it as soon as we can, and it should be in the same release 
as YARN-1197. Would it still be confusing to you if we fix the 
kill-container-after-increase-expires issue?
This should be fine as long as we handle resource rollback correctly when 
multiple increase requests expire. For example:

1. Initially a container uses 4GB
2. The first increase request increases it to 6GB (token1)
3. The second increase request increases it to 8GB (token2)

If only token2 is used, the expiration of token1 will be ignored.
If token1 is used but token2 expires, the scheduler should be able to roll 
back to 6GB.
If both token1 and token2 expire, the scheduler must be able to roll back to 
4GB.

In other words, the increase expiration logic must remember all ongoing 
increase requests for a container, not just the latest one.
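To make the bookkeeping concrete, here is a minimal sketch (plain Java with 
made-up names such as PendingIncreases, sizes tracked in MB; this is not code 
from the patch) of expiration logic that remembers every outstanding increase 
rather than only the latest one:
{code:title=Sketch: remembering all outstanding increases}
import java.util.HashMap;
import java.util.Map;

class PendingIncreases {
  // outstanding increase targets in MB, keyed by a made-up token id
  private final Map<Integer, Long> pendingMB = new HashMap<>();
  private long confirmedMB;  // last capacity confirmed by the NM

  PendingIncreases(long initialMB) {
    this.confirmedMB = initialMB;
  }

  void onIncreaseApproved(int tokenId, long targetMB) {
    pendingMB.put(tokenId, targetMB);
  }

  // NM reported a size: every pending increase at or below it is confirmed.
  void onNMConfirmed(long nmMB) {
    confirmedMB = Math.max(confirmedMB, nmMB);
    pendingMB.values().removeIf(target -> target <= confirmedMB);
  }

  // A token expired: drop it and return the capacity the scheduler should
  // still account for (the confirmed size, or a larger outstanding target).
  long onExpire(int tokenId) {
    pendingMB.remove(tokenId);
    long keep = confirmedMB;
    for (long target : pendingMB.values()) {
      keep = Math.max(keep, target);
    }
    return keep;
  }
}
{code}
With the 4GB/6GB/8GB example above: if token1 (6GB) is used and token2 (8GB) 
expires, onExpire returns 6144; if both tokens expire, the second onExpire 
returns 4096.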

bq. Frankly, I don't like the existing naming either, but I haven't come up 
with a better name.
I think it is the name of {{CombinedContainerAllocator}} that is most confusing 
to me :-). How about just naming {{CombinedContainerAllocator}} as 
{{ContainerAllocator}}, and then renaming the current {{ContainerAllocator}} as 
{{AbstractContainerAllocator}}? Anyway, maybe other people will have better 
suggestions.

I also have another suggestion regarding {{newlyDecreasedContainers}}. Right 
now, as soon as the scheduler decreases the container resource, it adds the 
container to this list, which will be pulled by the AM in the next heartbeat. 
I don't think this has much value. I was wondering whether it could be 
enhanced so that the list is only updated once the decreased containers list 
has been sent to the NM. Implementation-wise, it can be similar to the idea of 
RMAppAttemptImpl.justFinishedContainers and 
RMAppAttemptImpl.finishedContainersSentToAM: basically, RMNodeImpl keeps the 
containers from the *toBeDecreasedContainers* list in memory for one more 
heartbeat cycle, and once the RM receives the next heartbeat request from the 
NM, it implies that the toBeDecreasedContainers from the last heartbeat 
response were successfully received by the NM. The benefit of this approach is 
that once the AM receives the decreased list, it knows that the decrease has 
been received by the NM. Maybe we can take it a step further and make the 
decrease action synchronous in NodeStatusUpdaterImpl (a one-line code change, 
and we already do a blocking increase on the NM); then once the AM receives 
the decreased list, it knows the decrease has been completed in the NM, 
eliminating the need for status polling. Thoughts?
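A rough sketch of that acknowledgement idea (made-up class and field names; 
only Container/ContainerId are the real YARN records), mirroring the 
justFinishedContainers / finishedContainersSentToAM pattern:
{code:title=Sketch: confirming decreases over two heartbeat cycles}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;

class DecreaseAckBookkeeping {
  // decreases approved by the scheduler, not yet sent to the NM
  private final Map<ContainerId, Container> toBeDecreased = new HashMap<>();
  // decreases sent in the previous heartbeat response, awaiting implicit ack
  private final Map<ContainerId, Container> sentToNM = new HashMap<>();

  void onSchedulerDecreased(Container c) {
    toBeDecreased.put(c.getId(), c);
  }

  // Build the heartbeat response: send pending decreases and remember them.
  List<Container> decreasesForHeartbeatResponse() {
    List<Container> out = new ArrayList<>(toBeDecreased.values());
    sentToNM.putAll(toBeDecreased);
    toBeDecreased.clear();
    return out;
  }

  // The next heartbeat request from the NM implies the previous response was
  // delivered; only these confirmed decreases are then exposed to the AM.
  List<Container> confirmedOnNextHeartbeat() {
    List<Container> confirmed = new ArrayList<>(sentToNM.values());
    sentToNM.clear();
    return confirmed;
  }
}
{code}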



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726303#comment-14726303
 ] 

Wangda Tan commented on YARN-1651:
--

bq. In other words, the increase expiration logic must remember all ongoing 
increase requests for a container, not just the latest one.
I think we only need to remember the *latest confirmed capacity (including 
decreases confirmed by the NM)*. In your example, we only need to remember 
4GB; token2 will overwrite token1, and the expiration timeout will be reset 
when the second increase request is approved.
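For comparison, a minimal sketch of this last-confirmed-capacity idea (made-up 
names, sizes in MB; not code from the patch): each approved increase 
overwrites the previous target and resets the timer, and an expiry rolls back 
to whatever the NM last confirmed:
{code:title=Sketch: remembering only the last confirmed capacity}
class LastConfirmedTracker {
  private long confirmedMB;     // last capacity confirmed by the NM
  private long targetMB;        // latest approved capacity
  private long expireAtMillis;  // single expiration deadline

  LastConfirmedTracker(long initialMB) {
    this.confirmedMB = initialMB;
    this.targetMB = initialMB;
  }

  // Each approved increase overwrites the previous target and resets the timer.
  void onIncreaseApproved(long newTargetMB, long timeoutMillis) {
    targetMB = newTargetMB;
    expireAtMillis = System.currentTimeMillis() + timeoutMillis;
  }

  boolean expired(long nowMillis) {
    return nowMillis > expireAtMillis;
  }

  // NM confirmed a capacity (an increase was used, or a decrease completed).
  void onNMConfirmed(long nmMB) {
    confirmedMB = nmMB;
  }

  // On expiry, roll back to the last confirmed capacity. As discussed
  // elsewhere in this thread, this can lag behind what the NM has actually
  // applied if an intermediate NM report was ignored.
  long onExpire() {
    targetMB = confirmedMB;
    return confirmedMB;
  }
}
{code}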

bq. renaming the current ContainerAllocator as AbstractContainerAllocator? 
Anyway, maybe other people will have better suggestions.
This seems better than the existing naming. Thoughts, [~jianhe]?

bq. I also have another suggestion regarding newlyDecreasedContainers...
This suggestion changes the semantics of decreasedContainers in 
AllocateResponse. Currently, both increasedContainers and decreasedContainers 
are added to AllocateResponse as soon as they are approved by the RM.
I think it may confuse people if the AM has to wait a few cycles to get the 
decreasedContainers. In addition, I don't see much value here: the AM 
decreases containers just to release resources; it shouldn't care when the 
resource becomes available to YARN, and it should decrease its resource usage 
before sending the decrease container request to the RM. Do you have any use 
case in mind where the AM should know when/if a container is decreased by the 
NM?



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-01 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725644#comment-14725644
 ] 

MENG DING commented on YARN-1651:
-

Some additional questions:

1. For the following, I was wondering why not handle duplicate containers in 
ApplicationMasterService? That way, the list passed to CapacityScheduler will 
be unique, and we can avoid unnecessary insert/remove operations (see the 
sketch after the snippet).
{code:title=CapacityScheduler#allocate}
+  // Decrease containers, put it after processing increase container request
+  // because it's possible AM ask increase/decrease same container in a same
+  // heartbeat.
+  decreaseContainers(decreaseRequests, application);
{code}
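A small sketch of the de-duplication being suggested (illustrative only; it 
assumes the record class generated from ContainerResourceChangeRequestProto is 
ContainerResourceChangeRequest with a getContainerId() accessor), keeping only 
the last change request per container before the list reaches the scheduler:
{code:title=Sketch: de-duplicating change requests by ContainerId}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerResourceChangeRequest;

final class ChangeRequestDedup {
  private ChangeRequestDedup() {
  }

  static List<ContainerResourceChangeRequest> lastRequestPerContainer(
      List<ContainerResourceChangeRequest> requests) {
    // LinkedHashMap keeps arrival order while letting a later request for the
    // same container overwrite an earlier one.
    Map<ContainerId, ContainerResourceChangeRequest> latest =
        new LinkedHashMap<>();
    for (ContainerResourceChangeRequest r : requests) {
      latest.put(r.getContainerId(), r);
    }
    return new ArrayList<>(latest.values());
  }
}
{code}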

2. In RMContainerImpl.java, I think reusing the ACQUIRED and EXPIRE events for 
container increase could be a little confusing. Can we make it very clear by 
using separate events, like INCREASE_ACQUIRED and INCREASE_EXPIRE? Also, for 
NM_DONE_CHANGE_RESOURCE, how about using RESOURCE_CHANGED_IN_NM or 
RESOURCE_INCREASED_IN_NM?
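For illustration only, the suggested dedicated events would just be additional 
constants on the event-type enum (existing constants elided; this is a sketch, 
not the actual RMContainerEventType):
{code:title=Sketch: dedicated increase event types}
public enum RMContainerEventType {
  // ... existing events (START, ACQUIRED, LAUNCHED, FINISHED, EXPIRE, ...) ...
  INCREASE_ACQUIRED,         // AM has picked up the increase token
  INCREASE_EXPIRE,           // the increase token expired unused
  RESOURCE_INCREASED_IN_NM   // NM reported that it applied the increase
}
{code}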

3. For the following, will it be completed in the 2.8 time frame? I think the 
penalty is a little too harsh, especially for long-running services, which 
would be killed because of this.
{code}
+// TODO, we can do better for this: roll back container resource to the
+// resource before increase, and notify scheduler about this decrease as
+// well. Will do that in a separated JIRA.
+container.containerIncreasedAndAcquired = false;
{code}

4. For the following, I think the writeLock is not needed, as the transition 
is already guarded by the writeLock in handle(RMNodeEvent event) (a simplified 
sketch follows the snippet).
{code:title=RMNodeImpl}
+  public static class DecreaseContainersTransition
+  implements SingleArcTransition<RMNodeImpl, RMNodeEvent> {
+ 
+@Override
+public void transition(RMNodeImpl rmNode, RMNodeEvent event) {
+  try {
+rmNode.writeLock.lock();
+RMNodeDecreaseContainerEvent de = (RMNodeDecreaseContainerEvent) event;
+
+for (Container c : de.getToBeDecreasedContainers()) {
+  rmNode.toBeDecreasedContainers.put(c.getId(), c);
+}
+  } finally {
+rmNode.writeLock.unlock();
+  }
+}
+  }
{code}
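For reference, the same transition with the explicit lock removed, assuming 
(per the comment above) that handle(RMNodeEvent) already holds the write lock 
while running transitions; this is only a sketch of the suggested change, not 
committed code:
{code:title=Sketch: transition relying on the lock taken in handle()}
public static class DecreaseContainersTransition
    implements SingleArcTransition<RMNodeImpl, RMNodeEvent> {

  @Override
  public void transition(RMNodeImpl rmNode, RMNodeEvent event) {
    // No explicit locking here: handle(RMNodeEvent) is assumed to hold the
    // write lock while the state machine runs this transition.
    RMNodeDecreaseContainerEvent de = (RMNodeDecreaseContainerEvent) event;
    for (Container c : de.getToBeDecreasedContainers()) {
      rmNode.toBeDecreasedContainers.put(c.getId(), c);
    }
  }
}
{code}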

5. How about renaming {{CombinedContainerAllocator}} to 
{{GeneralContainerAllocator}}, and renaming {{RegularContainerAllocator}} to 
{{NewContainerAllocator}}?



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-01 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725533#comment-14725533
 ] 

MENG DING commented on YARN-1651:
-

Hi, [~leftnoteasy], thanks so much for posting the patch.  

I do have one question regarding the patch. Recall that during the design 
discussion, we agreed that as long as an increase has not yet completed for a 
container, we should not process any other increase/decrease requests for the 
same container. It seems that this patch will still process decrease/increase 
requests even when an increase action is ongoing?

If the following sequence of events happens:
Example 1:
1. AM sends container increase request to RM
2. RM allocates the resource and gives out increase token to AM
3. AM sends decrease request to RM for the same container
4. AM uses the increase token to increase resource on NM
5. NM reports container status back to RM

IIUC, at step 3, this patch will decrease the container size and remove the 
container from the allocation expirer. At step 5, this patch will see that the 
RM container size is smaller than the reported NM container size, and will 
tell the NM to decrease the container resource. The concern I have with this 
approach is that in step 4, the user will think that the increase has 
successfully completed in the NM, but in fact it hasn't.

Also, what will happen in the following sequence of events?
Example 2:
1. AM sends container increase request to RM
2. RM allocates the resource and gives out increase token (token1) to AM
3. AM sends a new container increase request for the same container to RM with 
more resource
4. RM allocates the resource and gives out increase token (token2) to AM
5. AM uses token1 (the one with smaller size) to increase resource on NM, but 
not token2

IIUC, when the RM receives the increase report from the NM, it will find that 
the RM container size is larger than the reported NM container size and do 
nothing about it; later on, when token2 expires, the entire container will be 
killed according to the current implementation. I think this behavior could be 
quite confusing to the user.

IMHO, at least for the case in example 2, we should delay processing of the 
second increase request until the first increase action is completed.
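A rough sketch of the proposed deferral (made-up names and fields, sizes in 
MB; not scheduler code): a new increase for a container is held while an 
earlier increase token for that container is still unconfirmed:
{code:title=Sketch: deferring a second increase until the first completes}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.yarn.api.records.ContainerId;

class IncreaseRequestGate {
  // containers with an increase token handed out but not yet confirmed by NM
  private final Set<ContainerId> outstanding = new HashSet<>();
  // increase targets (MB) that arrived while an earlier increase was in flight
  private final Map<ContainerId, Long> deferred = new HashMap<>();

  // Returns true if the request may be processed now, false if it was deferred.
  boolean tryProcess(ContainerId id, long targetMB) {
    if (!outstanding.add(id)) {
      deferred.put(id, targetMB);  // an earlier increase is still in flight
      return false;
    }
    return true;
  }

  // Called when the NM confirms the earlier increase (or it expires); the
  // deferred request, if any, can then be handed back to the scheduler.
  Long completeAndTakeDeferred(ContainerId id) {
    outstanding.remove(id);
    return deferred.remove(id);
  }
}
{code}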



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724606#comment-14724606
 ] 

Hadoop QA commented on YARN-1651:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  21m 15s | Findbugs (version ) appears to 
be broken on YARN-1197. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 20 new or modified test files. |
| {color:red}-1{color} | javac |   9m  5s | The applied patch generated  1  
additional warning messages. |
| {color:red}-1{color} | javadoc |  12m  3s | The applied patch generated  2  
additional warning messages. |
| {color:red}-1{color} | release audit |   0m 22s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m  3s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |  26m 37s | The patch has 151  line(s) 
that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   2m  3s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   6m 47s | The patch appears to introduce 8 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |  10m 17s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | tools/hadoop tests |   0m 57s | Tests passed in 
hadoop-sls. |
| {color:red}-1{color} | yarn tests |   7m 26s | Tests failed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   0m 26s | Tests passed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |  49m 36s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 149m 45s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-common |
| FindBugs | module:hadoop-yarn-server-resourcemanager |
| Failed unit tests | hadoop.yarn.client.TestApplicationClientProtocolOnHA |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
|   | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates |
|   | hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.resourcemanager.TestResourceManager |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebAppFairScheduler |
|   | 
hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions |
|   | hadoop.yarn.server.resourcemanager.webapp.dao.TestFairSchedulerQueueInfo |
|   | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCResponseId |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.security.TestAMRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753408/YARN-1651-1.YARN-1197.patch
 |
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | YARN-1197 / f35a945 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/diffJavacWarnings.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/diffJavadocWarnings.txt
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-common.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-sls test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/testrun_hadoop-sls.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8955/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-server-common test log | 

[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-08-31 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723493#comment-14723493
 ] 

MENG DING commented on YARN-1651:
-

[~leftnoteasy], I just realized that there is one issue not discussed in the 
protocol design regarding the {{ContainerResourceChangeRequestProto}} that may 
affect the scheduler:

{code}
message ContainerResourceChangeRequestProto {
  optional ContainerIdProto container_id = 1;
  optional ResourceProto capability = 2;
} 
{code}

Shall we add a priority field to {{ContainerResourceChangeRequestProto}}? 
Without a priority field, how does the scheduler decide the priority between 
increase/decrease requests and new allocation requests within the same 
application? Does it simply assume that increase/decrease requests have the 
highest priority within the same application? If so, that may not be the 
correct thing to do ... What do you think?



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-08-31 Thread MENG DING (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723572#comment-14723572
 ] 

MENG DING commented on YARN-1651:
-

Correction: the concern only applies to resource increase requests. Decrease 
requests are irrelevant in this context.



[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-08-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723608#comment-14723608
 ] 

Wangda Tan commented on YARN-1651:
--

Hi [~mding],
I think for now we can assume increase has higher priority; we can add a 
priority field later if people think it's important.

Attaching a WIP patch. I think most of the functionality is complete; there 
are a few pending items that need tests, and some parts of the code should be 
polished. The patch assumes increase requests have higher priority than 
regular requests.
