[jira] [Commented] (YARN-2513) Host framework UIs in YARN for use with the ATS

2014-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132565#comment-14132565
 ] 

Hadoop QA commented on YARN-2513:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668429/YARN-2513-v1.patch
  against trunk revision 98588cf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4952//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4952//console

This message is automatically generated.

 Host framework UIs in YARN for use with the ATS
 ---

 Key: YARN-2513
 URL: https://issues.apache.org/jira/browse/YARN-2513
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Attachments: YARN-2513-v1.patch


 Allow for pluggable UIs as described by TEZ-8. YARN can provide the 
 infrastructure to host JavaScript and possibly Java UIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2523) ResourceManager UI showing negative value for Decommissioned Nodes field

2014-09-13 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-2523:
-
Attachment: YARN-2523.patch

Uploaded a patch to fix this issue. 
Test details: 
1. Reproduced the issue with a test, then applied the patch.
2. Removed a duplicate assertion line.

 ResourceManager UI showing negative value for Decommissioned Nodes field
 --

 Key: YARN-2523
 URL: https://issues.apache.org/jira/browse/YARN-2523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.0.0
Reporter: Nishan Shetty
Assignee: Rohith
 Attachments: YARN-2523.patch


 1. Decommission one NodeManager by configuring its IP in the excludehost file
 2. Remove the IP from the excludehost file
 3. Execute the -refreshNodes command and restart the decommissioned NodeManager
 Observe that the RM UI shows a negative value for the Decommissioned Nodes field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2523) ResourceManager UI showing negative value for Decommissioned Nodes field

2014-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132635#comment-14132635
 ] 

Hadoop QA commented on YARN-2523:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668568/YARN-2523.patch
  against trunk revision 98588cf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4953//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4953//console

This message is automatically generated.

 ResourceManager UI showing negative value for Decommissioned Nodes field
 --

 Key: YARN-2523
 URL: https://issues.apache.org/jira/browse/YARN-2523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.0.0
Reporter: Nishan Shetty
Assignee: Rohith
 Attachments: YARN-2523.patch


 1. Decommission one NodeManager by configuring its IP in the excludehost file
 2. Remove the IP from the excludehost file
 3. Execute the -refreshNodes command and restart the decommissioned NodeManager
 Observe that the RM UI shows a negative value for the Decommissioned Nodes field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2547) Cross Origin Filter throws UnsupportedOperationException upon destroy

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132645#comment-14132645
 ] 

Hudson commented on YARN-2547:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2547. Cross Origin Filter throws UnsupportedOperationException upon 
destroy (Mit Desai via jeagles) (jeagles: rev 
54e5794806bd856da0277510efe63656eed23146)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestCrossOriginFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/CrossOriginFilter.java


 Cross Origin Filter throws UnsupportedOperationException upon destroy
 -

 Key: YARN-2547
 URL: https://issues.apache.org/jira/browse/YARN-2547
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Fix For: 2.6.0

 Attachments: YARN-2547.patch, YARN-2547.patch, YARN-2547.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2456) Possible livelock in CapacityScheduler when RM is recovering apps

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132646#comment-14132646
 ] 

Hudson commented on YARN-2456:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2456. Possible livelock in CapacityScheduler when RM is recovering (xgong: 
rev e65ae575a059a426c4c38fdabe22a31eabbb349e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* hadoop-yarn-project/CHANGES.txt


 Possible livelock in CapacityScheduler when RM is recovering apps
 -

 Key: YARN-2456
 URL: https://issues.apache.org/jira/browse/YARN-2456
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.6.0

 Attachments: YARN-2456.1.patch, YARN-2456.2.patch


 Consider this scenario:
 1. RM is configured with a single queue and only one application can be 
 active at a time.
 2. Submit App1 which uses up the queue's whole capacity
 3. Submit App2 which remains pending.
 4. Restart RM.
 5. App2 is recovered before App1, so App2 is added to the activeApplications 
 list. Now App1 remains pending (because of max-active-app limit)
 6. All containers of App1 are now recovered when NM registers, and use up the 
 whole queue capacity again.
 7. Since the queue is full, App2 cannot proceed to allocate AM container.
 8. In the meanwhile, App1 cannot proceed to become active because of the 
 max-active-app limit 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2525) yarn logs command gives error on trunk

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132649#comment-14132649
 ] 

Hudson commented on YARN-2525:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2525. yarn logs command gives error on trunk (Akira AJISAKA via aw) (aw: 
rev 40364dc47c03efa295ae03fe8aa8467017fb6f26)
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-yarn-project/CHANGES.txt


 yarn logs command gives error on trunk
 --

 Key: YARN-2525
 URL: https://issues.apache.org/jira/browse/YARN-2525
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scripts
Reporter: Prakash Ramachandran
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: YARN-2525.patch


 The yarn logs command (trunk branch) gives an error:
 Error: Could not find or load main class 
 org.apache.hadoop.yarn.logaggregation.LogDumper
 Instead, the class should be org.apache.hadoop.yarn.client.cli.LogsCLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2484) FileSystemRMStateStore#readFile/writeFile should close FSData(In|Out)putStream in final block

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132661#comment-14132661
 ] 

Hudson commented on YARN-2484:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2484. FileSystemRMStateStore#readFile/writeFile should close 
FSData(In|Out)putStream in final block. Contributed by Tsuyoshi OZAWA (jlowe: 
rev 78b048393a80a9bd1399d08525590bb211a32d8c)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java


 FileSystemRMStateStore#readFile/writeFile should close 
 FSData(In|Out)putStream in final block
 -

 Key: YARN-2484
 URL: https://issues.apache.org/jira/browse/YARN-2484
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Trivial
 Fix For: 2.6.0

 Attachments: YARN-2484.1.patch, YARN-2484.2.patch


 File descriptors can leak if exceptions are thrown in these methods.
 {code}
  private byte[] readFile(Path inputPath, long len) throws Exception {
    FSDataInputStream fsIn = fs.open(inputPath);
    // state data will not be that long
    byte[] data = new byte[(int) len];
    fsIn.readFully(data);
    fsIn.close();
    return data;
  }
 {code}
 {code}
  private void writeFile(Path outputPath, byte[] data) throws Exception {
    Path tempPath =
        new Path(outputPath.getParent(), outputPath.getName() + ".tmp");
    FSDataOutputStream fsOut = null;
    // This file will be overwritten when app/attempt finishes for saving the
    // final status.
    fsOut = fs.create(tempPath, true);
    fsOut.write(data);
    fsOut.close();
    fs.rename(tempPath, outputPath);
  }
 {code}
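For illustration, a minimal sketch of the close-in-finally shape the summary asks 
for. This is hypothetical code, not the attached patch; it reuses the same fs field 
and Path/stream types as the snippet above.
{code}
  private byte[] readFile(Path inputPath, long len) throws Exception {
    FSDataInputStream fsIn = null;
    try {
      fsIn = fs.open(inputPath);
      // state data will not be that long
      byte[] data = new byte[(int) len];
      fsIn.readFully(data);
      return data;
    } finally {
      if (fsIn != null) {
        fsIn.close();   // runs even when readFully throws, so the descriptor is released
      }
    }
  }

  private void writeFile(Path outputPath, byte[] data) throws Exception {
    Path tempPath =
        new Path(outputPath.getParent(), outputPath.getName() + ".tmp");
    FSDataOutputStream fsOut = null;
    try {
      fsOut = fs.create(tempPath, true);
      fsOut.write(data);
    } finally {
      if (fsOut != null) {
        fsOut.close();  // ensures the temp-file descriptor is not leaked on error
      }
    }
    fs.rename(tempPath, outputPath);
  }
{code}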



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2229) ContainerId can overflow with RM restart

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132660#comment-14132660
 ] 

Hudson commented on YARN-2229:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2229. Changed the integer field of ContainerId to be long type. 
Contributed by Tsuyoshi OZAWA (jianhe: rev 
3122daa80261b466e309e88d88d1e2c030525e3f)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/CheckpointAMPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/Epoch.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/EpochPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
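
For illustration, a minimal sketch of why a 32-bit container id can overflow across 
RM restarts and how widening it to a long with the restart epoch in the upper bits 
avoids that. The class name and bit split below are assumptions for the example, not 
the actual ContainerId layout.
{code}
public final class LongContainerIdSketch {
  // Assumed layout for illustration: upper bits carry the RM restart epoch,
  // lower 40 bits carry the per-application container sequence number.
  private static final int EPOCH_SHIFT = 40;
  private static final long SEQUENCE_MASK = (1L << EPOCH_SHIFT) - 1;

  static long newContainerId(long epoch, long sequence) {
    return (epoch << EPOCH_SHIFT) | (sequence & SEQUENCE_MASK);
  }

  public static void main(String[] args) {
    int intId = Integer.MAX_VALUE;
    System.out.println(intId + 1);                 // a 32-bit id wraps to -2147483648
    System.out.println(newContainerId(3, 12345));  // a long id stays unique and positive
  }
}
{code}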


 ContainerId can overflow with RM restart
 

 Key: YARN-2229
 URL: https://issues.apache.org/jira/browse/YARN-2229
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.6.0

 Attachments: YARN-2229.1.patch, 

[jira] [Commented] (YARN-2528) Cross Origin Filter Http response split vulnerability protection rejects valid origins

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132654#comment-14132654
 ] 

Hudson commented on YARN-2528:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2528. Relaxed http response split vulnerability protection for the origins 
header and made it accept multiple origins in CrossOriginFilter. Contributed by 
Jonathan Eagles. (zjshen: rev 98588cf044d9908ecf767257c09a52cf17aa2ec2)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestCrossOriginFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/CrossOriginFilter.java


 Cross Origin Filter Http response split vulnerability protection rejects 
 valid origins
 --

 Key: YARN-2528
 URL: https://issues.apache.org/jira/browse/YARN-2528
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Fix For: 2.6.0

 Attachments: YARN-2528-v1.patch, YARN-2528-v2.patch


 URL encoding is too strong a protection against HTTP response splitting, and 
 major browsers reject the encoded Origin. An adequate protection is simply to 
 remove all CRs and LFs, as PHP's header function does.
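For illustration, a minimal sketch of the CR/LF-stripping idea described above. The 
class and method names are hypothetical, not the CrossOriginFilter patch itself.
{code}
public final class HeaderSanitizer {
  private HeaderSanitizer() {}

  // Drop CR and LF so a crafted Origin value cannot split the HTTP response,
  // while leaving legitimate origins untouched (unlike URL-encoding them).
  static String stripCrLf(String headerValue) {
    return headerValue == null ? null : headerValue.replaceAll("[\r\n]", "");
  }

  public static void main(String[] args) {
    System.out.println(stripCrLf("http://example.com\r\nSet-Cookie: x=1"));
    // prints "http://example.comSet-Cookie: x=1" -- the injected header line is gone
  }
}
{code}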



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2542) yarn application -status appId throws NPE when retrieving the app from the timelineserver

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132652#comment-14132652
 ] 

Hudson commented on YARN-2542:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #679 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/679/])
YARN-2542. Fixed NPE when retrieving ApplicationReport from TimeLineServer. 
Contributed by Zhijie Shen (jianhe: rev 
a0ad975ea1e70f9532cf6cb6c1d9d92736ca0ebc)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java


 yarn application -status appId throws NPE when retrieving the app from 
 the timelineserver
 -

 Key: YARN-2542
 URL: https://issues.apache.org/jira/browse/YARN-2542
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.6.0

 Attachments: YARN-2542.1.patch, YARN-2542.2.patch, YARN-2542.3.patch, 
 YARN-2542.4.patch


 yarn application -status appId throws NPE when retrieving the app from 
 the timelineserver. It's broken by YARN-415. When app is finished, there's no 
 usageReport.
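For illustration, a minimal sketch of the defensive shape such a fix takes. The 
helper below is hypothetical, not the committed ApplicationCLI change; it only 
assumes the public ApplicationReport and ApplicationResourceUsageReport record types.
{code}
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport;

public final class UsageReportGuard {
  private UsageReportGuard() {}

  // A finished app served by the timeline server may carry no usage report at all,
  // so read it defensively instead of dereferencing it.
  static String usedContainersOrNA(ApplicationReport report) {
    ApplicationResourceUsageReport usage = report.getApplicationResourceUsageReport();
    return usage == null ? "N/A" : String.valueOf(usage.getNumUsedContainers());
  }
}
{code}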



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2523) ResourceManager UI showing negative value for Decommissioned Nodes field

2014-09-13 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-2523:
-
Attachment: YARN-2523.patch

Verified the fix again: decommissioned nodes should not be decremented again in 
RMNodeImpl#updateMetricsForRejoinedNode() based on the previous state, because the 
latest decommissioned-node count has already been updated by AdminService#refreshNodes().
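For illustration, a sketch of the behaviour described above: on rejoin, decrement the 
counter matching the node's previous state, except for DECOMMISSIONED, which 
AdminService#refreshNodes has already corrected. All names below are hypothetical; 
this is not the attached patch.
{code}
import org.apache.hadoop.yarn.api.records.NodeState;

final class RejoinMetricsSketch {
  int lostNodes, rebootedNodes, unhealthyNodes, decommissionedNodes, activeNodes;

  void updateMetricsForRejoinedNode(NodeState previousState) {
    activeNodes++;
    switch (previousState) {
      case LOST:      lostNodes--;      break;
      case REBOOTED:  rebootedNodes--;  break;
      case UNHEALTHY: unhealthyNodes--; break;
      case DECOMMISSIONED:
        // Do not decrement here: refreshNodes already lowered the decommissioned
        // count, and decrementing a second time is what drives it negative.
        break;
      default:        break;
    }
  }
}
{code}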

 ResourceManager UI showing negative value for Decommissioned Nodes field
 --

 Key: YARN-2523
 URL: https://issues.apache.org/jira/browse/YARN-2523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.0.0
Reporter: Nishan Shetty
Assignee: Rohith
 Attachments: YARN-2523.patch, YARN-2523.patch


 1. Decommission one NodeManager by configuring its IP in the excludehost file
 2. Remove the IP from the excludehost file
 3. Execute the -refreshNodes command and restart the decommissioned NodeManager
 Observe that the RM UI shows a negative value for the Decommissioned Nodes field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2523) ResourceManager UI showing negative value for Decommissioned Nodes field

2014-09-13 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132688#comment-14132688
 ] 

Rohith commented on YARN-2523:
--

Attached an updated patch. Please review.

 ResourceManager UI showing negative value for Decommissioned Nodes field
 --

 Key: YARN-2523
 URL: https://issues.apache.org/jira/browse/YARN-2523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.0.0
Reporter: Nishan Shetty
Assignee: Rohith
 Attachments: YARN-2523.patch, YARN-2523.patch


 1. Decommission one NodeManager by configuring its IP in the excludehost file
 2. Remove the IP from the excludehost file
 3. Execute the -refreshNodes command and restart the decommissioned NodeManager
 Observe that the RM UI shows a negative value for the Decommissioned Nodes field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2484) FileSystemRMStateStore#readFile/writeFile should close FSData(In|Out)putStream in final block

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132719#comment-14132719
 ] 

Hudson commented on YARN-2484:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2484. FileSystemRMStateStore#readFile/writeFile should close 
FSData(In|Out)putStream in final block. Contributed by Tsuyoshi OZAWA (jlowe: 
rev 78b048393a80a9bd1399d08525590bb211a32d8c)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* hadoop-yarn-project/CHANGES.txt


 FileSystemRMStateStore#readFile/writeFile should close 
 FSData(In|Out)putStream in final block
 -

 Key: YARN-2484
 URL: https://issues.apache.org/jira/browse/YARN-2484
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Trivial
 Fix For: 2.6.0

 Attachments: YARN-2484.1.patch, YARN-2484.2.patch


 File descriptors can leak if exceptions are thrown in these methods.
 {code}
  private byte[] readFile(Path inputPath, long len) throws Exception {
    FSDataInputStream fsIn = fs.open(inputPath);
    // state data will not be that long
    byte[] data = new byte[(int) len];
    fsIn.readFully(data);
    fsIn.close();
    return data;
  }
 {code}
 {code}
  private void writeFile(Path outputPath, byte[] data) throws Exception {
    Path tempPath =
        new Path(outputPath.getParent(), outputPath.getName() + ".tmp");
    FSDataOutputStream fsOut = null;
    // This file will be overwritten when app/attempt finishes for saving the
    // final status.
    fsOut = fs.create(tempPath, true);
    fsOut.write(data);
    fsOut.close();
    fs.rename(tempPath, outputPath);
  }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2229) ContainerId can overflow with RM restart

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132718#comment-14132718
 ] 

Hudson commented on YARN-2229:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2229. Changed the integer field of ContainerId to be long type. 
Contributed by Tsuyoshi OZAWA (jianhe: rev 
3122daa80261b466e309e88d88d1e2c030525e3f)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/CheckpointAMPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/Epoch.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/EpochPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java


 ContainerId can overflow with RM restart
 

 Key: YARN-2229
 URL: https://issues.apache.org/jira/browse/YARN-2229
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.6.0

 Attachments: 

[jira] [Commented] (YARN-2547) Cross Origin Filter throws UnsupportedOperationException upon destroy

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132703#comment-14132703
 ] 

Hudson commented on YARN-2547:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2547. Cross Origin Filter throws UnsupportedOperationException upon 
destroy (Mit Desai via jeagles) (jeagles: rev 
54e5794806bd856da0277510efe63656eed23146)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/CrossOriginFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestCrossOriginFilter.java
* hadoop-yarn-project/CHANGES.txt


 Cross Origin Filter throws UnsupportedOperationException upon destroy
 -

 Key: YARN-2547
 URL: https://issues.apache.org/jira/browse/YARN-2547
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Fix For: 2.6.0

 Attachments: YARN-2547.patch, YARN-2547.patch, YARN-2547.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2456) Possible livelock in CapacityScheduler when RM is recovering apps

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132704#comment-14132704
 ] 

Hudson commented on YARN-2456:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2456. Possible livelock in CapacityScheduler when RM is recovering (xgong: 
rev e65ae575a059a426c4c38fdabe22a31eabbb349e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


 Possible livelock in CapacityScheduler when RM is recovering apps
 -

 Key: YARN-2456
 URL: https://issues.apache.org/jira/browse/YARN-2456
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.6.0

 Attachments: YARN-2456.1.patch, YARN-2456.2.patch


 Consider this scenario:
 1. RM is configured with a single queue and only one application can be 
 active at a time.
 2. Submit App1 which uses up the queue's whole capacity
 3. Submit App2 which remains pending.
 4. Restart RM.
 5. App2 is recovered before App1, so App2 is added to the activeApplications 
 list. Now App1 remains pending (because of max-active-app limit)
 6. All containers of App1 are now recovered when NM registers, and use up the 
 whole queue capacity again.
 7. Since the queue is full, App2 cannot proceed to allocate AM container.
 8. In the meanwhile, App1 cannot proceed to become active because of the 
 max-active-app limit 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2542) yarn application -status appId throws NPE when retrieving the app from the timelineserver

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132710#comment-14132710
 ] 

Hudson commented on YARN-2542:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2542. Fixed NPE when retrieving ApplicationReport from TimeLineServer. 
Contributed by Zhijie Shen (jianhe: rev 
a0ad975ea1e70f9532cf6cb6c1d9d92736ca0ebc)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 yarn application -status appId throws NPE when retrieving the app from 
 the timelineserver
 -

 Key: YARN-2542
 URL: https://issues.apache.org/jira/browse/YARN-2542
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.6.0

 Attachments: YARN-2542.1.patch, YARN-2542.2.patch, YARN-2542.3.patch, 
 YARN-2542.4.patch


 yarn application -status appId throws NPE when retrieving the app from 
 the timelineserver. It's broken by YARN-415. When app is finished, there's no 
 usageReport.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2525) yarn logs command gives error on trunk

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132707#comment-14132707
 ] 

Hudson commented on YARN-2525:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2525. yarn logs command gives error on trunk (Akira AJISAKA via aw) (aw: 
rev 40364dc47c03efa295ae03fe8aa8467017fb6f26)
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/bin/yarn


 yarn logs command gives error on trunk
 --

 Key: YARN-2525
 URL: https://issues.apache.org/jira/browse/YARN-2525
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scripts
Reporter: Prakash Ramachandran
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: YARN-2525.patch


 The yarn logs command (trunk branch) gives an error:
 Error: Could not find or load main class 
 org.apache.hadoop.yarn.logaggregation.LogDumper
 Instead, the class should be org.apache.hadoop.yarn.client.cli.LogsCLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2528) Cross Origin Filter Http response split vulnerability protection rejects valid origins

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132712#comment-14132712
 ] 

Hudson commented on YARN-2528:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1895 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1895/])
YARN-2528. Relaxed http response split vulnerability protection for the origins 
header and made it accept multiple origins in CrossOriginFilter. Contributed by 
Jonathan Eagles. (zjshen: rev 98588cf044d9908ecf767257c09a52cf17aa2ec2)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestCrossOriginFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/CrossOriginFilter.java


 Cross Origin Filter Http response split vulnerability protection rejects 
 valid origins
 --

 Key: YARN-2528
 URL: https://issues.apache.org/jira/browse/YARN-2528
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Fix For: 2.6.0

 Attachments: YARN-2528-v1.patch, YARN-2528-v2.patch


 URL encoding is too strong a protection against HTTP response splitting, and 
 major browsers reject the encoded Origin. An adequate protection is simply to 
 remove all CRs and LFs, as PHP's header function does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2523) ResourceManager UI showing negative value for Decommissioned Nodes field

2014-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132735#comment-14132735
 ] 

Hadoop QA commented on YARN-2523:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668575/YARN-2523.patch
  against trunk revision 98588cf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4954//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4954//console

This message is automatically generated.

 ResourceManager UI showing negative value for Decommissioned Nodes field
 --

 Key: YARN-2523
 URL: https://issues.apache.org/jira/browse/YARN-2523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.0.0
Reporter: Nishan Shetty
Assignee: Rohith
 Attachments: YARN-2523.patch, YARN-2523.patch


 1. Decommission one NodeManager by configuring its IP in the excludehost file
 2. Remove the IP from the excludehost file
 3. Execute the -refreshNodes command and restart the decommissioned NodeManager
 Observe that the RM UI shows a negative value for the Decommissioned Nodes field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2547) Cross Origin Filter throws UnsupportedOperationException upon destroy

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132742#comment-14132742
 ] 

Hudson commented on YARN-2547:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1870 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1870/])
YARN-2547. Cross Origin Filter throws UnsupportedOperationException upon 
destroy (Mit Desai via jeagles) (jeagles: rev 
54e5794806bd856da0277510efe63656eed23146)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestCrossOriginFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/CrossOriginFilter.java
* hadoop-yarn-project/CHANGES.txt


 Cross Origin Filter throws UnsupportedOperationException upon destroy
 -

 Key: YARN-2547
 URL: https://issues.apache.org/jira/browse/YARN-2547
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Fix For: 2.6.0

 Attachments: YARN-2547.patch, YARN-2547.patch, YARN-2547.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2484) FileSystemRMStateStore#readFile/writeFile should close FSData(In|Out)putStream in final block

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132758#comment-14132758
 ] 

Hudson commented on YARN-2484:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1870 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1870/])
YARN-2484. FileSystemRMStateStore#readFile/writeFile should close 
FSData(In|Out)putStream in final block. Contributed by Tsuyoshi OZAWA (jlowe: 
rev 78b048393a80a9bd1399d08525590bb211a32d8c)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java


 FileSystemRMStateStore#readFile/writeFile should close 
 FSData(In|Out)putStream in final block
 -

 Key: YARN-2484
 URL: https://issues.apache.org/jira/browse/YARN-2484
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
Priority: Trivial
 Fix For: 2.6.0

 Attachments: YARN-2484.1.patch, YARN-2484.2.patch


 File descriptors can leak if exceptions are thrown in these methods.
 {code}
  private byte[] readFile(Path inputPath, long len) throws Exception {
    FSDataInputStream fsIn = fs.open(inputPath);
    // state data will not be that long
    byte[] data = new byte[(int) len];
    fsIn.readFully(data);
    fsIn.close();
    return data;
  }
 {code}
 {code}
  private void writeFile(Path outputPath, byte[] data) throws Exception {
    Path tempPath =
        new Path(outputPath.getParent(), outputPath.getName() + ".tmp");
    FSDataOutputStream fsOut = null;
    // This file will be overwritten when app/attempt finishes for saving the
    // final status.
    fsOut = fs.create(tempPath, true);
    fsOut.write(data);
    fsOut.close();
    fs.rename(tempPath, outputPath);
  }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2229) ContainerId can overflow with RM restart

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132757#comment-14132757
 ] 

Hudson commented on YARN-2229:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1870 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1870/])
YARN-2229. Changed the integer field of ContainerId to be long type. 
Contributed by Tsuyoshi OZAWA (jianhe: rev 
3122daa80261b466e309e88d88d1e2c030525e3f)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/Epoch.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/preemption/CheckpointAMPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/EpochPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestWorkPreservingRMRestart.java


 ContainerId can overflow with RM restart
 

 Key: YARN-2229
 URL: https://issues.apache.org/jira/browse/YARN-2229
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.6.0

 Attachments: YARN-2229.1.patch, 

[jira] [Commented] (YARN-2542) yarn application -status appId throws NPE when retrieving the app from the timelineserver

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132749#comment-14132749
 ] 

Hudson commented on YARN-2542:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1870 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1870/])
YARN-2542. Fixed NPE when retrieving ApplicationReport from TimeLineServer. 
Contributed by Zhijie Shen (jianhe: rev 
a0ad975ea1e70f9532cf6cb6c1d9d92736ca0ebc)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 yarn application -status appId throws NPE when retrieving the app from 
 the timelineserver
 -

 Key: YARN-2542
 URL: https://issues.apache.org/jira/browse/YARN-2542
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.6.0

 Attachments: YARN-2542.1.patch, YARN-2542.2.patch, YARN-2542.3.patch, 
 YARN-2542.4.patch


 yarn application -status appId throws NPE when retrieving the app from 
 the timelineserver. It's broken by YARN-415. When app is finished, there's no 
 usageReport.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2456) Possible livelock in CapacityScheduler when RM is recovering apps

2014-09-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132743#comment-14132743
 ] 

Hudson commented on YARN-2456:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1870 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1870/])
YARN-2456. Possible livelock in CapacityScheduler when RM is recovering (xgong: 
rev e65ae575a059a426c4c38fdabe22a31eabbb349e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


 Possible livelock in CapacityScheduler when RM is recovering apps
 -

 Key: YARN-2456
 URL: https://issues.apache.org/jira/browse/YARN-2456
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.6.0

 Attachments: YARN-2456.1.patch, YARN-2456.2.patch


 Consider this scenario:
 1. RM is configured with a single queue and only one application can be 
 active at a time.
 2. Submit App1 which uses up the queue's whole capacity
 3. Submit App2 which remains pending.
 4. Restart RM.
 5. App2 is recovered before App1, so App2 is added to the activeApplications 
 list. Now App1 remains pending (because of max-active-app limit)
 6. All containers of App1 are now recovered when NM registers, and use up the 
 whole queue capacity again.
 7. Since the queue is full, App2 cannot proceed to allocate AM container.
 8. In the meanwhile, App1 cannot proceed to become active because of the 
 max-active-app limit 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2550) TestAMRestart fails intermittently

2014-09-13 Thread Rohith (JIRA)
Rohith created YARN-2550:


 Summary: TestAMRestart fails intermittently
 Key: YARN-2550
 URL: https://issues.apache.org/jira/browse/YARN-2550
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Rohith


testShouldNotCountFailureToMaxAttemptRetry(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
  Time elapsed: 50.64 sec  <<< FAILURE!
java.lang.AssertionError: AppAttempt state is not correct (timedout) 
expected:<ALLOCATED> but was:<SCHEDULED>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:84)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:417)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAM(MockRM.java:582)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAndRegisterAM(MockRM.java:589)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForNewAMToLaunchAndRegister(MockRM.java:182)
at 
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testShouldNotCountFailureToMaxAttemptRetry(TestAMRestart.java:402)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2523) ResourceManager UI showing negative value for Decommissioned Nodes field

2014-09-13 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132773#comment-14132773
 ] 

Rohith commented on YARN-2523:
--

I checked the test failure; it is not related to this fix. I raised a new ticket, 
YARN-2550, to track it.

 ResourceManager UI showing negative value for Decommissioned Nodes field
 --

 Key: YARN-2523
 URL: https://issues.apache.org/jira/browse/YARN-2523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 3.0.0
Reporter: Nishan Shetty
Assignee: Rohith
 Attachments: YARN-2523.patch, YARN-2523.patch


 1. Decommission one NodeManager by configuring its IP in the excludehost file
 2. Remove the IP from the excludehost file
 3. Execute the -refreshNodes command and restart the decommissioned NodeManager
 Observe that the RM UI shows a negative value for the Decommissioned Nodes field



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2404) Remove ApplicationAttemptState and ApplicationState class in RMStateStore class

2014-09-13 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2404:
-
Attachment: YARN-2404.4.patch

Rebased on trunk.

 Remove ApplicationAttemptState and ApplicationState class in RMStateStore 
 class 
 

 Key: YARN-2404
 URL: https://issues.apache.org/jira/browse/YARN-2404
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2404.1.patch, YARN-2404.2.patch, YARN-2404.3.patch, 
 YARN-2404.4.patch


 We can remove the ApplicationState and ApplicationAttemptState classes in 
 RMStateStore, given that we already have the ApplicationStateData and 
 ApplicationAttemptStateData records. We may just replace ApplicationState 
 with ApplicationStateData, and similarly for ApplicationAttemptState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2404) Remove ApplicationAttemptState and ApplicationState class in RMStateStore class

2014-09-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132802#comment-14132802
 ] 

Hadoop QA commented on YARN-2404:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12668577/YARN-2404.4.patch
  against trunk revision 98588cf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart

  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/4955//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/4955//console

This message is automatically generated.

 Remove ApplicationAttemptState and ApplicationState class in RMStateStore 
 class 
 

 Key: YARN-2404
 URL: https://issues.apache.org/jira/browse/YARN-2404
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2404.1.patch, YARN-2404.2.patch, YARN-2404.3.patch, 
 YARN-2404.4.patch


 We can remove ApplicationState and ApplicationAttemptState class in 
 RMStateStore, given that we already have ApplicationStateData and 
 ApplicationAttemptStateData records. we may just replace ApplicationState 
 with ApplicationStateData, similarly for ApplicationAttemptState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2518) Support in-process container executor

2014-09-13 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14132809#comment-14132809
 ] 

Naganarasimha G R commented on YARN-2518:
-

Apart from security, resource isolation, and preemption, many things would not work in 
the proposed solution. I would prefer that the AM and its containers be coded to reuse 
containers if launching new containers causes performance problems.

 Support in-process container executor
 -

 Key: YARN-2518
 URL: https://issues.apache.org/jira/browse/YARN-2518
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Affects Versions: 2.5.0
 Environment: Linux, Windows
Reporter: BoYang
Priority: Minor
  Labels: container, dispatch, in-process, job, node

 Node Manager always creates a new process for a new application. We have hit a 
 scenario where we want the node manager to execute the application inside its 
 own process, so we get fast response time. It would be nice if Node Manager 
 or YARN can provide native support for that.
 In general, the scenario is that we have a long running process which can 
 accept requests and process the requests inside its own process. Since YARN 
 is good at scheduling jobs, we want to use YARN to dispatch jobs (e.g. 
 requests in JSON) to the long running process. In that case, we do not want 
 YARN container to spin up a new process for each request. Instead, we want 
 YARN container to send the request to the long running process for further 
 processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2377) Localization exception stack traces are not passed as diagnostic info

2014-09-13 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14132856#comment-14132856
 ] 

Naganarasimha G R commented on YARN-2377:
-

Hi [~jira.shegalov],
In your stringify method you are using SerializedExceptionPBImpl#getMessage, 
#getCause and #getRemoteTrace. Will that not deserialize the exception?

 Localization exception stack traces are not passed as diagnostic info
 -

 Key: YARN-2377
 URL: https://issues.apache.org/jira/browse/YARN-2377
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: YARN-2377.v01.patch


 In the Localizer log one can only see this kind of message
 {code}
 14/07/31 10:29:00 INFO localizer.ResourceLocalizationService: DEBUG: FAILED { 
 hdfs://ha-nn-uri-0:8020/tmp/hadoop-yarn/staging/gshegalov/.staging/job_1406825443306_0004/job.jar,
  1406827248944, PATTERN, (?:classes/|lib/).* }, java.net.UnknownHostException: ha-nn-uri-0
 {code}
 And then only {{ java.net.UnknownHostException: ha-nn-uri-0}} message is 
 propagated as diagnostics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2531) CGroups - Admins should be allowed to enforce strict cpu limits

2014-09-13 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14132870#comment-14132870
 ] 

Naganarasimha G R commented on YARN-2531:
-

Hi [~vvasudev],
The issue and fix seem to be the same as YARN-810?

 CGroups - Admins should be allowed to enforce strict cpu limits
 ---

 Key: YARN-2531
 URL: https://issues.apache.org/jira/browse/YARN-2531
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2531.0.patch


 From YARN-2440 -
 {quote} 
 The other dimension to this is determinism w.r.t performance. Limiting to 
 allocated cores overall (as well as per container later) helps orgs run 
 workloads and reason about them deterministically. One of the examples is 
 benchmarking apps, but deterministic execution is a desired option beyond 
 benchmarks too.
 {quote}
 It would be nice to have an option to let admins to enforce strict cpu limits 
 for apps for things like benchmarking, etc. By default this flag should be 
 off so that containers can use available cpu but admin can turn the flag on 
 to determine worst case performance, etc.
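 For reference, such a strict cap typically maps onto the CFS bandwidth controller of 
 the cpu cgroup. Below is a hedged sketch of that mapping only; the class name, paths 
 and formula are illustrative assumptions, not the attached patch, which would go 
 through the NodeManager's cgroups resource handler.
 {code}
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;

 /** Illustrative sketch of enforcing a strict CPU cap via the cpu cgroup. */
 public class StrictCpuLimitSketch {

   private static final long CFS_PERIOD_US = 100000L; // 100ms scheduling period

   /** Cap the container cgroup at its share of the node's CPU time. */
   public static void applyStrictLimit(Path containerCgroup, int containerVcores,
       int nodeVcores, int nodeCpus) throws IOException {
     // Quota = share of total CPU time the container may use per period (assumed formula).
     long quotaUs = CFS_PERIOD_US * nodeCpus * containerVcores / nodeVcores;
     write(containerCgroup.resolve("cpu.cfs_period_us"), Long.toString(CFS_PERIOD_US));
     write(containerCgroup.resolve("cpu.cfs_quota_us"), Long.toString(quotaUs));
   }

   private static void write(Path file, String value) throws IOException {
     Files.write(file, value.getBytes(StandardCharsets.UTF_8));
   }

   public static void main(String[] args) throws IOException {
     // Example: 2 vcores out of 8 on a 16-core node -> 400ms of CPU per 100ms period.
     applyStrictLimit(Paths.get("/sys/fs/cgroup/cpu/hadoop-yarn/container_01"), 2, 8, 16);
   }
 }
 {code}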



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2377) Localization exception stack traces are not passed as diagnostic info

2014-09-13 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14132994#comment-14132994
 ] 

Gera Shegalov commented on YARN-2377:
-

Hi [~Naganarasimha], there is no deserialization in the sense of converting bytes 
back into the original exception class. These fields are already strings in 
yarn_protos.proto:
{code}
message SerializedExceptionProto {
  optional string message = 1;
  optional string trace = 2;
  optional string class_name = 3;
  optional SerializedExceptionProto cause = 4;
}
{code}
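
So a stringify over those getters only concatenates strings. A rough sketch of the 
idea (hedged: the class below is illustrative and follows the getters mentioned 
above; the actual patch may differ):
{code}
import org.apache.hadoop.yarn.api.records.SerializedException;

/** Illustrative sketch only, not the actual patch. */
public final class SerializedExceptionText {

  /** Flatten the exception and its cause chain into one diagnostics string. */
  public static String stringify(SerializedException e) {
    StringBuilder sb = new StringBuilder();
    while (e != null) {
      // getMessage()/getRemoteTrace() return the plain strings stored in the
      // proto record; nothing is converted back into a Throwable.
      sb.append(e.getMessage()).append('\n').append(e.getRemoteTrace());
      e = e.getCause(); // the cause is itself a SerializedException record
      if (e != null) {
        sb.append("Caused by: ");
      }
    }
    return sb.toString();
  }

  private SerializedExceptionText() { }
}
{code}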

 Localization exception stack traces are not passed as diagnostic info
 -

 Key: YARN-2377
 URL: https://issues.apache.org/jira/browse/YARN-2377
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: YARN-2377.v01.patch


 In the Localizer log one can only see this kind of message
 {code}
 14/07/31 10:29:00 INFO localizer.ResourceLocalizationService: DEBUG: FAILED { 
 hdfs://ha-nn-uri-0:8020/tmp/hadoop-yarn/staging/gshegalov/.staging/job_1406825443306_0004/job.jar,
  1406827248944, PATTERN, (?:classes/|lib/).* }, java.net.UnknownHostException: ha-nn-uri-0
 {code}
 And then only {{ java.net.UnknownHostException: ha-nn-uri-0}} message is 
 propagated as diagnostics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2518) Support in-process container executor

2014-09-13 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-2518:
--
Labels:   (was: container dispatch in-process job node)

Today's containers in YARN are necessarily process trees, meaning they are separate 
processes with their own command lines, environment, etc. If you want an 'in-process' 
container, given that NMs are JVMs, you would need native support for some sort of 
first-class JVM container in YARN. This doesn't exist today, so this ticket doesn't 
make much sense as of now.

Your use-case is valid. And as the above comment says, the solution is to 
launch a container and reuse it.

One other relevant work you may be interested in is YARN-896.
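
To make the reuse pattern concrete, here is a hedged sketch of the dispatch side: the 
AM starts a long-running worker container once (through the usual AMRMClient/NMClient 
path, not shown) and then forwards each request to that worker over its own channel 
instead of asking YARN for a new container per request. The host, port and 
line-oriented protocol are illustrative assumptions.
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/** Illustrative sketch of "launch once, dispatch many"; not a YARN API. */
public class ReusableWorkerDispatcher {

  private final String workerHost;
  private final int workerPort;

  public ReusableWorkerDispatcher(String workerHost, int workerPort) {
    this.workerHost = workerHost;
    this.workerPort = workerPort;
  }

  /** Send one JSON request to the already-running worker and return its reply. */
  public String dispatch(String jsonRequest) throws IOException {
    try (Socket socket = new Socket(workerHost, workerPort);
         PrintWriter out = new PrintWriter(
             new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true);
         BufferedReader in = new BufferedReader(
             new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
      out.println(jsonRequest); // no new YARN container is requested for this work item
      return in.readLine();     // the long-running worker handles it in-process
    }
  }
}
{code}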

 Support in-process container executor
 -

 Key: YARN-2518
 URL: https://issues.apache.org/jira/browse/YARN-2518
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Affects Versions: 2.5.0
 Environment: Linux, Windows
Reporter: BoYang
Priority: Minor

 Node Manager always creates a new process for a new application. We have hit a 
 scenario where we want the node manager to execute the application inside its 
 own process, so we get fast response time. It would be nice if Node Manager 
 or YARN can provide native support for that.
 In general, the scenario is that we have a long running process which can 
 accept requests and process the requests inside its own process. Since YARN 
 is good at scheduling jobs, we want to use YARN to dispatch jobs (e.g. 
 requests in JSON) to the long running process. In that case, we do not want 
 YARN container to spin up a new process for each request. Instead, we want 
 YARN container to send the request to the long running process for further 
 processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2518) Support in-process container executor

2014-09-13 Thread BoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14133089#comment-14133089
 ] 

BoYang commented on YARN-2518:
--

Yes, YARN-896 is a similar scenario. Thanks for bringing that up. Is there anyone 
currently looking at it?

By the way, can you provide more details about launching a container and reusing it? 
Let's say the first job comes in and launches an application master container 
(process). Now the second job comes in. YARN will still launch another application 
master container (process) for the second one. How can the second job reuse the 
previous application master container?

If you mean reusing the non-application-master containers, that might be possible. 
But for my scenario, it needs to reuse the application master container. Is there 
any way to do that?

 Support in-process container executor
 -

 Key: YARN-2518
 URL: https://issues.apache.org/jira/browse/YARN-2518
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Affects Versions: 2.5.0
 Environment: Linux, Windows
Reporter: BoYang
Priority: Minor

 Node Manager always creates a new process for a new application. We have hit a 
 scenario where we want the node manager to execute the application inside its 
 own process, so we get fast response time. It would be nice if Node Manager 
 or YARN can provide native support for that.
 In general, the scenario is that we have a long running process which can 
 accept requests and process the requests inside its own process. Since YARN 
 is good at scheduling jobs, we want to use YARN to dispatch jobs (e.g. 
 requests in JSON) to the long running process. In that case, we do not want 
 YARN container to spin up a new process for each request. Instead, we want 
 YARN container to send the request to the long running process for further 
 processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2513) Host framework UIs in YARN for use with the ATS

2014-09-13 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14133096#comment-14133096
 ] 

Zhijie Shen commented on YARN-2513:
---

[~jeagles], thanks for working on this feature, which looks interesting. The benefit 
I can see in integrating the per-framework UI widget into the timeline server is to 
relieve the framework from deploying its own web server, and to avoid additional data 
movement from the timeline server to the framework web server and then to the user.

To enable the web plugin, so far I can think of the following things that would be 
good to address.

1. Security: there are three potential issues. a) Do we want to use Hadoop HTTP 
authentication to protect the per-framework web plugin? b) We don't have page-level 
or URL-matching access authorization, such that, for example, UserA can only access 
the web pages of its authorized web plugin (a sketch of this follows these points). 
c) Currently, everything inside the timeline server belongs to YARN, so we don't 
limit its access to internal resources. A web plugin is hosted by the timeline server 
on behalf of the framework, and it should only have access to the resources granted 
to it. For example, the Tez web UI should only have access to the Tez metrics in the 
timeline store.

2. Isolation: if for some reason a web plugin crashes, we should make sure it does 
not affect other components in the timeline server or other web plugins. Moreover, if 
multiple web plugins are hosted in the timeline server, we need to handle their 
competition for web server resources and prevent starvation.

3. Scalability: right now everything is hosted in a single web server container. 
Hosting framework UIs will drive more traffic to the timeline server. We may want to 
scale up web server instances to handle users' requests. In addition, it's good to 
think about whether we want to distribute the workload by function: some instances 
serve the raw REST APIs, others serve the web UI of frameworks 1, 2 and 3, and the 
remaining ones serve the web UI of frameworks 4, 5 and 6.
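
On point 1.b, a hedged sketch of what page-level authorization in front of the hosted 
plugins could look like, as a plain servlet Filter. The URL prefixes, users and class 
name are assumptions for illustration, not part of the attached patch.
{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Illustrative sketch: per-plugin URL authorization in the hosting web server. */
public class PluginAccessFilter implements Filter {

  // Prefix of each hosted UI -> users allowed to see it (would come from config).
  private final Map<String, Set<String>> allowedUsersByPrefix =
      new HashMap<String, Set<String>>();

  @Override
  public void init(FilterConfig config) {
    allowedUsersByPrefix.put("/ui/tez", new HashSet<String>(Arrays.asList("tezUser")));
    allowedUsersByPrefix.put("/ui/frameworkA", new HashSet<String>(Arrays.asList("userA")));
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    String user = httpReq.getRemoteUser(); // filled in by the HTTP authentication layer
    String path = httpReq.getRequestURI();

    for (Map.Entry<String, Set<String>> entry : allowedUsersByPrefix.entrySet()) {
      if (path.startsWith(entry.getKey())
          && (user == null || !entry.getValue().contains(user))) {
        ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN);
        return; // reject requests to a plugin the user is not authorized for
      }
    }
    chain.doFilter(req, resp); // everything else passes through
  }

  @Override
  public void destroy() {
  }
}
{code}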

 Host framework UIs in YARN for use with the ATS
 ---

 Key: YARN-2513
 URL: https://issues.apache.org/jira/browse/YARN-2513
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Attachments: YARN-2513-v1.patch


 Allow for pluggable UIs as described by TEZ-8. Yarn can provide the 
 infrastructure to host java script and possible java UIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)