[jira] [Commented] (YARN-3600) AM container link is broken (on a killed application, at least)

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536452#comment-14536452
 ] 

Hudson commented on YARN-3600:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #922 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/922/])
YARN-3600. AM container link is broken (Naganarasimha G R via tgraves) (tgraves: 
rev 5d708a4725529cf09d2dd8b5b4aa3542cc8610b0)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java


 AM container link is broken (on a killed application, at least)
 ---

 Key: YARN-3600
 URL: https://issues.apache.org/jira/browse/YARN-3600
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Sergey Shelukhin
Assignee: Naganarasimha G R
 Fix For: 2.8.0

 Attachments: YARN-3600.20150508-1.patch


 Running a fairly recent (a couple of weeks old) build of 2.8.0-SNAPSHOT. 
 I have an application that ran fine for a while, and then I yarn kill-ed it.
 Now when I go to the only app attempt URL (like so: http://(snip RM host 
 name):8088/cluster/appattempt/appattempt_1429683757595_0795_01)
 I see:
 AM Container: container_1429683757595_0795_01_01
 Node: N/A 
 and the container link goes to {noformat}http://(snip RM host 
 name):8088/cluster/N/A
 {noformat}
 which obviously doesn't work.
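 The likely shape of the fix is to only build a hyperlink when the AM 
 container's node is actually known, and fall back to plain text otherwise. 
 A hedged sketch of that guard (not the committed RMAppAttemptBlock change; 
 class, method, and URL layout below are illustrative):
 {code}
 // Sketch only: guard link construction so an unknown node renders as plain
 // "N/A" text instead of a dead link to http://<rm>:8088/cluster/N/A.
 final class AmContainerLink {

   /**
    * Returns the URL to link the AM container to, or null when the node is
    * unknown so the caller renders plain text instead of a broken link.
    */
   static String containerLink(String nodeHttpAddress, String containerId) {
     if (nodeHttpAddress == null || nodeHttpAddress.isEmpty()
         || "N/A".equals(nodeHttpAddress)) {
       return null; // no node -> no link
     }
     // Illustrative NM-side log URL; the real page layout may differ.
     return "//" + nodeHttpAddress + "/node/containerlogs/" + containerId;
   }

   public static void main(String[] args) {
     // Killed application: node is unknown, so no link is produced.
     System.out.println(containerLink("N/A", "container_0000000000000_0001_01_000001"));
     // Running application: a node-local link is produced.
     System.out.println(containerLink("nm-host:8042", "container_0000000000000_0001_01_000001"));
   }
 }
 {code}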



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3602) TestResourceLocalizationService.testPublicResourceInitializesLocalDir fails Intermittently due to IOException from cleanup

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536704#comment-14536704
 ] 

Hudson commented on YARN-3602:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2138 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2138/])
YARN-3602. 
TestResourceLocalizationService.testPublicResourceInitializesLocalDir fails 
Intermittently due to IOException from cleanup. Contributed by zhihai xu 
(xgong: rev 333f9a896d8a4407ce69cfd0dc8314587a339233)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
* hadoop-yarn-project/CHANGES.txt


 TestResourceLocalizationService.testPublicResourceInitializesLocalDir fails 
 Intermittently due to IOException from cleanup
 --

 Key: YARN-3602
 URL: https://issues.apache.org/jira/browse/YARN-3602
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: YARN-3602.000.patch


 TestResourceLocalizationService.testPublicResourceInitializesLocalDir fails 
 intermittently due to an IOException from cleanup. The stack trace below is 
 from the test report at
 https://builds.apache.org/job/PreCommit-YARN-Build/7729/testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer/TestResourceLocalizationService/testPublicResourceInitializesLocalDir/
 {code}
 Error Message
 Unable to delete directory 
 target/org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService/2/filecache.
 Stacktrace
 java.io.IOException: Unable to delete directory 
 target/org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService/2/filecache.
   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1541)
   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
   at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
   at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
   at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.cleanup(TestResourceLocalizationService.java:187)
 {code}
 It looks like we can safely ignore the IOException in cleanup(), which is 
 called after the test (a sketch of such a tolerant cleanup follows the code 
 block below). The IOException may be due to the test machine environment, 
 because TestResourceLocalizationService/2/filecache is created by 
 ResourceLocalizationService#initializeLocalDir. 
 testPublicResourceInitializesLocalDir creates 0/filecache, 1/filecache, 
 2/filecache and 3/filecache:
 {code}
 for (int i = 0; i < 4; ++i) {
   localDirs.add(lfs.makeQualified(new Path(basedir, i + "")));
   sDirs[i] = localDirs.get(i).toString();
 }
 {code}
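 A minimal sketch of the tolerant cleanup mentioned above (assuming the fix 
 simply catches and ignores the IOException from FileUtils.deleteDirectory in 
 the @After method; the actual patch may differ):
 {code}
 import java.io.File;
 import java.io.IOException;

 import org.apache.commons.io.FileUtils;
 import org.junit.After;

 public class CleanupSketch {

   private final File basedir = new File("target",
       "org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer."
           + "TestResourceLocalizationService");

   @After
   public void cleanup() {
     try {
       // Subdirectories such as <basedir>/2/filecache are created by
       // ResourceLocalizationService#initializeLocalDir and may resist
       // deletion on some machines; that must not fail the test.
       FileUtils.deleteDirectory(basedir);
     } catch (IOException ignored) {
       // Cleanup runs after the assertions, so this failure is harmless.
     }
   }
 }
 {code}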



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2784) Make POM project names consistent

2015-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536595#comment-14536595
 ] 

Hudson commented on YARN-2784:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #180 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/180/])
YARN-2784. Make POM project names consistent. Contributed by Rohith. (devaraj: 
rev 241a72af0dd19040be333d77749f8be17b8aafc7)
* hadoop-yarn-project/hadoop-yarn/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* hadoop-yarn-project/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml


 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
 Fix For: 2.8.0

 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784-branch-2.patch, YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have the project name set to 
 hadoop-mapreduce/hadoop-yarn. These can be made consistent across the Hadoop 
 build, e.g. 'Apache Hadoop YARN module-name' and 'Apache Hadoop MapReduce 
 module-name'.
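 For illustration only (the module shown is arbitrary), the change amounts to 
 rewriting the name element of each module's pom.xml from the artifact-id 
 style to the descriptive style:
 {code}
 <!-- Before (illustrative): the name simply mirrors the artifact id -->
 <name>hadoop-yarn-common</name>

 <!-- After (illustrative): consistent "Apache Hadoop <project> <module>" naming -->
 <name>Apache Hadoop YARN Common</name>
 {code}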



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-221) NM should provide a way for AM to tell it not to aggregate logs.

2015-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536271#comment-14536271
 ] 

Hadoop QA commented on YARN-221:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 10s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 47s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 46s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in 
hadoop-yarn-api. |
| {color:green}+1{color} | yarn tests |   1m 56s | Tests passed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |   5m 57s | Tests failed in 
hadoop-yarn-server-nodemanager. |
| | |  49m 26s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731684/YARN-221-trunk-v4.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 02a4a22 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7846/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7846/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7846/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/7846/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7846/console |


This message was automatically generated.

 NM should provide a way for AM to tell it not to aggregate logs.
 

 Key: YARN-221
 URL: https://issues.apache.org/jira/browse/YARN-221
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Reporter: Robert Joseph Evans
Assignee: Ming Ma
  Labels: BB2015-05-TBR
 Attachments: YARN-221-trunk-v1.patch, YARN-221-trunk-v2.patch, 
 YARN-221-trunk-v3.patch, YARN-221-trunk-v4.patch


 The NodeManager should provide a way for an AM to tell it that the logs 
 should not be aggregated, that they should be aggregated with a high 
 priority, or that they should be aggregated but with a lower priority. The 
 AM should be able to set this in the ContainerLaunchContext to provide a 
 default value, but should also be able to update the value when the 
 container is released. This would allow the NM to not aggregate logs in some 
 cases and avoid connecting to the NN at all.
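 A self-contained toy model of the proposal (this is not an existing YARN 
 API; the enum and class below are purely illustrative):
 {code}
 // Toy model only: an AM-supplied per-container log-aggregation policy with a
 // launch-time default that can be revised when the container is released.
 enum LogAggregationPolicy { NONE, HIGH_PRIORITY, LOW_PRIORITY }

 final class ContainerLogSettings {

   private LogAggregationPolicy policy;

   ContainerLogSettings(LogAggregationPolicy launchDefault) {
     this.policy = launchDefault;   // default supplied at container launch
   }

   void updateOnRelease(LogAggregationPolicy updated) {
     this.policy = updated;         // AM may revise the value at release time
   }

   boolean shouldAggregate() {
     // With NONE the NM can skip aggregation for this container entirely and
     // avoid contacting the NN at all.
     return policy != LogAggregationPolicy.NONE;
   }
 }
 {code}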



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)