[jira] [Commented] (YARN-3044) [Event producers] Implement RM writing app lifecycle events to ATS

2015-05-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533942#comment-14533942
 ] 

Sangjin Lee commented on YARN-3044:
---

Overall it looks good. I think it's really close. I appreciate your patience on 
this JIRA. Some minor and not-so-minor comments follow...

(TimelineMetric.java)
- l.48: I think we can drop this comment. It states the reason for the change, 
but the code itself doesn't need the rationale.

(YarnConfiguration.java)
- I continue to be somewhat puzzled by the term container metrics, as it is 
really used to emit container entities and events, not metrics. I am OK with 
the name as it is for now, and I understand that there is a precedent, but I 
hope at least we can revisit these names at some point and normalize them. Let 
me know what you think.

(SystemMetricsPublisher.java)
- l. 104: nit: space after if

(TimelineServiceV2Publisher.java)
- This hasn't been discussed as much, but I'm wondering when and how we should 
set the child entities. For example, when we create a new app attempt entity, 
do we want to add it to the app entity as a child? The purpose is quick 
navigation to children. This question might be a little beyond the scope of 
this patch, and I'm fine with working on that in a separate JIRA.
- l.232: Some of the info entries are redundant with the app attempt registered 
event. Are they needed?
- l.330: Now that this is taken care of by the RM timeline collector manager, 
do we need this method any more? I don't think it is being called.

 [Event producers] Implement RM writing app lifecycle events to ATS
 --

 Key: YARN-3044
 URL: https://issues.apache.org/jira/browse/YARN-3044
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Naganarasimha G R
  Labels: BB2015-05-TBR
 Attachments: YARN-3044-YARN-2928.004.patch, 
 YARN-3044-YARN-2928.005.patch, YARN-3044-YARN-2928.006.patch, 
 YARN-3044.20150325-1.patch, YARN-3044.20150406-1.patch, 
 YARN-3044.20150416-1.patch


 Per design in YARN-2928, implement RM writing app lifecycle events to ATS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3358) Audit log not present while refreshing Service ACLs

2015-05-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-3358:

Summary: Audit log not present while refreshing Service ACLs  (was: Audit 
log not present while refreshing Service ACLs')

 Audit log not present while refreshing Service ACLs
 ---

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls
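As a rough illustration, here is a minimal, self-contained Java sketch of what such a success entry could look like. The tab-separated USER/OPERATION/TARGET/RESULT layout imitates the RM audit-log style, but the class and method names are hypothetical, not the actual AdminService/RMAuditLogger code.

```java
// Hypothetical sketch of the missing success audit entry for
// AdminService#refreshServiceAcls. The key=value, tab-separated format below
// mirrors the RM audit-log style but is an assumption for illustration.
public class AuditLogSketch {
    // Builds a tab-separated audit line, roughly in the RM audit-log style.
    static String successEntry(String user, String operation, String target) {
        return String.join("\t",
                "USER=" + user,
                "OPERATION=" + operation,
                "TARGET=" + target,
                "RESULT=SUCCESS");
    }

    public static void main(String[] args) {
        // What the missing log line for refreshServiceAcls might look like:
        System.out.println(successEntry("admin", "refreshServiceAcls", "AdminService"));
    }
}
```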





[jira] [Updated] (YARN-3358) Audit log not present while refreshing Service ACLs'

2015-05-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-3358:

Hadoop Flags: Reviewed

+1, patch looks good to me, will commit it.

 Audit log not present while refreshing Service ACLs'
 

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls





[jira] [Commented] (YARN-3554) Default value for maximum nodemanager connect wait time is too high

2015-05-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533959#comment-14533959
 ] 

Naganarasimha G R commented on YARN-3554:
-

Hi [~jlowe], as 3 mins is fine with [~vinodkv], can we get this patch in?

 Default value for maximum nodemanager connect wait time is too high
 ---

 Key: YARN-3554
 URL: https://issues.apache.org/jira/browse/YARN-3554
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Naganarasimha G R
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-3554-20150429-2.patch, YARN-3554.20150429-1.patch


 The default value for yarn.client.nodemanager-connect.max-wait-ms is 900000 
 msec or 15 minutes, which is way too high.  The default container expiry time 
 from the RM and the default task timeout in MapReduce are both only 10 
 minutes.
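For reference, if the 3-minute value suggested in the thread were adopted, a per-cluster override in yarn-site.xml would look like this (the property name comes from the issue description; 180000 ms is simply 3 minutes, an illustrative value rather than a committed default):

```xml
<!-- yarn-site.xml: override the 15-minute default connect wait;
     180000 ms = 3 minutes, the value discussed in the thread -->
<property>
  <name>yarn.client.nodemanager-connect.max-wait-ms</name>
  <value>180000</value>
</property>
```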





[jira] [Updated] (YARN-3476) Nodemanager can fail to delete local logs if log aggregation fails

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-3476:
-
Attachment: 0002-YARN-3476.patch

Updated the patch adding a LOG on the exception. Kindly review the updated patch.

 Nodemanager can fail to delete local logs if log aggregation fails
 --

 Key: YARN-3476
 URL: https://issues.apache.org/jira/browse/YARN-3476
 Project: Hadoop YARN
  Issue Type: Bug
  Components: log-aggregation, nodemanager
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Rohith
  Labels: BB2015-05-TBR
 Attachments: 0001-YARN-3476.patch, 0001-YARN-3476.patch, 
 0002-YARN-3476.patch


 If log aggregation encounters an error trying to upload the file, the 
 underlying TFile can throw an IllegalStateException which will bubble up 
 through the top of the thread and prevent the application logs from being 
 deleted.
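The failure mode and the shape of a fix can be sketched in a few lines of self-contained Java; Step, uploadThenDelete, and the rest are illustrative stand-ins, not the actual NodeManager log-aggregation classes.

```java
// Sketch of the failure mode: if the upload step throws (e.g. an
// IllegalStateException from the underlying TFile writer) and is not caught,
// the deletion step never runs. Names are illustrative, not the NM's classes.
public class LogAggregationSketch {
    interface Step { void run(); }

    // Runs the upload, but always attempts local-log deletion afterwards,
    // recording the upload failure instead of propagating it.
    static boolean uploadThenDelete(Step upload, Step delete) {
        boolean uploadOk = true;
        try {
            upload.run();
        } catch (RuntimeException e) {
            uploadOk = false; // the NM would LOG.error("Aggregation failed", e)
        } finally {
            delete.run();     // local logs are deleted even if aggregation failed
        }
        return uploadOk;
    }

    public static void main(String[] args) {
        boolean[] deleted = {false};
        boolean ok = uploadThenDelete(
                () -> { throw new IllegalStateException("TFile in bad state"); },
                () -> deleted[0] = true);
        System.out.println("uploadOk=" + ok + " deleted=" + deleted[0]);
    }
}
```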





[jira] [Commented] (YARN-3358) Audit log not present while refreshing Service ACLs

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533977#comment-14533977
 ] 

Hudson commented on YARN-3358:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7770 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7770/])
YARN-3358. Audit log not present while refreshing Service ACLs. (devaraj: rev 
ef3d66d4624d360e75c016e36824a6782d6a9746)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java


 Audit log not present while refreshing Service ACLs
 ---

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.1

 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls





[jira] [Updated] (YARN-3358) Audit log not present while refreshing Service ACLs

2015-05-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-3358:

Fix Version/s: 2.7.1
   Labels:   (was: BB2015-05-RFC)

 Audit log not present while refreshing Service ACLs
 ---

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.1

 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls





[jira] [Updated] (YARN-3589) RM and AH web UI display DOCTYPE

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-3589:
-
Labels: BB2015-05-RFC  (was: BB2015-05-TBR)

 RM and AH web UI display DOCTYPE
 

 Key: YARN-3589
 URL: https://issues.apache.org/jira/browse/YARN-3589
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.8.0
Reporter: Rohith
Assignee: Rohith
  Labels: BB2015-05-RFC
 Attachments: 0001-YARN-3589.patch, YARN-3589.PNG


 RM web app UI display {{!DOCTYPE html PUBLIC -\/\/W3C\/\/DTD HTML 
 4.01\/\/EN http:\/\/www.w3.org\/TR\/html4\/strict.dtd}} which is not 
 necessary.
 This is because the content of the HTML page is escaped, so the browser 
 cannot parse it. Escaped content should be within the HTML block, but the 
 doctype is above the html element, which the browser cannot parse.
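The escaping problem described above can be illustrated with a small self-contained Java sketch; the class and method names are hypothetical and the escaper is deliberately simplified, so this is only the shape of the bug and fix, not the actual HtmlPage/TextView code.

```java
// Minimal sketch of the rendering bug: the page body is HTML-escaped, but the
// DOCTYPE must be written out raw (and first). Escaping the DOCTYPE makes the
// browser render the literal "!DOCTYPE ..." text instead of treating it as a
// declaration. Class and method names are illustrative.
public class DoctypeSketch {
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    // Buggy: the whole document, DOCTYPE included, goes through the escaper.
    static String renderBuggy(String body) {
        return escape("<!DOCTYPE html>" + body);
    }

    // Fixed: DOCTYPE is emitted verbatim; only the page content is escaped.
    static String renderFixed(String body) {
        return "<!DOCTYPE html>" + escape(body);
    }

    public static void main(String[] args) {
        System.out.println(renderBuggy("<p>hi</p>"));  // doctype shows up as page text
        System.out.println(renderFixed("<p>hi</p>"));  // doctype is a real declaration
    }
}
```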





[jira] [Updated] (YARN-3591) Resource Localisation on a bad disk causes subsequent containers failure

2015-05-08 Thread Lavkesh Lahngir (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lavkesh Lahngir updated YARN-3591:
--
Attachment: 0001-YARN-3591.1.patch

 Resource Localisation on a bad disk causes subsequent containers failure 
 -

 Key: YARN-3591
 URL: https://issues.apache.org/jira/browse/YARN-3591
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lavkesh Lahngir
 Attachments: 0001-YARN-3591.1.patch, 0001-YARN-3591.patch


 It happens when a resource is localised on a disk, and after localising, that 
 disk has gone bad. The NM keeps paths for localised resources in memory. At 
 the time of a resource request, isResourcePresent(rsrc) will be called, which 
 calls file.exists() on the localised path.
 In some cases when the disk has gone bad, inodes are still cached and 
 file.exists() returns true, but at the time of reading the file will not open.
 Note: file.exists() actually calls stat64 natively, which returns true because 
 it was able to find inode information from the OS.
 A proposal is to call file.list() on the parent path of the resource, which 
 will call open() natively. If the disk is good it should return an array of 
 paths with length at least 1.
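The proposed file.list() check could look roughly like the following self-contained Java sketch; isResourceReadable is a hypothetical name, and this is only an approximation of the NM's localisation code, not the actual patch.

```java
import java.io.File;

// Sketch of the proposed check: instead of trusting file.exists() (stat64 can
// succeed from cached inodes on a bad disk), list the parent directory, which
// forces an open() and should expose a dead disk. Method name is illustrative.
public class LocalizedResourceCheck {
    // Returns true only if the parent directory can actually be listed and is
    // non-empty, i.e. the disk backing the resource is readable.
    static boolean isResourceReadable(File localizedPath) {
        File parent = localizedPath.getParentFile();
        if (parent == null) {
            return false;
        }
        String[] entries = parent.list(); // null if the directory can't be opened
        return entries != null && entries.length >= 1;
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"), "lrc-demo-" + System.nanoTime());
        File rsrc = new File(dir, "resource");
        rsrc.mkdirs(); // the parent now contains at least this entry
        System.out.println(isResourceReadable(rsrc)); // healthy disk: true
    }
}
```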





[jira] [Updated] (YARN-3591) Resource Localisation on a bad disk causes subsequent containers failure

2015-05-08 Thread Lavkesh Lahngir (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lavkesh Lahngir updated YARN-3591:
--
Attachment: (was: 0001-YARN-3591.patch.1)

 Resource Localisation on a bad disk causes subsequent containers failure 
 -

 Key: YARN-3591
 URL: https://issues.apache.org/jira/browse/YARN-3591
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lavkesh Lahngir
 Attachments: 0001-YARN-3591.1.patch, 0001-YARN-3591.patch


 It happens when a resource is localised on a disk, and after localising, that 
 disk has gone bad. The NM keeps paths for localised resources in memory. At 
 the time of a resource request, isResourcePresent(rsrc) will be called, which 
 calls file.exists() on the localised path.
 In some cases when the disk has gone bad, inodes are still cached and 
 file.exists() returns true, but at the time of reading the file will not open.
 Note: file.exists() actually calls stat64 natively, which returns true because 
 it was able to find inode information from the OS.
 A proposal is to call file.list() on the parent path of the resource, which 
 will call open() natively. If the disk is good it should return an array of 
 paths with length at least 1.





[jira] [Commented] (YARN-3170) YARN architecture document needs updating

2015-05-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534137#comment-14534137
 ] 

Brahma Reddy Battula commented on YARN-3170:


[~ozawa] thanks a lot for taking a look into this issue. Updated the patch 
based on your comment. Kindly review.

 YARN architecture document needs updating
 -

 Key: YARN-3170
 URL: https://issues.apache.org/jira/browse/YARN-3170
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3170-002.patch, YARN-3170.patch


 The marketing paragraph at the top, NextGen MapReduce, etc. are all 
 marketing rather than actual descriptions. It also needs some general 
 updates, especially given it reads as though 0.23 was just released yesterday.





[jira] [Commented] (YARN-1019) YarnConfiguration validation for local disk path and http addresses.

2015-05-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534149#comment-14534149
 ] 

Sunil G commented on YARN-1019:
---

I would like to take this over as part of the bug bash. Please reassign it if 
you are already working on it.

I am writing a test case for this fix and will upload a patch soon.

 YarnConfiguration validation for local disk path and http addresses.
 

 Key: YARN-1019
 URL: https://issues.apache.org/jira/browse/YARN-1019
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Omkar Vinit Joshi
Priority: Minor
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-1019.0.patch


 Today we are not validating certain configuration parameters set in 
 yarn-site.xml. 1) Configurations related to paths, such as local-dirs and 
 log-dirs: the NM crashes during startup if they are set to relative paths 
 rather than absolute paths. To avoid such failures we can enforce checks for 
 absolute paths before startup, i.e. before the directory handler creates the 
 directories.
 2) Also validate all parameters using hostname:port, unless we are OK with 
 the default port.
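A minimal Java sketch of the two checks, under the assumption that validation simply fails fast on a relative path or a missing port; validateDirs and validateAddress are hypothetical names, not the patch's API.

```java
// Hedged sketch of the startup validation the issue asks for: reject relative
// local-dirs/log-dirs, and hostname:port values missing an explicit port.
// Names are hypothetical, for illustration only.
public class ConfigValidationSketch {
    // Every configured dir must be an absolute path, or the NM should fail fast.
    static boolean validateDirs(String commaSeparatedDirs) {
        for (String dir : commaSeparatedDirs.split(",")) {
            if (!new java.io.File(dir.trim()).isAbsolute()) {
                return false;
            }
        }
        return true;
    }

    // A hostname:port address must carry a valid port.
    static boolean validateAddress(String address) {
        int idx = address.lastIndexOf(':');
        if (idx <= 0 || idx == address.length() - 1) {
            return false;
        }
        try {
            int port = Integer.parseInt(address.substring(idx + 1));
            return port > 0 && port <= 65535;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(validateDirs("/var/yarn/local,/var/yarn/logs")); // ok
        System.out.println(validateDirs("var/yarn/local"));                 // relative: reject
        System.out.println(validateAddress("rm-host:8032"));                // ok
        System.out.println(validateAddress("rm-host"));                     // no port: reject
    }
}
```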





[jira] [Commented] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534160#comment-14534160
 ] 

Akira AJISAKA commented on YARN-3169:
-

Looks good to me, +1 pending Jenkins.

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Commented] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534181#comment-14534181
 ] 

Rohith commented on YARN-2784:
--

[~devaraj.k] thanks for the review. I updated the patch with {{Apache Hadoop 
YARN Project}} for both trunk and branch-2.

 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784-branch-2.patch, YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have the project name set to 
 hadoop-mapreduce/hadoop-yarn. These can be made consistent across the Hadoop 
 projects build, like 'Apache Hadoop YARN module-name' and 'Apache Hadoop 
 MapReduce module-name'.





[jira] [Commented] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534129#comment-14534129
 ] 

Akira AJISAKA commented on YARN-3169:
-

You can edit hadoop-project/src/site/site.xml. 

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3169-002.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Assigned] (YARN-1019) YarnConfiguration validation for local disk path and http addresses.

2015-05-08 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-1019:
-

Assignee: Sunil G

 YarnConfiguration validation for local disk path and http addresses.
 

 Key: YARN-1019
 URL: https://issues.apache.org/jira/browse/YARN-1019
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Omkar Vinit Joshi
Assignee: Sunil G
Priority: Minor
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-1019.0.patch


 Today we are not validating certain configuration parameters set in 
 yarn-site.xml. 1) Configurations related to paths, such as local-dirs and 
 log-dirs: the NM crashes during startup if they are set to relative paths 
 rather than absolute paths. To avoid such failures we can enforce checks for 
 absolute paths before startup, i.e. before the directory handler creates the 
 directories.
 2) Also validate all parameters using hostname:port, unless we are OK with 
 the default port.





[jira] [Commented] (YARN-3590) Change the attribute maxRunningApps of FairScheduler queue, it should take effect for applications in the queue, but does not

2015-05-08 Thread zhoulinlin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534037#comment-14534037
 ] 

zhoulinlin commented on YARN-3590:
--

Yes. It duplicates YARN-3057. Thanks!

 Change the attribute maxRunningApps of FairScheduler queue, it should take 
 effect for applications in the queue, but does not
 ---

 Key: YARN-3590
 URL: https://issues.apache.org/jira/browse/YARN-3590
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 2.5.2
Reporter: zhoulinlin

 Change the queue attribute maxRunningApps for FairScheduler and then 
 refresh queues; it should affect the applications in the queue immediately, 
 but it does not.
 It only takes effect when another application leaves this queue.
 For example: maxRunningApps is 0; submit an application A to this queue, and 
 it can't run. Then change maxRunningApps from 0 to 2 and refresh the queue; 
 application A should run, but does not. If you submit another application B, 
 application A runs when application B completes.
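The expected behavior can be shown with a tiny self-contained Java simulation; this is an illustrative model of a queue with a running-apps limit, assuming hypothetical names, and is not the FairScheduler implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Self-contained model of the behavior the report expects: after
// maxRunningApps is raised and the queue is refreshed, pending apps should be
// promoted immediately, not only when a running app leaves. Illustrative only.
public class MaxRunningAppsSketch {
    int maxRunningApps;
    final Deque<String> pending = new ArrayDeque<>();
    final Deque<String> running = new ArrayDeque<>();

    MaxRunningAppsSketch(int maxRunningApps) {
        this.maxRunningApps = maxRunningApps;
    }

    void submit(String app) {
        pending.add(app);
        promote();
    }

    // The expected fix: a refresh re-runs promotion with the new limit.
    void refresh(int newMax) {
        maxRunningApps = newMax;
        promote();
    }

    void promote() {
        while (running.size() < maxRunningApps && !pending.isEmpty()) {
            running.add(pending.poll());
        }
    }

    public static void main(String[] args) {
        MaxRunningAppsSketch queue = new MaxRunningAppsSketch(0);
        queue.submit("A");                 // A stays pending: the limit is 0
        queue.refresh(2);                  // raising the limit promotes A at once
        System.out.println(queue.running); // A is now running
    }
}
```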





[jira] [Updated] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-2784:

Labels: BB2015-05-TBR  (was: BB2015-05-RFC)

Thanks [~rohithsharma] for the patch. The patch looks good to me except the 
below.

Is there any reason for adding 'POM' as part of the name? Would it be OK if we 
mention it like this: Apache Hadoop YARN Project?
{code:xml}
-  <name>hadoop-yarn-project</name>
+  <name>Apache Hadoop YARN Project POM</name>
{code}

Also, this patch is not applying to branch-2; can you provide a patch for 
branch-2 as well?


 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0002-YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All yarn and mapreduce pom.xml has project name has 
 hadoop-mapreduce/hadoop-yarn. This can be made consistent acros Hadoop 
 projects build like 'Apache Hadoop Yarn module-name' and 'Apache Hadoop 
 MapReduce module-name.





[jira] [Commented] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534146#comment-14534146
 ] 

Brahma Reddy Battula commented on YARN-3169:


Thanks a lot [~ajisakaa] for your pointer. Updated the patch based on your 
comment. Kindly review, thanks.

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Updated] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated YARN-3169:
---
Attachment: YARN-3169-003.patch

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Commented] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534184#comment-14534184
 ] 

Tsuyoshi Ozawa commented on YARN-3169:
--

+1

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Commented] (YARN-3589) RM and AH web UI display DOCTYPE wrongly

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534203#comment-14534203
 ] 

Hudson commented on YARN-3589:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7772 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7772/])
YARN-3589. RM and AH web UI display DOCTYPE wrongly. Contributed by Rohith. 
(ozawa: rev f26700f2878f4374c68e97ee00205eda5a6d022c)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/HtmlPage.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java


 RM and AH web UI display DOCTYPE wrongly
 

 Key: YARN-3589
 URL: https://issues.apache.org/jira/browse/YARN-3589
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.8.0
Reporter: Rohith
Assignee: Rohith
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: 0001-YARN-3589.patch, YARN-3589.PNG


 RM web app UI display {{!DOCTYPE html PUBLIC -\/\/W3C\/\/DTD HTML 
 4.01\/\/EN http:\/\/www.w3.org\/TR\/html4\/strict.dtd}} which is not 
 necessary.
 This is because the content of the HTML page is escaped, so the browser 
 cannot parse it. Escaped content should be within the HTML block, but the 
 doctype is above the html element, which the browser cannot parse.





[jira] [Commented] (YARN-3589) RM and AH web UI display DOCTYPE

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534006#comment-14534006
 ] 

Tsuyoshi Ozawa commented on YARN-3589:
--

+1, committing this shortly.

 RM and AH web UI display DOCTYPE
 

 Key: YARN-3589
 URL: https://issues.apache.org/jira/browse/YARN-3589
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.8.0
Reporter: Rohith
Assignee: Rohith
  Labels: BB2015-05-RFC
 Attachments: 0001-YARN-3589.patch, YARN-3589.PNG


 RM web app UI display {{!DOCTYPE html PUBLIC -\/\/W3C\/\/DTD HTML 
 4.01\/\/EN http:\/\/www.w3.org\/TR\/html4\/strict.dtd}} which is not 
 necessary.
 This is because the content of the HTML page is escaped, so the browser 
 cannot parse it. Escaped content should be within the HTML block, but the 
 doctype is above the html element, which the browser cannot parse.





[jira] [Commented] (YARN-3170) YARN architecture document needs updating

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534119#comment-14534119
 ] 

Tsuyoshi Ozawa commented on YARN-3170:
--

[~brahmareddy] thank you for taking this issue.

{code}
+Apache Hadoop Yarn
{code}

Yarn should be YARN here.  +1 after the comment is addressed.

 YARN architecture document needs updating
 -

 Key: YARN-3170
 URL: https://issues.apache.org/jira/browse/YARN-3170
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3170.patch


 The marketing paragraph at the top, NextGen MapReduce, etc. are all 
 marketing rather than actual descriptions. It also needs some general 
 updates, especially given it reads as though 0.23 was just released yesterday.





[jira] [Commented] (YARN-3358) Audit log not present while refreshing Service ACLs

2015-05-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534134#comment-14534134
 ] 

Varun Saxena commented on YARN-3358:


Thanks for the commit [~devaraj.k]

 Audit log not present while refreshing Service ACLs
 ---

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.1

 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls





[jira] [Updated] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-2784:
-
Attachment: YARN-2784.patch
YARN-2784-branch-2.patch

 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784-branch-2.patch, YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have the project name set to 
 hadoop-mapreduce/hadoop-yarn. These can be made consistent across the Hadoop 
 projects build, like 'Apache Hadoop YARN module-name' and 'Apache Hadoop 
 MapReduce module-name'.





[jira] [Updated] (YARN-3554) Default value for maximum nodemanager connect wait time is too high

2015-05-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3554:

Labels: BB2015-05-RFC newbie  (was: BB2015-05-TBR newbie)

 Default value for maximum nodemanager connect wait time is too high
 ---

 Key: YARN-3554
 URL: https://issues.apache.org/jira/browse/YARN-3554
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Naganarasimha G R
  Labels: BB2015-05-RFC, newbie
 Attachments: YARN-3554-20150429-2.patch, YARN-3554.20150429-1.patch


 The default value for yarn.client.nodemanager-connect.max-wait-ms is 900000 
 msec or 15 minutes, which is way too high.  The default container expiry time 
 from the RM and the default task timeout in MapReduce are both only 10 
 minutes.





[jira] [Commented] (YARN-64) Add cluster-level stats available via RPCs

2015-05-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-64?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534025#comment-14534025
 ] 

Sunil G commented on YARN-64:
-

Hi [~vinodkv]
Now we have cluster stats in the UI and REST. Do we need to have them at the 
command line/RPC level too?
 

 Add cluster-level stats available via RPCs
 -

 Key: YARN-64
 URL: https://issues.apache.org/jira/browse/YARN-64
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Vinod Kumar Vavilapalli
Assignee: Ravi Teja Ch N V

 MAPREDUCE-2738 already added the stats to the UI. It'll be helpful to add 
 them to YarnClusterMetrics and make them available via the command-line/RPC.





[jira] [Updated] (YARN-3591) Resource Localisation on a bad disk causes subsequent containers failure

2015-05-08 Thread Lavkesh Lahngir (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lavkesh Lahngir updated YARN-3591:
--
Attachment: 0001-YARN-3591.patch.1

 Resource Localisation on a bad disk causes subsequent containers failure 
 -

 Key: YARN-3591
 URL: https://issues.apache.org/jira/browse/YARN-3591
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lavkesh Lahngir
 Attachments: 0001-YARN-3591.patch, 0001-YARN-3591.patch.1


 It happens when a resource is localised on a disk and that disk subsequently 
 goes bad. The NM keeps the paths of localised resources in memory.  At the 
 time of a resource request, isResourcePresent(rsrc) is called, which calls 
 file.exists() on the localised path.
 In some cases when the disk has gone bad, inodes are still cached and 
 file.exists() returns true, but at read time the file will not open.
 Note: file.exists() actually calls stat64 natively, which returns true because 
 it can find the inode information from the OS.
 A proposal is to call file.list() on the parent path of the resource, which 
 calls open() natively. If the disk is good, it should return an array of 
 paths with length at least 1.
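The proposed check can be sketched as below. This is a minimal illustration of the idea only, not the actual NM code; the class and method are made-up names:

```java
import java.io.File;

// Sketch of the proposal: instead of trusting file.exists() alone
// (which may succeed from cached inode metadata on a failed disk),
// list the parent directory, which forces an open() on the device.
public class LocalizedResourceCheck {

    // Returns true only if the parent directory can actually be read
    // and the localized path is still present.
    public static boolean isResourcePresent(File localizedPath) {
        File parent = localizedPath.getParentFile();
        if (parent == null) {
            return false;
        }
        // list() returns null when the directory cannot be opened,
        // e.g. because the disk has gone bad.
        String[] entries = parent.list();
        return entries != null && localizedPath.exists();
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(isResourcePresent(tmp));
    }
}
```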





[jira] [Updated] (YARN-3589) RM and AH web UI display DOCTYPE wrongly

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-3589:
-
Summary: RM and AH web UI display DOCTYPE wrongly  (was: RM and AH web UI 
display DOCTYPE)

 RM and AH web UI display DOCTYPE wrongly
 

 Key: YARN-3589
 URL: https://issues.apache.org/jira/browse/YARN-3589
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.8.0
Reporter: Rohith
Assignee: Rohith
  Labels: BB2015-05-RFC
 Attachments: 0001-YARN-3589.patch, YARN-3589.PNG


 RM web app UI displays {{!DOCTYPE html PUBLIC -\/\/W3C\/\/DTD HTML 
 4.01\/\/EN http:\/\/www.w3.org\/TR\/html4\/strict.dtd}}, which is not 
 necessary.
 This is because the content of the HTML page is escaped, with the result that 
 the browser cannot parse it. Any escaped content should be within the HTML 
 block, but the doctype is above the html element, so the browser can't parse 
 it.





[jira] [Commented] (YARN-3599) Fix the javadoc of DelegationTokenSecretManager in hadoop-yarn

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534097#comment-14534097
 ] 

Tsuyoshi Ozawa commented on YARN-3599:
--

[~gliptak], thank you for taking this issue. Looks good to me overall. Could 
you check the following points?

Please check the indentation of the in milliseconds line - it seems to have 
an additional space.
{code}
  * @param delegationTokenRenewInterval how often the tokens must be renewed
+ * in milliseconds
{code}

The following line is over 80 characters. 
{code}
+ * @param delegationTokenRemoverScanInterval how often the tokens are 
scanned
{code}


The link looks broken. Should we just remove the link annotation?
{code}
+   * @param rmContext current context of the {@link ResourceManager}
{code}



 Fix the javadoc of DelegationTokenSecretManager in hadoop-yarn
 --

 Key: YARN-3599
 URL: https://issues.apache.org/jira/browse/YARN-3599
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: documentation
Reporter: Gabor Liptak
Priority: Trivial
 Attachments: YARN-3599.patch








[jira] [Commented] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534122#comment-14534122
 ] 

Tsuyoshi Ozawa commented on YARN-3169:
--

[~ajisakaa] do you know how to edit the left menu of the documentation? 
[~brahmareddy] asked me about it offline.

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3169-002.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Updated] (YARN-3170) YARN architecture document needs updating

2015-05-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated YARN-3170:
---
Attachment: YARN-3170-002.patch

 YARN architecture document needs updating
 -

 Key: YARN-3170
 URL: https://issues.apache.org/jira/browse/YARN-3170
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3170-002.patch, YARN-3170.patch


 The marketing paragraph at the top, NextGen MapReduce, etc. are all 
 marketing rather than actual descriptions. It also needs some general 
 updates, especially given it reads as though 0.23 was just released yesterday.





[jira] [Updated] (YARN-3169) drop the useless yarn overview document

2015-05-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-3169:

Labels: BB2015-05-RFC  (was: BB2015-05-TBR)

 drop the useless yarn overview document
 ---

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Updated] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-2784:
-
Attachment: YARN-2784-branch-2.patch

Updated the patch for branch-2

 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have the project name set to 
 hadoop-mapreduce/hadoop-yarn. These can be made consistent across the Hadoop 
 project builds, e.g. 'Apache Hadoop YARN module-name' and 'Apache Hadoop 
 MapReduce module-name'.





[jira] [Updated] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-3169:
-
Hadoop Flags: Reviewed

 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Updated] (YARN-3513) Remove unused variables in ContainersMonitorImpl and add debug log for overall resource usage by all containers

2015-05-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3513:

Attachment: YARN-3513.20150508-1.patch

Oops, I introduced whitespace; uploading a new patch with the correction!

 Remove unused variables in ContainersMonitorImpl and add debug log for 
 overall resource usage by all containers 
 

 Key: YARN-3513
 URL: https://issues.apache.org/jira/browse/YARN-3513
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
Priority: Trivial
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-3513.20150421-1.patch, YARN-3513.20150503-1.patch, 
 YARN-3513.20150506-1.patch, YARN-3513.20150507-1.patch, 
 YARN-3513.20150508-1.patch, YARN-3513.20150508-1.patch


 Some local variables in MonitoringThread.run(), {{vmemStillInUsage and 
 pmemStillInUsage}}, are only updated and never used. 
 Instead we should add a debug log for the overall resource usage of all 
 containers.





[jira] [Updated] (YARN-3600) AM container link is broken (on a killed application, at least)

2015-05-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3600:

Labels: BB2015-05-RFC  (was: )

 AM container link is broken (on a killed application, at least)
 ---

 Key: YARN-3600
 URL: https://issues.apache.org/jira/browse/YARN-3600
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Sergey Shelukhin
Assignee: Naganarasimha G R
  Labels: BB2015-05-RFC
 Attachments: YARN-3600.20150508-1.patch


 Running some fairly recent (couple weeks ago) version of 2.8.0-SNAPSHOT. 
 I have an application that ran fine for a while and then I yarn kill-ed it. 
 Now when I go to the only app attempt URL (like so: http://(snip RM host 
 name):8088/cluster/appattempt/appattempt_1429683757595_0795_01)
 I see:
 AM Container: container_1429683757595_0795_01_01
 Node: N/A 
 and the container link goes to {noformat}http://(snip RM host 
 name):8088/cluster/N/A
 {noformat}
 which obviously doesn't work





[jira] [Updated] (YARN-3148) allow CORS related headers to passthrough in WebAppProxyServlet

2015-05-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3148:
---
Attachment: (was: YARN-3148.02.patch)

 allow CORS related headers to passthrough in WebAppProxyServlet
 ---

 Key: YARN-3148
 URL: https://issues.apache.org/jira/browse/YARN-3148
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Prakash Ramachandran
Assignee: Varun Saxena
  Labels: BB2015-05-RFC
 Attachments: YARN-3148.001.patch


 Currently the WebAppProxyServlet filters the request headers as defined by 
 passThroughHeaders. The Tez UI is building a webapp which uses the REST API 
 to fetch data from the AM via the RM tracking URL. 
 For this purpose it would be nice to have additional headers allowed, 
 especially the ones related to CORS. A few that would help are: 
 * Origin
 * Access-Control-Request-Method
 * Access-Control-Request-Headers
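The requested change amounts to extending the proxy's header whitelist. The sketch below is illustrative only: the real passThroughHeaders set in WebAppProxyServlet differs, and ProxyHeaders/shouldForward are hypothetical names:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative header whitelist like the one the proxy servlet keeps;
// the three CORS-related entries at the end are the requested additions.
public class ProxyHeaders {

    public static final Set<String> PASS_THROUGH_HEADERS =
        new HashSet<>(Arrays.asList(
            "User-Agent", "Accept", "Accept-Encoding", "Accept-Language",
            // CORS request/preflight headers to forward to the AM:
            "Origin",
            "Access-Control-Request-Method",
            "Access-Control-Request-Headers"));

    // Only whitelisted headers are copied onto the proxied request.
    public static boolean shouldForward(String header) {
        return PASS_THROUGH_HEADERS.contains(header);
    }

    public static void main(String[] args) {
        System.out.println(shouldForward("Origin"));
    }
}
```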





[jira] [Commented] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534322#comment-14534322
 ] 

Tsuyoshi Ozawa commented on YARN-3169:
--

Thanks! Committing this.

 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Commented] (YARN-3432) Cluster metrics have wrong Total Memory when there is reserved memory on CS

2015-05-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534361#comment-14534361
 ] 

Akira AJISAKA commented on YARN-3432:
-

I think changing how total memory is calculated when the scheduler is the 
CapacityScheduler is straightforward:
{code}
  if (rs instanceof CapacityScheduler) {
this.totalMB = availableMB + allocatedMB + reservedMB;
  } else {
this.totalMB = availableMB + allocatedMB;
  }
{code}
Hi [~tgraves] and [~brahmareddy], what do you think?

 Cluster metrics have wrong Total Memory when there is reserved memory on CS
 ---

 Key: YARN-3432
 URL: https://issues.apache.org/jira/browse/YARN-3432
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Affects Versions: 2.6.0
Reporter: Thomas Graves
Assignee: Brahma Reddy Battula

 I noticed that when reservations happen while using the Capacity Scheduler, 
 the UI and web services report the wrong total memory.
 For example.  I have a 300GB of total memory in my cluster.  I allocate 50 
 and I reserve 10.  The cluster metrics for total memory get reported as 290GB.
 This was broken by https://issues.apache.org/jira/browse/YARN-656 so perhaps 
 there is a difference between fair scheduler and capacity scheduler.





[jira] [Commented] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534212#comment-14534212
 ] 

Tsuyoshi Ozawa commented on YARN-2784:
--

+1. [~devaraj.k] please commit it after Jenkins' CI.

 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784-branch-2.patch, YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have the project name set to 
 hadoop-mapreduce/hadoop-yarn. These can be made consistent across the Hadoop 
 project builds, e.g. 'Apache Hadoop YARN module-name' and 'Apache Hadoop 
 MapReduce module-name'.





[jira] [Resolved] (YARN-1432) Reduce phase is failing with shuffle error in kerberos enabled cluster

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith resolved YARN-1432.
--
Resolution: Not A Problem

Hi [~ramgopalnaali]. For different users to be able to run containers in a 
Kerberos setup, the LinuxContainerExecutor should be used. For a more detailed 
security setup, kindly refer to [Hadoop in Secure Mode | 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html]

 Reduce phase is failing with shuffle error in kerberos enabled cluster
 --

 Key: YARN-1432
 URL: https://issues.apache.org/jira/browse/YARN-1432
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramgopal N
  Labels: security

 {code}
 OS user: user3
 kerberos user: hdfs
 Reducer is trying to read the map intermediate output using kerberos 
 user(hdfs),but the owner of this file is OS user(user3)
 2013-11-21 20:35:48,169 ERROR org.apache.hadoop.mapred.ShuffleHandler: 
 Shuffle error :
 java.io.IOException: Error Reading IndexFile
   at 
 org.apache.hadoop.mapred.IndexCache.readIndexFileToCache(IndexCache.java:123)
   at 
 org.apache.hadoop.mapred.IndexCache.getIndexInformation(IndexCache.java:68)
   at 
 org.apache.hadoop.mapred.ShuffleHandler$Shuffle.sendMapOutput(ShuffleHandler.java:595)
   at 
 org.apache.hadoop.mapred.ShuffleHandler$Shuffle.messageReceived(ShuffleHandler.java:506)
   at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
   at 
 org.jboss.netty.handler.stream.ChunkedWriteHandler.handleUpstream(ChunkedWriteHandler.java:144)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
   at 
 org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:99)
   at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:754)
   at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302)
   at 
 org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndfireMessageReceived(ReplayingDecoder.java:523)
   at 
 org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:507)
   at 
 org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:444)
   at 
 org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)
   at 
 org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:540)
   at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)
   at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)
   at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350)
   at 
 org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)
   at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)
   at 
 org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
   at 
 org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: Owner 'user3' for path 
 /home/user3/NodeAgentTmpDir/data/mapred/nm-local-dir/usercache/hdfs/appcache/application_1385040658134_0011/output/attempt_1385040658134_0011_m_00_0/file.out.index
  did not match expected owner 'hdfs'
   at org.apache.hadoop.io.SecureIOUtils.checkStat(SecureIOUtils.java:285)
   at 
 org.apache.hadoop.io.SecureIOUtils.forceSecureOpenFSDataInputStream(SecureIOUtils.java:174)
   at 
 org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:158)
   at org.apache.hadoop.mapred.SpillRecord.init(SpillRecord.java:70)
   at 

[jira] [Commented] (YARN-3592) Fix typos in RMNodeLabelsManager

2015-05-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534259#comment-14534259
 ] 

Sunil G commented on YARN-3592:
---

Thank You [~devaraj.k] for commiting the same, and thank you Junping Du.

 Fix typos in RMNodeLabelsManager
 

 Key: YARN-3592
 URL: https://issues.apache.org/jira/browse/YARN-3592
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Junping Du
Assignee: Sunil G
  Labels: newbie
 Fix For: 2.8.0

 Attachments: 0001-YARN-3592.patch


 acccessibleNodeLabels = accessibleNodeLabels in many places.





[jira] [Commented] (YARN-1263) Clean up unused imports in TestFairScheduler after YARN-899

2015-05-08 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534268#comment-14534268
 ] 

Rohith commented on YARN-1263:
--

[~xgong] I looked at the TestFairScheduler class; only one unused import is 
there. Does this JIRA still need to be open?

 Clean up unused imports in TestFairScheduler after YARN-899
 ---

 Key: YARN-1263
 URL: https://issues.apache.org/jira/browse/YARN-1263
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Reporter: Sandy Ryza
Assignee: Xuan Gong

 YARN-899 added a bunch of unused imports to TestFairScheduler.  It might be 
 useful to check to see whether it added these in other files as well.





[jira] [Resolved] (YARN-3189) Yarn application usage command should not give -appstate and -apptype

2015-05-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R resolved YARN-3189.
-
Resolution: Won't Fix
  Assignee: (was: Naganarasimha G R)

Hi [~anushri],
The GenericOptionsParser does not support what you are expecting, so I am 
closing this issue. If you still feel we can solve it, please reopen.

 Yarn application usage command should not give -appstate and -apptype
 -

 Key: YARN-3189
 URL: https://issues.apache.org/jira/browse/YARN-3189
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Anushri
Priority: Minor
 Attachments: YARN-3189.patch, YARN-3189.patch, YARN-3189_1.patch


 Yarn application usage command should not give -appstate and -apptype, since 
 these two are applicable only to the --list command.
  *Can somebody please assign this issue to me* 





[jira] [Updated] (YARN-2821) Distributed shell app master becomes unresponsive sometimes

2015-05-08 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-2821:

Attachment: YARN-2821.003.patch

Uploaded YARN-2821.003.patch which fixes the failing tests and addresses some 
checkstyle complaints.

 Distributed shell app master becomes unresponsive sometimes
 ---

 Key: YARN-2821
 URL: https://issues.apache.org/jira/browse/YARN-2821
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Affects Versions: 2.5.1
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: YARN-2821.002.patch, YARN-2821.003.patch, 
 apache-yarn-2821.0.patch, apache-yarn-2821.1.patch


 We've noticed that once in a while the distributed shell app master becomes 
 unresponsive and is eventually killed by the RM. snippet of the logs -
 {noformat}
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: 
 appattempt_1415123350094_0017_01 received 0 previous attempts' running 
 containers on AM registration.
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:38 INFO impl.AMRMClientImpl: Received new token for : 
 onprem-tez2:45454
 14/11/04 18:21:38 INFO distributedshell.ApplicationMaster: Got response from 
 RM for container ask, allocatedCnt=1
 14/11/04 18:21:38 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_02, 
 containerNode=onprem-tez2:45454, containerNodeURI=onprem-tez2:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:38 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_02
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 START_CONTAINER for Container container_1415123350094_0017_01_02
 14/11/04 18:21:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
 onprem-tez2:45454
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 QUERY_CONTAINER for Container container_1415123350094_0017_01_02
 14/11/04 18:21:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
 onprem-tez2:45454
 14/11/04 18:21:39 INFO impl.AMRMClientImpl: Received new token for : 
 onprem-tez3:45454
 14/11/04 18:21:39 INFO impl.AMRMClientImpl: Received new token for : 
 onprem-tez4:45454
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Got response from 
 RM for container ask, allocatedCnt=3
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_03, 
 containerNode=onprem-tez2:45454, containerNodeURI=onprem-tez2:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_04, 
 containerNode=onprem-tez3:45454, containerNodeURI=onprem-tez3:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_05, 
 containerNode=onprem-tez4:45454, containerNodeURI=onprem-tez4:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_03
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_05
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_04
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 START_CONTAINER for Container container_1415123350094_0017_01_05
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 START_CONTAINER for Container container_1415123350094_0017_01_03
 14/11/04 18:21:39 INFO 

[jira] [Commented] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534317#comment-14534317
 ] 

Akira AJISAKA commented on YARN-3169:
-

Copied from 
https://builds.apache.org/job/PreCommit-YARN-Build/7795/artifact/patchprocess/commentfile
\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   2m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 21s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   2m 56s | Site still builds. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   6m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731407/YARN-3169-003.patch |
| Optional Tests | site |
| git revision | trunk / 4c6816f |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7795/console |


This message was automatically generated.


 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.





[jira] [Resolved] (YARN-1196) LocalDirsHandlerService never change failedDirs back to normal even when these disks turn good

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith resolved YARN-1196.
--
Resolution: Duplicate

Closing as duplicate.

 LocalDirsHandlerService never change failedDirs back to normal even when 
 these disks turn good
 --

 Key: YARN-1196
 URL: https://issues.apache.org/jira/browse/YARN-1196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.1.1-beta
Reporter: Nemon Lou

 A simple way to reproduce it:
 1. Change the access mode of one node manager's local-dirs to 000.
 After a few seconds, this node manager will become unhealthy.
 2. Change the access mode of the node manager's local-dirs back to normal.
 The node manager is still unhealthy, with all local-dirs in a bad state, even 
 after a long time.





[jira] [Updated] (YARN-3276) Refactor and fix null casting in some map cast for TimelineEntity (old and new) and fix findbug warnings

2015-05-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-3276:
-
Summary: Refactor and fix null casting in some map cast for TimelineEntity 
(old and new) and fix findbug warnings  (was: Refactor and fix null casting in 
some map cast for TimelineEntity (old and new))

 Refactor and fix null casting in some map cast for TimelineEntity (old and 
 new) and fix findbug warnings
 

 Key: YARN-3276
 URL: https://issues.apache.org/jira/browse/YARN-3276
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-3276-YARN-2928.v3.patch, 
 YARN-3276-YARN-2928.v4.patch, YARN-3276-v2.patch, YARN-3276-v3.patch, 
 YARN-3276.patch


 Per discussion in YARN-3087, we need to refactor some similar logic to cast 
 map to hashmap and get rid of NPE issue.





[jira] [Updated] (YARN-3276) Refactor and fix null casting in some map cast for TimelineEntity (old and new)

2015-05-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-3276:
-
Attachment: YARN-3276-YARN-2928.v4.patch

Updated the patch to address the whitespace and checkstyle issues. Also fixed 
4 FindBugs warnings on the existing YARN-2928 branch.

 Refactor and fix null casting in some map cast for TimelineEntity (old and 
 new)
 ---

 Key: YARN-3276
 URL: https://issues.apache.org/jira/browse/YARN-3276
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-3276-YARN-2928.v3.patch, 
 YARN-3276-YARN-2928.v4.patch, YARN-3276-v2.patch, YARN-3276-v3.patch, 
 YARN-3276.patch


 Per discussion in YARN-3087, we need to refactor some similar logic to cast 
 map to hashmap and get rid of NPE issue.





[jira] [Updated] (YARN-3600) AM container link is broken (on a killed application, at least)

2015-05-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3600:

Attachment: YARN-3600.20150508-1.patch

It was failing in both cases, when the app is completed or killed, and null was 
also displayed when the AM container is not launched. Uploading a patch with the 
correction. [~devaraj.k], can you please take a look?

 AM container link is broken (on a killed application, at least)
 ---

 Key: YARN-3600
 URL: https://issues.apache.org/jira/browse/YARN-3600
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Sergey Shelukhin
Assignee: Naganarasimha G R
 Attachments: YARN-3600.20150508-1.patch


 Running some fairly recent (couple weeks ago) version of 2.8.0-SNAPSHOT. 
 I have an application that ran fine for a while and then I yarn kill-ed it. 
 Now when I go to the only app attempt URL (like so: http://(snip RM host 
 name):8088/cluster/appattempt/appattempt_1429683757595_0795_01)
 I see:
 AM Container: container_1429683757595_0795_01_01
 Node: N/A 
 and the container link goes to {noformat}http://(snip RM host 
 name):8088/cluster/N/A
 {noformat}
 which obviously doesn't work



--


[jira] [Commented] (YARN-1252) Secure RM fails to start up in secure HA setup with Renewal request for unknown token exception

2015-05-08 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534271#comment-14534271
 ] 

Rohith commented on YARN-1252:
--

As there has been no activity for a long time and, per the above analysis, this 
should be solved by YARN-674, I am closing the issue as a duplicate. Feel free 
to reopen in case the issue still exists.

 Secure RM fails to start up in secure HA setup with Renewal request for 
 unknown token exception
 ---

 Key: YARN-1252
 URL: https://issues.apache.org/jira/browse/YARN-1252
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi

 {code}
 2013-09-26 08:15:20,507 INFO  ipc.Server (Server.java:run(861)) - IPC Server 
 Responder: starting
 2013-09-26 08:15:20,521 ERROR security.UserGroupInformation 
 (UserGroupInformation.java:doAs(1486)) - PriviledgedActionException 
 as:rm/host@realm (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Renewal 
 request for unknown token
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:388)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewDelegationToken(FSNamesystem.java:5934)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewDelegationToken(NameNodeRpcServer.java:453)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59650)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1483)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042
 {code}



--


[jira] [Commented] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534272#comment-14534272
 ] 

Devaraj K commented on YARN-2784:
-

+1, looks good to me, pending Jenkins.

 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784-branch-2.patch, YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have the project name set to 
 hadoop-mapreduce/hadoop-yarn. This can be made consistent across the Hadoop 
 projects build, like 'Apache Hadoop YARN module-name' and 'Apache Hadoop 
 MapReduce module-name'.



--


[jira] [Commented] (YARN-3593) Modifications in Node Labels Page for Partitions

2015-05-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534283#comment-14534283
 ] 

Naganarasimha G R commented on YARN-3593:
-

Hi [~wangda],
Can you please take a look at this?

 Modifications in Node Labels Page for Partitions
 

 Key: YARN-3593
 URL: https://issues.apache.org/jira/browse/YARN-3593
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: webapp
Affects Versions: 2.6.0, 2.7.0
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
 Attachments: NodeLabelsPageModifications.png, 
 YARN-3593.20150507-1.patch


 Need to support displaying the label type on the Node Labels page, and also, 
 instead of NO_LABEL, we need to show DEFAULT_PARTITION.



--


[jira] [Commented] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534324#comment-14534324
 ] 

Akira AJISAKA commented on YARN-3169:
-

Thanks Tsuyoshi.

 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.



--


[jira] [Commented] (YARN-1329) yarn-config.sh overwrites YARN_CONF_DIR indiscriminately

2015-05-08 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534350#comment-14534350
 ] 

Rohith commented on YARN-1329:
--

Does the issue still exist in trunk? I don't see it there. Am I missing 
anything?

 yarn-config.sh overwrites YARN_CONF_DIR indiscriminately 
 -

 Key: YARN-1329
 URL: https://issues.apache.org/jira/browse/YARN-1329
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Reporter: Aaron Gottlieb
Assignee: haosdent
  Labels: BB2015-05-TBR, easyfix
 Attachments: YARN-1329.patch


 The script yarn-daemons.sh calls 
 {code}${HADOOP_LIBEXEC_DIR}/yarn-config.sh{code}
 yarn-config.sh overwrites any previously set value of environment variable 
 YARN_CONF_DIR starting at line 40:
 {code:title=yarn-config.sh|borderStyle=solid}
 #check to see if the conf dir is given as an optional argument
 if [ $# -gt 1 ]
 then
 if [ "--config" = "$1" ]
 then
 shift
 confdir=$1
 shift
 YARN_CONF_DIR=$confdir
 fi
 fi
  
 # Allow alternate conf dir location.
 export YARN_CONF_DIR=${HADOOP_CONF_DIR:-$HADOOP_YARN_HOME/conf}
 {code}
 The last line should check for the existence of YARN_CONF_DIR first.
 {code}
 DEFAULT_CONF_DIR=${HADOOP_CONF_DIR:-$YARN_HOME/conf}
 export YARN_CONF_DIR=${YARN_CONF_DIR:-$DEFAULT_CONF_DIR}
 {code}



--


[jira] [Updated] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-3169:
-
Summary: Drop YARN's overview document  (was: drop the useless yarn 
overview document)

 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.



--


[jira] [Updated] (YARN-3148) allow CORS related headers to passthrough in WebAppProxyServlet

2015-05-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3148:
---
Attachment: YARN-3148.02.patch

 allow CORS related headers to passthrough in WebAppProxyServlet
 ---

 Key: YARN-3148
 URL: https://issues.apache.org/jira/browse/YARN-3148
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Prakash Ramachandran
Assignee: Varun Saxena
 Attachments: YARN-3148.001.patch, YARN-3148.02.patch


 Currently the WebAppProxyServlet filters the request headers as defined by 
 passThroughHeaders. Tez UI is building a webapp which uses the REST API to 
 fetch data from the AM via the RM tracking URL. 
 For this purpose it would be nice to have additional headers allowed, 
 especially the ones related to CORS. A few that would help are: 
 * Origin
 * Access-Control-Request-Method
 * Access-Control-Request-Headers
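A rough sketch of the allow-list extension (hypothetical class and header set; the real WebAppProxyServlet keeps its own passThroughHeaders list):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class HeaderPassThrough {
  // Illustrative allow-list: a few headers a proxy commonly forwards,
  // extended with the CORS-related request headers proposed in this JIRA.
  private static final Set<String> PASS_THROUGH = new HashSet<>(Arrays.asList(
      "User-Agent", "Accept", "Accept-Encoding", "Accept-Language",
      "Origin", "Access-Control-Request-Method", "Access-Control-Request-Headers"));

  // A request header is forwarded to the AM only if it is on the list.
  public static boolean shouldForward(String header) {
    return PASS_THROUGH.contains(header);
  }

  public static void main(String[] args) {
    System.out.println(shouldForward("Origin"));  // true
    System.out.println(shouldForward("Cookie"));  // false
  }
}
```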



--


[jira] [Commented] (YARN-3489) RMServerUtils.validateResourceRequests should only obtain queue info once

2015-05-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534238#comment-14534238
 ] 

Varun Saxena commented on YARN-3489:


[~vinodkv], as the current API is called in places where there is only one 
resource request, the queue info need not be passed.
Some sort of API change will be required to pass the queue info only once for a 
list of resource requests. Thoughts?



 RMServerUtils.validateResourceRequests should only obtain queue info once
 -

 Key: YARN-3489
 URL: https://issues.apache.org/jira/browse/YARN-3489
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Attachments: YARN-3489.01.patch, YARN-3489.02.patch


 Since the label support was added we now get the queue info for each request 
 being validated in SchedulerUtils.validateResourceRequest.  If 
 validateResourceRequests needs to validate a lot of requests at a time (e.g.: 
 large cluster with lots of varied locality in the requests) then it will get 
 the queue info for each request.  Since we build the queue info this 
 generates a lot of unnecessary garbage, as the queue isn't changing between 
 requests.  We should grab the queue info once and pass it down rather than 
 building it again for each request.
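The fix the description suggests, fetching the queue info once and passing it down, can be sketched with stand-in types (not the real RM classes):

```java
import java.util.Arrays;
import java.util.List;

public class ValidateOnce {
  // Stand-ins for the real RM types; names are illustrative only.
  static class QueueInfo { final String name; QueueInfo(String n) { name = n; } }
  static class ResourceRequest { final String queue; ResourceRequest(String q) { queue = q; } }

  static int lookups = 0;

  static QueueInfo getQueueInfo(String queue) {
    lookups++;  // expensive in the real scheduler: builds queue info objects
    return new QueueInfo(queue);
  }

  // Hoist the lookup out of the per-request loop: one queue info object is
  // built and reused for every request, instead of once per request.
  static void validateAll(List<ResourceRequest> requests, String queue) {
    QueueInfo info = getQueueInfo(queue);
    for (ResourceRequest r : requests) {
      validate(r, info);
    }
  }

  static void validate(ResourceRequest r, QueueInfo info) {
    if (info == null) throw new IllegalArgumentException("unknown queue");
  }

  public static void main(String[] args) {
    validateAll(Arrays.asList(
        new ResourceRequest("default"), new ResourceRequest("default")), "default");
    System.out.println(lookups);  // 1, not 2
  }
}
```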



--


[jira] [Commented] (YARN-1347) QueueMetrics.pendingContainers can become negative

2015-05-08 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534245#comment-14534245
 ] 

Rohith commented on YARN-1347:
--

[~jlowe], does this issue still occur in the latest releases as well? If it 
does not, can it be resolved?

 QueueMetrics.pendingContainers can become negative
 --

 Key: YARN-1347
 URL: https://issues.apache.org/jira/browse/YARN-1347
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.9
Reporter: Jason Lowe

 One of our 0.23 clusters is currently reporting 
 QueueMetrics.pendingContainers as a negative value.  It's unclear what chain 
 of events led to the metric becoming negative.



--


[jira] [Resolved] (YARN-1252) Secure RM fails to start up in secure HA setup with Renewal request for unknown token exception

2015-05-08 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith resolved YARN-1252.
--
Resolution: Duplicate

 Secure RM fails to start up in secure HA setup with Renewal request for 
 unknown token exception
 ---

 Key: YARN-1252
 URL: https://issues.apache.org/jira/browse/YARN-1252
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.1-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi

 {code}
 2013-09-26 08:15:20,507 INFO  ipc.Server (Server.java:run(861)) - IPC Server 
 Responder: starting
 2013-09-26 08:15:20,521 ERROR security.UserGroupInformation 
 (UserGroupInformation.java:doAs(1486)) - PriviledgedActionException 
 as:rm/host@realm (auth:KERBEROS) 
 cause:org.apache.hadoop.security.token.SecretManager$InvalidToken: Renewal 
 request for unknown token
 at 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:388)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewDelegationToken(FSNamesystem.java:5934)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewDelegationToken(NameNodeRpcServer.java:453)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:851)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59650)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1483)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042
 {code}



--


[jira] [Updated] (YARN-3148) allow CORS related headers to passthrough in WebAppProxyServlet

2015-05-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3148:
---
Attachment: YARN-3148.02.patch

 allow CORS related headers to passthrough in WebAppProxyServlet
 ---

 Key: YARN-3148
 URL: https://issues.apache.org/jira/browse/YARN-3148
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Prakash Ramachandran
Assignee: Varun Saxena
  Labels: BB2015-05-RFC
 Attachments: YARN-3148.001.patch, YARN-3148.02.patch


 Currently the WebAppProxyServlet filters the request headers as defined by 
 passThroughHeaders. Tez UI is building a webapp which uses the REST API to 
 fetch data from the AM via the RM tracking URL. 
 For this purpose it would be nice to have additional headers allowed, 
 especially the ones related to CORS. A few that would help are: 
 * Origin
 * Access-Control-Request-Method
 * Access-Control-Request-Headers



--


[jira] [Commented] (YARN-3134) [Storage implementation] Exploiting the option of using Phoenix to access HBase backend

2015-05-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534291#comment-14534291
 ] 

Junping Du commented on YARN-3134:
--

bq. For now, I'm removing the connection cache to make the first step right. 
I'll change the description of YARN-3595 for the connection cache. 
+1. The plan sounds reasonable.

 [Storage implementation] Exploiting the option of using Phoenix to access 
 HBase backend
 ---

 Key: YARN-3134
 URL: https://issues.apache.org/jira/browse/YARN-3134
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Zhijie Shen
Assignee: Li Lu
 Attachments: SettingupPhoenixstorageforatimelinev2end-to-endtest.pdf, 
 YARN-3134-040915_poc.patch, YARN-3134-041015_poc.patch, 
 YARN-3134-041415_poc.patch, YARN-3134-042115.patch, YARN-3134-042715.patch, 
 YARN-3134-YARN-2928.001.patch, YARN-3134-YARN-2928.002.patch, 
 YARN-3134-YARN-2928.003.patch, YARN-3134-YARN-2928.004.patch, 
 YARN-3134-YARN-2928.005.patch, YARN-3134-YARN-2928.006.patch, 
 YARN-3134DataSchema.pdf, 
 hadoop-zshen-nodemanager-d-128-95-184-84.dhcp4.washington.edu.out


 Quote the introduction on Phoenix web page:
 {code}
 Apache Phoenix is a relational database layer over HBase delivered as a 
 client-embedded JDBC driver targeting low latency queries over HBase data. 
 Apache Phoenix takes your SQL query, compiles it into a series of HBase 
 scans, and orchestrates the running of those scans to produce regular JDBC 
 result sets. The table metadata is stored in an HBase table and versioned, 
 such that snapshot queries over prior versions will automatically use the 
 correct schema. Direct use of the HBase API, along with coprocessors and 
 custom filters, results in performance on the order of milliseconds for small 
 queries, or seconds for tens of millions of rows.
 {code}
 It may simplify how our implementation reads/writes data from/to HBase, and 
 make it easy to build indexes and compose complex queries.



--


[jira] [Commented] (YARN-2287) Add audit log levels for NM and RM

2015-05-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534307#comment-14534307
 ] 

Varun Saxena commented on YARN-2287:


To ensure that users who need to print these audit logs can do so at the DEBUG 
level. In our production setup, we disable these logs as they appear too 
frequently.
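A toy sketch of the level-gated audit logging this JIRA proposes (hypothetical names; the real NM/RM audit loggers are built on Commons Logging and would use its level checks):

```java
public class AuditLogger {
  // Illustrative level gate, not the actual RMAuditLogger/NMAuditLogger API.
  enum Level { DEBUG, INFO }

  static Level threshold = Level.INFO;
  static int emitted = 0;

  // Container audit events are high-volume, so they are logged at DEBUG and
  // suppressed unless the deployment lowers the threshold.
  static void logContainerEvent(String msg) {
    if (threshold == Level.DEBUG) {
      emitted++;
      System.out.println("AUDIT DEBUG: " + msg);
    }
  }

  public static void main(String[] args) {
    logContainerEvent("container launched");  // suppressed at INFO
    threshold = Level.DEBUG;
    logContainerEvent("container launched");  // emitted
    System.out.println(emitted);              // 1
  }
}
```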

 Add audit log levels for NM and RM
 --

 Key: YARN-2287
 URL: https://issues.apache.org/jira/browse/YARN-2287
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, resourcemanager
Affects Versions: 2.4.1
Reporter: Varun Saxena
Assignee: Varun Saxena
 Attachments: YARN-2287-patch-1.patch, YARN-2287.patch


 NM and RM audit logging can be done based on log level, as some of the audit 
 logs, especially the container audit logs, appear too many times. By 
 introducing a log level, certain audit logs can be suppressed if not required 
 in a deployment.



--


[jira] [Commented] (YARN-3358) Audit log not present while refreshing Service ACLs

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534536#comment-14534536
 ] 

Hudson commented on YARN-3358:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/190/])
YARN-3358. Audit log not present while refreshing Service ACLs. (devaraj: rev 
ef3d66d4624d360e75c016e36824a6782d6a9746)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
* hadoop-yarn-project/CHANGES.txt


 Audit log not present while refreshing Service ACLs
 ---

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.1

 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls



--


[jira] [Commented] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534541#comment-14534541
 ] 

Hudson commented on YARN-3169:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/190/])
YARN-3169. Drop YARN's overview document. Contributed by Brahma Reddy Battula. 
(ozawa: rev b419c1b2ec452c2632274e93240b595161fb023b)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/index.md
* hadoop-project/src/site/site.xml
* hadoop-yarn-project/CHANGES.txt


 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.



--


[jira] [Commented] (YARN-1832) Fix wrong MockLocalizerStatus#equals implementation

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534524#comment-14534524
 ] 

Hudson commented on YARN-1832:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/190/])
YARN-1832. Fix wrong MockLocalizerStatus#equals implementation. Contributed by 
Hong Zhiguo. (aajisaka: rev b167fe7605deb29ec533047d79d036eb65328853)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/MockLocalizerStatus.java
* hadoop-yarn-project/CHANGES.txt


 Fix wrong MockLocalizerStatus#equals implementation
 ---

 Key: YARN-1832
 URL: https://issues.apache.org/jira/browse/YARN-1832
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Hong Zhiguo
Assignee: Hong Zhiguo
Priority: Minor
 Fix For: 2.8.0

 Attachments: YARN-1832.patch


 return getLocalizerId().equals(other) && ...; should be
 return getLocalizerId().equals(other.getLocalizerId()) && ...;
 getLocalizerId() returns a String. It's expected to compare 
 this.getLocalizerId() against other.getLocalizerId().
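A minimal stand-in class illustrating the fix (not the actual MockLocalizerStatus, which compares several fields):

```java
public class LocalizerStatus {
  private final String localizerId;

  LocalizerStatus(String id) { localizerId = id; }

  String getLocalizerId() { return localizerId; }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof LocalizerStatus)) return false;
    LocalizerStatus other = (LocalizerStatus) obj;
    // The bug: comparing the id String against the whole other object
    // (getLocalizerId().equals(other)) is always false.
    // The fix: compare id against id.
    return getLocalizerId().equals(other.getLocalizerId());
  }

  @Override
  public int hashCode() { return localizerId.hashCode(); }

  public static void main(String[] args) {
    System.out.println(new LocalizerStatus("c1").equals(new LocalizerStatus("c1")));  // true
  }
}
```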



--


[jira] [Commented] (YARN-2900) Application (Attempt and Container) Not Found in AHS results in Internal Server Error (500)

2015-05-08 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534559#comment-14534559
 ] 

Mit Desai commented on YARN-2900:
-

I'll update it shortly

 Application (Attempt and Container) Not Found in AHS results in Internal 
 Server Error (500)
 ---

 Key: YARN-2900
 URL: https://issues.apache.org/jira/browse/YARN-2900
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Attachments: YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch


 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.convertToApplicationReport(ApplicationHistoryManagerImpl.java:128)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.getApplication(ApplicationHistoryManagerImpl.java:118)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:222)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:219)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices.getApp(WebServices.java:218)
   ... 59 more



--


[jira] [Commented] (YARN-3587) Fix the javadoc of DelegationTokenSecretManager in yarn project

2015-05-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534625#comment-14534625
 ] 

Junping Du commented on YARN-3587:
--

I think there are typically 3 reasons we would like to split a patch across 
different projects:
- We would like to see Jenkins results against different projects (HDFS, 
MAPREDUCE, YARN). 
- Committers active on different projects can help to review different parts 
of the code.
- The change, if involved in any bug, can be easily traced through the 
patch history.
For a small patch like this one, I don't see any need to split it into 
different sub-projects. [~vinodkv], [~ajisakaa] and [~gliptak], what do you 
think?


 Fix the javadoc of DelegationTokenSecretManager in yarn project
 ---

 Key: YARN-3587
 URL: https://issues.apache.org/jira/browse/YARN-3587
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Gabor Liptak
Priority: Minor
  Labels: newbie

 In RMDelegationTokenSecretManager and TimelineDelegationTokenSecretManager,  
 the javadoc of the constructor is as follows:
 {code}
   /**
* Create a secret manager
* @param delegationKeyUpdateInterval the number of seconds for rolling new
*secret keys.
* @param delegationTokenMaxLifetime the maximum lifetime of the delegation
*tokens
* @param delegationTokenRenewInterval how often the tokens must be renewed
* @param delegationTokenRemoverScanInterval how often the tokens are 
 scanned
*for expired tokens
*/
 {code}
 1. The number of seconds should be the number of milliseconds.
 2. It's better to add the time unit to the descriptions of the other parameters.
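A sketch of how the corrected javadoc would read, with the unit made explicit (illustrative class, not the actual secret manager):

```java
import java.util.concurrent.TimeUnit;

public class TokenIntervals {
  final long keyUpdateIntervalMs;

  /**
   * Create a secret manager.
   *
   * @param delegationKeyUpdateInterval the number of milliseconds between
   *        rolling new secret keys (the old javadoc incorrectly said seconds)
   */
  TokenIntervals(long delegationKeyUpdateInterval) {
    this.keyUpdateIntervalMs = delegationKeyUpdateInterval;
  }

  public static void main(String[] args) {
    // Pass one day expressed in milliseconds, as the parameter expects.
    TokenIntervals t = new TokenIntervals(TimeUnit.DAYS.toMillis(1));
    System.out.println(t.keyUpdateIntervalMs);  // 86400000
  }
}
```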



--


[jira] [Updated] (YARN-3601) Fix UT TestRMFailover.testRMWebAppRedirect

2015-05-08 Thread Yang Weiwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Weiwei updated YARN-3601:
--
Description: This test case was not working since the commit from 
YARN-2605. It failed with NPE exception.  (was: This test case was not working 
since the commit from YARN-2605. )

 Fix UT TestRMFailover.testRMWebAppRedirect
 --

 Key: YARN-3601
 URL: https://issues.apache.org/jira/browse/YARN-3601
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
 Environment: Red Hat Enterprise Linux Workstation release 6.5 
 (Santiago)
Reporter: Yang Weiwei
Assignee: Yang Weiwei
Priority: Critical
  Labels: BB2015-05-TBR, test

 This test case was not working since the commit from YARN-2605. It failed 
 with NPE exception.



--


[jira] [Resolved] (YARN-1347) QueueMetrics.pendingContainers can become negative

2015-05-08 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved YARN-1347.
--
Resolution: Cannot Reproduce

I am no longer seeing this on Hadoop 2.5 and later releases.

 QueueMetrics.pendingContainers can become negative
 --

 Key: YARN-1347
 URL: https://issues.apache.org/jira/browse/YARN-1347
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.9
Reporter: Jason Lowe

 One of our 0.23 clusters is currently reporting 
 QueueMetrics.pendingContainers as a negative value.  It's unclear what chain 
 of events led to the metric becoming negative.



--


[jira] [Updated] (YARN-3513) Remove unused variables in ContainersMonitorImpl and add debug log for overall resource usage by all containers

2015-05-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3513:

Issue Type: Improvement  (was: Bug)

 Remove unused variables in ContainersMonitorImpl and add debug log for 
 overall resource usage by all containers 
 

 Key: YARN-3513
 URL: https://issues.apache.org/jira/browse/YARN-3513
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
Priority: Trivial
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-3513.20150421-1.patch, YARN-3513.20150503-1.patch, 
 YARN-3513.20150506-1.patch, YARN-3513.20150507-1.patch, 
 YARN-3513.20150508-1.patch, YARN-3513.20150508-1.patch


 Some local variables in MonitoringThread.run(), {{vmemStillInUsage and 
 pmemStillInUsage}}, are not used, only updated. 
 Instead we need to add a debug log for the overall resource usage by all containers.



--


[jira] [Commented] (YARN-3148) allow CORS related headers to passthrough in WebAppProxyServlet

2015-05-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534706#comment-14534706
 ] 

Varun Saxena commented on YARN-3148:


Yes. It doesn't happen for response headers.

 allow CORS related headers to passthrough in WebAppProxyServlet
 ---

 Key: YARN-3148
 URL: https://issues.apache.org/jira/browse/YARN-3148
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Prakash Ramachandran
Assignee: Varun Saxena
  Labels: BB2015-05-RFC
 Attachments: YARN-3148.001.patch, YARN-3148.02.patch


 Currently the WebAppProxyServlet filters the request headers as defined by 
 passThroughHeaders. Tez UI is building a webapp which uses the REST API to 
 fetch data from the AM via the RM tracking URL. 
 For this purpose it would be nice to have additional headers allowed, 
 especially the ones related to CORS. A few that would help are: 
 * Origin
 * Access-Control-Request-Method
 * Access-Control-Request-Headers



--


[jira] [Commented] (YARN-3572) Correct typos in WritingYarnApplications.md

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534515#comment-14534515
 ] 

Hudson commented on YARN-3572:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #190 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/190/])
YARN-3572. Correct typos in WritingYarnApplications.md. Contributed by Gabor 
Liptak. (aajisaka: rev a521b509551e092dfeb38cdf29bb96556d3e0266)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
* hadoop-yarn-project/CHANGES.txt


 Correct typos in WritingYarnApplications.md
 ---

 Key: YARN-3572
 URL: https://issues.apache.org/jira/browse/YARN-3572
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Sandeep Khurana
Assignee: Gabor Liptak
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: YARN-3572.patch


 The documentation at 
 http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html#Sample_Code
  has a couple of issues.
 In the FAQ section: 
 Url packageUrl = ConverterUtils.getYarnUrlFromPath(
 FileContext.getFileContext.makeQualified(new Path(packagePath)));
 1) Url packageUrl should be changed to URL packageUrl 
 2)  FileContext.getFileContext should be  FileContext.getFileContext()  
 (brackets at the end) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3170) YARN architecture document needs updating

2015-05-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated YARN-3170:
---
Attachment: YARN-3170-003.patch

 YARN architecture document needs updating
 -

 Key: YARN-3170
 URL: https://issues.apache.org/jira/browse/YARN-3170
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3170-002.patch, YARN-3170-003.patch, YARN-3170.patch


 The marketing paragraph at the top, NextGen MapReduce, etc. are all 
 marketing rather than actual descriptions. It also needs some general 
 updates, especially given that it reads as though 0.23 was just released yesterday.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3513) Remove unused variables in ContainersMonitorImpl and add debug log for overall resource usage by all containers

2015-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534609#comment-14534609
 ] 

Hadoop QA commented on YARN-3513:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  1s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 47s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 49s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 38s | The applied patch generated  1 
new checkstyle issues (total was 27, now 27). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | yarn tests |   6m 11s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| | |  42m 49s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731425/YARN-3513.20150508-1.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7b1ea9c |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/7803/artifact/patchprocess/diffcheckstylehadoop-yarn-server-nodemanager.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7803/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/7803/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7803/console |


This message was automatically generated.

 Remove unused variables in ContainersMonitorImpl and add debug log for 
 overall resource usage by all containers 
 

 Key: YARN-3513
 URL: https://issues.apache.org/jira/browse/YARN-3513
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
Priority: Trivial
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-3513.20150421-1.patch, YARN-3513.20150503-1.patch, 
 YARN-3513.20150506-1.patch, YARN-3513.20150507-1.patch, 
 YARN-3513.20150508-1.patch, YARN-3513.20150508-1.patch


 Some local variables in MonitoringThread.run(), {{vmemStillInUsage}} and 
 {{pmemStillInUsage}}, are only updated and never read. 
 Instead we need to add a debug log for the overall resource usage by all containers.
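
A sketch of the direction the patch takes: rather than per-loop accumulators that are never read, aggregate usage across all monitored containers and emit one summary line. The class and field names below are illustrative, not the actual ContainersMonitorImpl code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AggregateUsageLog {
    /** Illustrative per-container usage snapshot. */
    static class Usage {
        final long vmemBytes;
        final long pmemBytes;
        Usage(long vmemBytes, long pmemBytes) {
            this.vmemBytes = vmemBytes;
            this.pmemBytes = pmemBytes;
        }
    }

    /** Build one summary line covering all monitored containers. */
    static String summarize(Map<String, Usage> usages) {
        long vmem = 0, pmem = 0;
        for (Usage u : usages.values()) {
            vmem += u.vmemBytes;
            pmem += u.pmemBytes;
        }
        return String.format(
            "Total resource usage for %d containers: %d bytes vmem, %d bytes pmem",
            usages.size(), vmem, pmem);
    }

    public static void main(String[] args) {
        Map<String, Usage> usages = new LinkedHashMap<>();
        usages.put("container_01", new Usage(1024, 512));
        usages.put("container_02", new Usage(2048, 1024));
        // In the real monitor this would be guarded by LOG.isDebugEnabled()
        System.out.println(summarize(usages));
    }
}
```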



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3601) Fix UT TestRMFailover.testRMWebAppRedirect

2015-05-08 Thread Yang Weiwei (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534638#comment-14534638
 ] 

Yang Weiwei commented on YARN-3601:
---

YARN-2605 changed the response header in RMWebAppFilter: it used to use 
Refresh but now uses Location. The test case needs to be revised to provide test 
coverage for webapp redirection.
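
The test fix comes down to reading the right response header. A self-contained sketch of a helper that prefers the standard Location header and falls back to the older Refresh form ("n; url=..."); the helper name is illustrative, not the actual test code:

```java
import java.util.List;
import java.util.Map;

public class RedirectTarget {
    /**
     * Extract the redirect URL from response headers, preferring
     * Location (used since YARN-2605) over the legacy Refresh header.
     */
    static String redirectUrl(Map<String, List<String>> headers) {
        List<String> location = headers.get("Location");
        if (location != null && !location.isEmpty()) {
            return location.get(0);
        }
        List<String> refresh = headers.get("Refresh");
        if (refresh != null && !refresh.isEmpty()) {
            // Refresh values look like "3; url=http://host:8088/..."
            String value = refresh.get(0);
            int idx = value.indexOf("url=");
            return idx >= 0 ? value.substring(idx + 4).trim() : null;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(redirectUrl(
            Map.of("Location", List.of("http://rm:8088/cluster"))));
    }
}
```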

 Fix UT TestRMFailover.testRMWebAppRedirect
 --

 Key: YARN-3601
 URL: https://issues.apache.org/jira/browse/YARN-3601
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
 Environment: Red Hat Enterprise Linux Workstation release 6.5 
 (Santiago)
Reporter: Yang Weiwei
Assignee: Yang Weiwei
Priority: Critical
  Labels: BB2015-05-TBR, test

 This test case has not been working since the commit from YARN-2605. It fails 
 with an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2821) Distributed shell app master becomes unresponsive sometimes

2015-05-08 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534637#comment-14534637
 ] 

Varun Vasudev commented on YARN-2821:
-

[~jianhe] - can you please review? Thanks!

 Distributed shell app master becomes unresponsive sometimes
 ---

 Key: YARN-2821
 URL: https://issues.apache.org/jira/browse/YARN-2821
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Affects Versions: 2.5.1
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: YARN-2821.002.patch, YARN-2821.003.patch, 
 apache-yarn-2821.0.patch, apache-yarn-2821.1.patch


 We've noticed that once in a while the distributed shell app master becomes 
 unresponsive and is eventually killed by the RM. snippet of the logs -
 {noformat}
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: 
 appattempt_1415123350094_0017_01 received 0 previous attempts' running 
 containers on AM registration.
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:37 INFO distributedshell.ApplicationMaster: Requested 
 container ask: Capability[memory:10, vCores:1]Priority[0]
 14/11/04 18:21:38 INFO impl.AMRMClientImpl: Received new token for : 
 onprem-tez2:45454
 14/11/04 18:21:38 INFO distributedshell.ApplicationMaster: Got response from 
 RM for container ask, allocatedCnt=1
 14/11/04 18:21:38 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_02, 
 containerNode=onprem-tez2:45454, containerNodeURI=onprem-tez2:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:38 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_02
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 START_CONTAINER for Container container_1415123350094_0017_01_02
 14/11/04 18:21:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
 onprem-tez2:45454
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 QUERY_CONTAINER for Container container_1415123350094_0017_01_02
 14/11/04 18:21:39 INFO impl.ContainerManagementProtocolProxy: Opening proxy : 
 onprem-tez2:45454
 14/11/04 18:21:39 INFO impl.AMRMClientImpl: Received new token for : 
 onprem-tez3:45454
 14/11/04 18:21:39 INFO impl.AMRMClientImpl: Received new token for : 
 onprem-tez4:45454
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Got response from 
 RM for container ask, allocatedCnt=3
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_03, 
 containerNode=onprem-tez2:45454, containerNodeURI=onprem-tez2:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_04, 
 containerNode=onprem-tez3:45454, containerNodeURI=onprem-tez3:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Launching shell 
 command on a new container., 
 containerId=container_1415123350094_0017_01_05, 
 containerNode=onprem-tez4:45454, containerNodeURI=onprem-tez4:50060, 
 containerResourceMemory1024, containerResourceVirtualCores1
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_03
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_05
 14/11/04 18:21:39 INFO distributedshell.ApplicationMaster: Setting up 
 container launch container for 
 containerid=container_1415123350094_0017_01_04
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 START_CONTAINER for Container container_1415123350094_0017_01_05
 14/11/04 18:21:39 INFO impl.NMClientAsyncImpl: Processing Event EventType: 
 START_CONTAINER for Container container_1415123350094_0017_01_03
 14/11/04 18:21:39 INFO impl.ContainerManagementProtocolProxy: Opening 

[jira] [Commented] (YARN-3554) Default value for maximum nodemanager connect wait time is too high

2015-05-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534710#comment-14534710
 ] 

Naganarasimha G R commented on YARN-3554:
-

Thanks for reviewing and committing the patch [~jlowe], [~vinodkv]  
[~gtCarrera9] 

 Default value for maximum nodemanager connect wait time is too high
 ---

 Key: YARN-3554
 URL: https://issues.apache.org/jira/browse/YARN-3554
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Naganarasimha G R
  Labels: BB2015-05-RFC, newbie
 Fix For: 2.7.1

 Attachments: YARN-3554-20150429-2.patch, YARN-3554.20150429-1.patch


 The default value for yarn.client.nodemanager-connect.max-wait-ms is 900000 
 msec or 15 minutes, which is way too high.  The default container expiry time 
 from the RM and the default task timeout in MapReduce are both only 10 
 minutes.
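
The mismatch is easy to check numerically: a 15-minute connect wait outlasts both 10-minute timeouts mentioned above. A minimal sketch (the constants mirror the values quoted in this issue):

```java
import java.util.concurrent.TimeUnit;

public class ConnectWaitCheck {
    public static void main(String[] args) {
        // yarn.client.nodemanager-connect.max-wait-ms default per this JIRA
        long maxWaitMs = 900_000L;
        // 10-minute RM container expiry / MR task timeout defaults
        long containerExpiryMs = 600_000L;

        System.out.println(TimeUnit.MILLISECONDS.toMinutes(maxWaitMs)); // 15
        System.out.println(maxWaitMs > containerExpiryMs);              // true
    }
}
```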



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2784) Make POM project names consistent

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534729#comment-14534729
 ] 

Hudson commented on YARN-2784:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7775 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7775/])
YARN-2784. Make POM project names consistent. Contributed by Rohith. (devaraj: 
rev 241a72af0dd19040be333d77749f8be17b8aafc7)
* hadoop-yarn-project/hadoop-yarn/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* hadoop-yarn-project/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml


 Make POM project names consistent
 -

 Key: YARN-2784
 URL: https://issues.apache.org/jira/browse/YARN-2784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: build
Reporter: Rohith
Assignee: Rohith
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: 0002-YARN-2784.patch, YARN-2784-branch-2.patch, 
 YARN-2784-branch-2.patch, YARN-2784.patch, YARN-2784.patch, YARN-2784.patch


 All YARN and MapReduce pom.xml files have project names of the form 
 hadoop-mapreduce/hadoop-yarn. These can be made consistent across the Hadoop 
 projects build, like 'Apache Hadoop YARN module-name' and 'Apache Hadoop 
 MapReduce module-name'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3513) Remove unused variables in ContainersMonitorImpl and add debug log for overall resource usage by all containers

2015-05-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534704#comment-14534704
 ] 

Naganarasimha G R commented on YARN-3513:
-

[~devaraj.k]/[~djp], can either of you take a look at it now? The valid 
checkstyle and whitespace issues have been fixed.

 Remove unused variables in ContainersMonitorImpl and add debug log for 
 overall resource usage by all containers 
 

 Key: YARN-3513
 URL: https://issues.apache.org/jira/browse/YARN-3513
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
Priority: Trivial
  Labels: BB2015-05-TBR, newbie
 Attachments: YARN-3513.20150421-1.patch, YARN-3513.20150503-1.patch, 
 YARN-3513.20150506-1.patch, YARN-3513.20150507-1.patch, 
 YARN-3513.20150508-1.patch, YARN-3513.20150508-1.patch


 Some local variables in MonitoringThread.run(), {{vmemStillInUsage}} and 
 {{pmemStillInUsage}}, are only updated and never read. 
 Instead we need to add a debug log for the overall resource usage by all containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2900) Application (Attempt and Container) Not Found in AHS results in Internal Server Error (500)

2015-05-08 Thread Mit Desai (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534563#comment-14534563
 ] 

Mit Desai commented on YARN-2900:
-

ApplicationHistoryClientService was changed recently, so this patch needs to be 
reworked.

 Application (Attempt and Container) Not Found in AHS results in Internal 
 Server Error (500)
 ---

 Key: YARN-2900
 URL: https://issues.apache.org/jira/browse/YARN-2900
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Jonathan Eagles
Assignee: Mit Desai
 Attachments: YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, YARN-2900.patch, 
 YARN-2900.patch


 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.convertToApplicationReport(ApplicationHistoryManagerImpl.java:128)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerImpl.getApplication(ApplicationHistoryManagerImpl.java:118)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:222)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices$2.run(WebServices.java:219)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
   at 
 org.apache.hadoop.yarn.server.webapp.WebServices.getApp(WebServices.java:218)
   ... 59 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3148) allow CORS related headers to passthrough in WebAppProxyServlet

2015-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534578#comment-14534578
 ] 

Hadoop QA commented on YARN-3148:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731429/YARN-3148.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 241a72a |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7805/console |


This message was automatically generated.

 allow CORS related headers to passthrough in WebAppProxyServlet
 ---

 Key: YARN-3148
 URL: https://issues.apache.org/jira/browse/YARN-3148
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Prakash Ramachandran
Assignee: Varun Saxena
  Labels: BB2015-05-RFC
 Attachments: YARN-3148.001.patch, YARN-3148.02.patch


 Currently the WebAppProxyServlet filters the request headers as defined by  
 passThroughHeaders. The Tez UI is building a webapp which uses the REST API to fetch 
 data from the AM via the RM tracking URL. 
 For this purpose it would be nice to have additional headers allowed, 
 especially the ones related to CORS. A few that would help are 
 * Origin
 * Access-Control-Request-Method
 * Access-Control-Request-Headers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3526) ApplicationMaster tracking URL is incorrectly redirected on a QJM cluster

2015-05-08 Thread Yang Weiwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Weiwei updated YARN-3526:
--
Attachment: YARN-3526.001.patch

 ApplicationMaster tracking URL is incorrectly redirected on a QJM cluster
 -

 Key: YARN-3526
 URL: https://issues.apache.org/jira/browse/YARN-3526
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 2.6.0
 Environment: Red Hat Enterprise Linux Server 6.4 
Reporter: Yang Weiwei
Assignee: Yang Weiwei
  Labels: BB2015-05-TBR
 Attachments: YARN-3526.001.patch


 On a QJM HA cluster, when viewing the RM web UI to track job status, it shows
 This is standby RM. Redirecting to the current active RM: 
 http://active-RM:8088/proxy/application_1427338037905_0008/mapreduce
 It refreshes every 3 seconds but never goes to the correct tracking page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3599) Fix the javadoc of DelegationTokenSecretManager in hadoop-yarn

2015-05-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534631#comment-14534631
 ] 

Junping Du commented on YARN-3599:
--

Hi [~ozawa], Thanks for reviewing the patch here. Can you check my last 
comments (and coming discussion) in YARN-3587 before any commit effort? Thx!

 Fix the javadoc of DelegationTokenSecretManager in hadoop-yarn
 --

 Key: YARN-3599
 URL: https://issues.apache.org/jira/browse/YARN-3599
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: documentation
Reporter: Gabor Liptak
Priority: Trivial
 Attachments: YARN-3599.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3521) Support return structured NodeLabel objects in REST API when call getClusterNodeLabels

2015-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534632#comment-14534632
 ] 

Hadoop QA commented on YARN-3521:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 47s | The applied patch generated  
17 new checkstyle issues (total was 61, now 66). |
| {color:red}-1{color} | whitespace |   0m  5s | The patch has 18  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 14s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | yarn tests |  53m 26s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  90m  6s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731449/0006-YARN-3521.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7b1ea9c |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/7802/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/7802/artifact/patchprocess/whitespace.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7802/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/7802/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7802/console |


This message was automatically generated.

 Support return structured NodeLabel objects in REST API when call 
 getClusterNodeLabels
 --

 Key: YARN-3521
 URL: https://issues.apache.org/jira/browse/YARN-3521
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Sunil G
 Attachments: 0001-YARN-3521.patch, 0002-YARN-3521.patch, 
 0003-YARN-3521.patch, 0004-YARN-3521.patch, 0005-YARN-3521.patch, 
 0006-YARN-3521.patch


 In YARN-3413, the yarn cluster CLI returns NodeLabel instead of String; we should 
 make the same change on the REST API side to keep them consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3554) Default value for maximum nodemanager connect wait time is too high

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534728#comment-14534728
 ] 

Hudson commented on YARN-3554:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7775 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7775/])
YARN-3554. Default value for maximum nodemanager connect wait time is too high. 
Contributed by Naganarasimha G R (jlowe: rev 
9757864fd662b69445e0c600aedbe307a264982e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


 Default value for maximum nodemanager connect wait time is too high
 ---

 Key: YARN-3554
 URL: https://issues.apache.org/jira/browse/YARN-3554
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Naganarasimha G R
  Labels: BB2015-05-RFC, newbie
 Fix For: 2.7.1

 Attachments: YARN-3554-20150429-2.patch, YARN-3554.20150429-1.patch


 The default value for yarn.client.nodemanager-connect.max-wait-ms is 900000 
 msec or 15 minutes, which is way too high.  The default container expiry time 
 from the RM and the default task timeout in MapReduce are both only 10 
 minutes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3600) AM container link is broken (on a killed application, at least)

2015-05-08 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated YARN-3600:

Labels:   (was: BB2015-05-RFC)

 AM container link is broken (on a killed application, at least)
 ---

 Key: YARN-3600
 URL: https://issues.apache.org/jira/browse/YARN-3600
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Sergey Shelukhin
Assignee: Naganarasimha G R
 Attachments: YARN-3600.20150508-1.patch


 Running a fairly recent (from a couple of weeks ago) version of 2.8.0-SNAPSHOT. 
 I have an application that ran fine for a while and then I yarn kill-ed it. 
 Now when I go to the only app attempt URL (like so: http://(snip RM host 
 name):8088/cluster/appattempt/appattempt_1429683757595_0795_01)
 I see:
 AM Container: container_1429683757595_0795_01_01
 Node: N/A 
 and the container link goes to {noformat}http://(snip RM host 
 name):8088/cluster/N/A
 {noformat}
 which obviously doesn't work



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3526) ApplicationMaster tracking URL is incorrectly redirected on a QJM cluster

2015-05-08 Thread Yang Weiwei (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534640#comment-14534640
 ] 

Yang Weiwei commented on YARN-3526:
---

Hello [~steve_l] and [~vinodkv] 

I uploaded a new patch. I added a test case into the existing method 
TestRMFailover.testRMWebAppRedirect, which I believe is the best place for 
it. However, I found that this method has been marked as Ignore since YARN-2605 was 
committed. 

I tried to run the test case after reverting the changes YARN-2605 made, and it works 
fine. But with those changes in place, I got NPEs. I created another issue to fix this 
unit test, which needs discussion with [~xgong]. The JIRA number is YARN-3601.

 ApplicationMaster tracking URL is incorrectly redirected on a QJM cluster
 -

 Key: YARN-3526
 URL: https://issues.apache.org/jira/browse/YARN-3526
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
Affects Versions: 2.6.0
 Environment: Red Hat Enterprise Linux Server 6.4 
Reporter: Yang Weiwei
Assignee: Yang Weiwei
  Labels: BB2015-05-TBR
 Attachments: YARN-3526.001.patch


 On a QJM HA cluster, when viewing the RM web UI to track job status, it shows
 This is standby RM. Redirecting to the current active RM: 
 http://active-RM:8088/proxy/application_1427338037905_0008/mapreduce
 It refreshes every 3 seconds but never goes to the correct tracking page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3601) Fix UT TestRMFailover.testRMWebAppRedirect

2015-05-08 Thread Yang Weiwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Weiwei updated YARN-3601:
--
Assignee: (was: Yang Weiwei)

 Fix UT TestRMFailover.testRMWebAppRedirect
 --

 Key: YARN-3601
 URL: https://issues.apache.org/jira/browse/YARN-3601
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, webapp
 Environment: Red Hat Enterprise Linux Workstation release 6.5 
 (Santiago)
Reporter: Yang Weiwei
Priority: Critical
  Labels: BB2015-05-TBR, test

 This test case has not been working since the commit from YARN-2605. It fails 
 with an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3584) [Log message correction]: Missing space in Diagnostics message

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534487#comment-14534487
 ] 

Hudson commented on YARN-3584:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
YARN-3584. Fixed attempt diagnostics format shown on the UI. Contributed by 
nijel (jianhe: rev b88700dcd0b9aa47662009241dfb83bc4446548d)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java


 [Log message correction]: Missing space in Diagnostics message
 --

 Key: YARN-3584
 URL: https://issues.apache.org/jira/browse/YARN-3584
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
Priority: Trivial
  Labels: newbie
 Fix For: 2.8.0

 Attachments: YARN-3584-1.patch, YARN-3584-2.patch


 For more detailed output, check application tracking page: 
 https://szxciitslx17640:26001/cluster/app/application_1430810985970_0020{color:red}Then{color},
  click on links to logs of each attempt.
 Here, Then is not part of the URL. It is better to use a space in between so that 
 the URL can be copied directly for analysis.
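
The fix amounts to inserting a separator when the diagnostics string is built, so the URL is not fused with the following sentence. A minimal sketch; the method and variable names are illustrative, not the actual RMAppAttemptImpl code:

```java
public class DiagnosticsMessage {
    static String buildDiagnostics(String trackingUrl) {
        // A space (or newline) after the URL keeps it copy-pastable.
        return "For more detailed output, check application tracking page: "
            + trackingUrl
            + " Then, click on links to logs of each attempt.";
    }

    public static void main(String[] args) {
        System.out.println(buildDiagnostics(
            "https://szxciitslx17640:26001/cluster/app/application_1430810985970_0020"));
    }
}
```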



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3589) RM and AH web UI display DOCTYPE wrongly

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534493#comment-14534493
 ] 

Hudson commented on YARN-3589:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
YARN-3589. RM and AH web UI display DOCTYPE wrongly. Contributed by Rohith. 
(ozawa: rev f26700f2878f4374c68e97ee00205eda5a6d022c)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/HtmlPage.java


 RM and AH web UI display DOCTYPE wrongly
 

 Key: YARN-3589
 URL: https://issues.apache.org/jira/browse/YARN-3589
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.8.0
Reporter: Rohith
Assignee: Rohith
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: 0001-YARN-3589.patch, YARN-3589.PNG


 The RM web app UI displays {{!DOCTYPE html PUBLIC -\/\/W3C\/\/DTD HTML 
 4.01\/\/EN http:\/\/www.w3.org\/TR\/html4\/strict.dtd}} as literal page text, 
 which is not necessary.
 This happens because the content of the HTML page is escaped, so the browser 
 cannot parse it. Escaped content should sit inside the HTML block, but the 
 doctype appears above the {{html}} element, where the browser can't parse it.
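
 A generic illustration of the symptom (not the actual HtmlPage/TextView code):
 if the doctype line passes through the same HTML escaping as the body, the
 browser receives it as visible text instead of a document-type declaration.

 {code:java}
 public class DoctypeExample {
     // Simplified escaper standing in for whatever the view layer applies.
     static String escape(String s) {
         return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
     }

     public static void main(String[] args) {
         String doctype = "<!DOCTYPE html>";
         // Escaped, the browser renders the doctype as literal page text:
         System.out.println(escape(doctype)); // &lt;!DOCTYPE html&gt;
         // It must be written raw, before the <html> element:
         System.out.println(doctype);
     }
 }
 {code}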



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1832) Fix wrong MockLocalizerStatus#equals implementation

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534485#comment-14534485
 ] 

Hudson commented on YARN-1832:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
YARN-1832. Fix wrong MockLocalizerStatus#equals implementation. Contributed by 
Hong Zhiguo. (aajisaka: rev b167fe7605deb29ec533047d79d036eb65328853)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/MockLocalizerStatus.java


 Fix wrong MockLocalizerStatus#equals implementation
 ---

 Key: YARN-1832
 URL: https://issues.apache.org/jira/browse/YARN-1832
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Hong Zhiguo
Assignee: Hong Zhiguo
Priority: Minor
 Fix For: 2.8.0

 Attachments: YARN-1832.patch


 {{return getLocalizerId().equals(other) && ...;}} should be
 {{return getLocalizerId().equals(other.getLocalizerId()) && ...;}}
 getLocalizerId() returns a String; the intent is to compare 
 this.getLocalizerId() against other.getLocalizerId().
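
 The bug in miniature (a simplified stand-in class, not the actual test code):
 {{String.equals(other)}} compares the id string against the whole status
 object, so it is always false; the fix compares the two ids.

 {code:java}
 public class LocalizerStatusExample {
     static class MockLocalizerStatus {
         private final String localizerId;
         MockLocalizerStatus(String id) { this.localizerId = id; }
         String getLocalizerId() { return localizerId; }

         @Override
         public boolean equals(Object obj) {
             if (!(obj instanceof MockLocalizerStatus)) return false;
             MockLocalizerStatus other = (MockLocalizerStatus) obj;
             // Buggy form was getLocalizerId().equals(other): a String is
             // never equal to a MockLocalizerStatus, so it returned false.
             return getLocalizerId().equals(other.getLocalizerId());
         }

         @Override
         public int hashCode() { return localizerId.hashCode(); }
     }

     public static void main(String[] args) {
         MockLocalizerStatus a = new MockLocalizerStatus("loc-1");
         MockLocalizerStatus b = new MockLocalizerStatus("loc-1");
         System.out.println(a.equals(b)); // true with the corrected comparison
     }
 }
 {code}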



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3169) Drop YARN's overview document

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534506#comment-14534506
 ] 

Hudson commented on YARN-3169:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
YARN-3169. Drop YARN's overview document. Contributed by Brahma Reddy Battula. 
(ozawa: rev b419c1b2ec452c2632274e93240b595161fb023b)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/index.md
* hadoop-yarn-project/CHANGES.txt
* hadoop-project/src/site/site.xml


 Drop YARN's overview document
 -

 Key: YARN-3169
 URL: https://issues.apache.org/jira/browse/YARN-3169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: YARN-3169-002.patch, YARN-3169-003.patch, YARN-3169.patch


 It's pretty superfluous given there is a site index on the left.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3358) Audit log not present while refreshing Service ACLs

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534501#comment-14534501
 ] 

Hudson commented on YARN-3358:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
YARN-3358. Audit log not present while refreshing Service ACLs. (devaraj: rev 
ef3d66d4624d360e75c016e36824a6782d6a9746)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java


 Audit log not present while refreshing Service ACLs
 ---

 Key: YARN-3358
 URL: https://issues.apache.org/jira/browse/YARN-3358
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.1

 Attachments: YARN-3358.01.patch


 There should be a success audit log in AdminService#refreshServiceAcls



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3523) Cleanup ResourceManagerAdministrationProtocol interface audience

2015-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534490#comment-14534490
 ] 

Hudson commented on YARN-3523:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
YARN-3523. Cleanup ResourceManagerAdministrationProtocol interface audience. 
Contributed by Naganarasimha G R (junping_du: rev 
8e991f4b1d7226fdcd75c5dc9fe6e5ce721679b9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocol.java
* hadoop-yarn-project/CHANGES.txt


 Cleanup ResourceManagerAdministrationProtocol interface audience
 

 Key: YARN-3523
 URL: https://issues.apache.org/jira/browse/YARN-3523
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client, resourcemanager
Reporter: Wangda Tan
Assignee: Naganarasimha G R
  Labels: newbie
 Fix For: 2.8.0

 Attachments: YARN-3523.20150422-1.patch, YARN-3523.20150504-1.patch, 
 YARN-3523.20150505-1.patch


 I noticed ResourceManagerAdministrationProtocol has @Private audience for the 
 class and @Public audience for methods. It doesn't make sense to me. We 
 should make class audience and methods audience consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3170) YARN architecture document needs updating

2015-05-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534607#comment-14534607
 ] 

Brahma Reddy Battula commented on YARN-3170:


[~ozawa] I agree with you, it makes sense. Updated the patch. Kindly 
review. Thanks!

 YARN architecture document needs updating
 -

 Key: YARN-3170
 URL: https://issues.apache.org/jira/browse/YARN-3170
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: documentation
Reporter: Allen Wittenauer
Assignee: Brahma Reddy Battula
  Labels: BB2015-05-TBR
 Attachments: YARN-3170-002.patch, YARN-3170-003.patch, YARN-3170.patch


 The marketing paragraph at the top, NextGen MapReduce, etc. are all 
 marketing rather than actual descriptions. It also needs some general 
 updates, especially given it reads as though 0.23 was just released yesterday.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3554) Default value for maximum nodemanager connect wait time is too high

2015-05-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534554#comment-14534554
 ] 

Jason Lowe commented on YARN-3554:
--

+1, committing this.

 Default value for maximum nodemanager connect wait time is too high
 ---

 Key: YARN-3554
 URL: https://issues.apache.org/jira/browse/YARN-3554
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Naganarasimha G R
  Labels: BB2015-05-RFC, newbie
 Attachments: YARN-3554-20150429-2.patch, YARN-3554.20150429-1.patch


 The default value for yarn.client.nodemanager-connect.max-wait-ms is 900,000 
 msec, or 15 minutes, which is way too high.  The default container expiry time 
 from the RM and the default task timeout in MapReduce are both only 10 
 minutes.
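
 If the shipped default is too high for a cluster, it can be overridden in
 yarn-site.xml; the value below is purely illustrative, not the value chosen
 by this patch:

 {code:xml}
 <!-- yarn-site.xml: cap how long clients wait to connect to an NM.
      180000 ms (3 minutes) is an example value only. -->
 <property>
   <name>yarn.client.nodemanager-connect.max-wait-ms</name>
   <value>180000</value>
 </property>
 {code}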



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3276) Refactor and fix null casting in some map cast for TimelineEntity (old and new) and fix findbug warnings

2015-05-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534572#comment-14534572
 ] 

Hadoop QA commented on YARN-3276:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 42s | Pre-patch YARN-2928 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   1m 22s | The applied patch generated 
13 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 12s | The applied patch generated  1 
new checkstyle issues (total was 83, now 84). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 42s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | yarn tests |   0m 22s | Tests passed in 
hadoop-yarn-api. |
| {color:green}+1{color} | yarn tests |   1m 56s | Tests passed in 
hadoop-yarn-common. |
| | |  41m 52s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731441/YARN-3276-YARN-2928.v4.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | YARN-2928 / d4a2362 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-YARN-Build/7801/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/7801/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7801/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7801/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/7801/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7801/console |


This message was automatically generated.

 Refactor and fix null casting in some map cast for TimelineEntity (old and 
 new) and fix findbug warnings
 

 Key: YARN-3276
 URL: https://issues.apache.org/jira/browse/YARN-3276
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Junping Du
Assignee: Junping Du
 Attachments: YARN-3276-YARN-2928.v3.patch, 
 YARN-3276-YARN-2928.v4.patch, YARN-3276-v2.patch, YARN-3276-v3.patch, 
 YARN-3276.patch


 Per discussion in YARN-3087, we need to refactor some similar logic that 
 casts a Map to a HashMap, and get rid of the NPE issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

