[jira] [Updated] (YARN-5189) Make HBaseTimeline[Reader|Writer]Impl default and move FileSystemTimeline*Impl

2016-05-31 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5189:
---
Attachment: YARN-5189-YARN-2928.01.patch

Initial patch.
To avoid confusion, and to make sure users notice that FileSystemTimelineWriterImpl 
is now in a test package, I renamed TestFileSystemTimelineWriterImpl to 
FileSystemTimelineWriterImplTest and then renamed 
FileSystemTimelineWriterImpl to TestFileSystemTimelineWriterImpl.

> Make HBaseTimeline[Reader|Writer]Impl default and move FileSystemTimeline*Impl
> --
>
> Key: YARN-5189
> URL: https://issues.apache.org/jira/browse/YARN-5189
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5189-YARN-2928.01.patch
>
>
> [~naganarasimha...@apache.org] questioned whether it made sense to default to 
> an implementation that doesn't support all functionality.
> [~sjlee0] opened YARN-5174 to track updating the documentation for ATS to 
> reflect the default shifting to the fully functional HBase implementation.
> It makes sense to remove a partial implementation, but on the other hand it 
> is still handy in testing. Hence this jira to move the file based 
> implementations to the test package and to make the HBase impls the default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4327) RM can not renew TIMELINE_DELEGATION_TOKEN in secure clusters

2016-05-31 Thread linou.zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309406#comment-15309406
 ] 

linou.zhang commented on YARN-4327:
---

I hit the same issue; what is its status?
I am trying to set up the Tez UI. If I use simple authentication, I can open the 
timeline server web UI, but I cannot submit jobs.
If I use kerberos, jobs can be submitted, but the timeline server web UI cannot 
be opened.

> RM can not renew  TIMELINE_DELEGATION_TOKEN in secure clusters
> --
>
> Key: YARN-4327
> URL: https://issues.apache.org/jira/browse/YARN-4327
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineserver
>Affects Versions: 2.7.1
> Environment: hadoop 2.7.1; hdfs, yarn, mrhistoryserver, ATS all use 
> kerberos security.
> conf like this:
> <property>
>   <name>hadoop.security.authorization</name>
>   <value>true</value>
>   <description>Is service-level authorization enabled?</description>
> </property>
> <property>
>   <name>hadoop.security.authentication</name>
>   <value>kerberos</value>
>   <description>Possible values are simple (no authentication), and kerberos</description>
> </property>
>Reporter: zhangshilong
>
> bin hadoop 2.7.1
> ATS conf like this: 
> <property>
>   <name>yarn.timeline-service.http-authentication.type</name>
>   <value>simple</value>
> </property>
> <property>
>   <name>yarn.timeline-service.http-authentication.kerberos.principal</name>
>   <value>HTTP/_h...@xxx.com</value>
> </property>
> <property>
>   <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
>   <value>/etc/hadoop/keytabs/xxx.keytab</value>
> </property>
> <property>
>   <name>yarn.timeline-service.principal</name>
>   <value>xxx/_h...@xxx.com</value>
> </property>
> <property>
>   <name>yarn.timeline-service.keytab</name>
>   <value>/etc/hadoop/keytabs/xxx.keytab</value>
> </property>
> <property>
>   <name>yarn.timeline-service.best-effort</name>
>   <value>true</value>
> </property>
> <property>
>   <name>yarn.timeline-service.enabled</name>
>   <value>true</value>
> </property>
> I'd like to allow everyone to access ATS over HTTP, just like the RM and HDFS 
> web UIs.
> The client can submit a job to the RM and add a TIMELINE_DELEGATION_TOKEN to 
> the AM context, but the RM cannot renew the TIMELINE_DELEGATION_TOKEN, which 
> makes the application fail.
> RM logs:
> 2015-11-03 11:58:38,191 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN, 
> Service: 10.12.38.4:8188, Ident: (owner=yarn-test, renewer=yarn-test, 
> realUser=, issueDate=1446523118046, maxDate=1447127918046, sequenceNumber=9, 
> masterKeyId=2)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:439)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:847)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:828)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: HTTP status [500], message [Null user]
> at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:287)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:212)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$3.run(TimelineClientImpl.java:396)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$3.run(TimelineClientImpl.java:378)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$5.run(TimelineClientImpl.java:451)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:183)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:466)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:400)
> at 
> org.apache.hadoop.yarn.security.cli

[jira] [Commented] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread ChenFolin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309375#comment-15309375
 ] 

ChenFolin commented on YARN-5188:
-

Thanks Xianyin Xin,

The total TreeSet operation time complexity here is O(log 1) + O(log 2) + ... + 
O(log(app num)), whereas a binary search is O(log(app num)).

Point 2 is the same as YARN-4090.

Could you share your phone number, WeChat, or QQ? Please send a mail to 
chenfo...@gmail.com 
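
For illustration, a minimal, self-contained sketch of point 1 (not taken from the 
attached YARN-5188 patch; the class and field names are made up): keep the 
runnable app list sorted and re-position a single app by binary search when its 
usage changes, instead of re-sorting the whole collection on every container 
assignment.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class OrderedAppsSketch {

  /** Toy stand-in for a schedulable app: just an id and its current memory usage. */
  static class App {
    final String id;
    long usedMB;
    App(String id, long usedMB) { this.id = id; this.usedMB = usedMB; }
  }

  /** Order apps by current usage (stand-in for the fair-share comparator). */
  static final Comparator<App> BY_USAGE = Comparator.comparingLong(a -> a.usedMB);

  /**
   * Re-position one app after its usage changed: an O(log n) binary search for
   * the new slot (plus O(n) list shifts), instead of an O(n log n) full sort of
   * runnableApps on every assignment.
   */
  static void updateUsage(List<App> sortedApps, App app, long newUsedMB) {
    sortedApps.remove(app);
    app.usedMB = newUsedMB;
    int idx = Collections.binarySearch(sortedApps, app, BY_USAGE);
    if (idx < 0) {
      idx = -(idx + 1);   // convert "not found" to the insertion point
    }
    sortedApps.add(idx, app);
  }

  public static void main(String[] args) {
    List<App> apps = new ArrayList<>();
    for (int i = 0; i < 5; i++) {
      apps.add(new App("app-" + i, i * 1024L));
    }
    updateUsage(apps, apps.get(0), 10_240L);   // app-0 should move to the end
    apps.forEach(a -> System.out.println(a.id + " " + a.usedMB));
  }
}
{code}

The per-update cost is still dominated by the list shift, so the claimed win is 
mainly in avoiding the full comparator-driven sort on each assignment.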

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0, 2.8.0
>Reporter: ChenFolin
> Attachments: YARN-5188.patch
>
>
>  My Hadoop cluster has recently encountered a performance problem. Details as 
> follows.
> There are two points that cause this performance issue.
> 1: Application sort before assigning a container in FSLeafQueue. TreeSet is 
> not the best choice here. Why not keep the list ordered, and then use binary 
> search to keep it ordered when an application's resource usage changes?
> 2: Queue sort and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we store the leaf queue usage in memory and 
> update it when a container is assigned or released?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact, the cluster has a 
> large amount of pending memory and pending vcores, while current cluster 
> utilization may be below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scene:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us ( 6.7539 ms )
> apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
> 4657.01 us ( 4.657 ms )
> compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )
>
> With just root.default, one container assignment consists of ( one apps sort 
> op ) + 2 * ( compute leafqueue usage op ).
> Given the above, I think the container assignment operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5046) [Umbrella] De-duplicate scheduler code

2016-05-31 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5046:
---
Summary: [Umbrella] De-duplicate scheduler code  (was: [Umbrella] Refactor 
scheduler code)

> [Umbrella] De-duplicate scheduler code
> --
>
> Key: YARN-5046
> URL: https://issues.apache.org/jira/browse/YARN-5046
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler, fairscheduler, resourcemanager, 
> scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: technical_debt
>
> At this point in time, there are several places where code common to the 
> schedulers can be moved from one or more of the schedulers into 
> AbstractYARNScheduler or a related interface.
> Creating this umbrella JIRA to track this refactoring.  In general, it is 
> preferable to create a subtask JIRA on a per-method basis.
> This may need some coordination with [YARN-3091  \[Umbrella\] Improve and fix 
> locks of RM scheduler|https://issues.apache.org/jira/browse/YARN-3091].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309358#comment-15309358
 ] 

Hadoop QA commented on YARN-5180:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 16s 
{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807335/YARN-5180.003.patch |
| JIRA Issue | YARN-5180 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11786/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add ensureExecutionType boolean flag in ResourceRequest
> ---
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine with receiving a Container with a different 
> ExecutionType than the one asked for.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309350#comment-15309350
 ] 

Hadoop QA commented on YARN-4844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 30s 
{color} | {color:red} Docker failed to build yetus/hadoop:c60792e. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807330/YARN-4844-branch-2.8.0016.patch
 |
| JIRA Issue | YARN-4844 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11785/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844-branch-2.8.0016.patch, 
> YARN-4844-branch-2.addendum.1.patch, YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, 
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, 
> YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, 
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, 
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, 
> YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.
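
To make the overflow above concrete, a tiny self-contained sketch using the 
numbers from the description (10k nodes at 210G each, tracked in MB the way 
Resource does):

{code:java}
public class ResourceOverflowSketch {
  public static void main(String[] args) {
    int nodes = 10_000;
    int nodeMemoryMB = 210 * 1024;           // 210G per node, expressed in MB

    int totalAsInt = nodes * nodeMemoryMB;   // 2,150,400,000 overflows int32
    long totalAsLong = (long) nodes * nodeMemoryMB;

    System.out.println("int32 total cluster memory (MB): " + totalAsInt);   // negative
    System.out.println("int64 total cluster memory (MB): " + totalAsLong);  // 2150400000
  }
}
{code}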



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309347#comment-15309347
 ] 

Varun Vasudev commented on YARN-4844:
-

The addendum patch looks good to me [~leftnoteasy]. Will you commit it or 
should I?

> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844-branch-2.8.0016.patch, 
> YARN-4844-branch-2.addendum.1.patch, YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, 
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, 
> YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, 
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, 
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, 
> YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread Xianyin Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309341#comment-15309341
 ] 

Xianyin Xin commented on YARN-5188:
---

hi [~chenfolin], thanks for reporting this.
{quote}
1: Application sort before assigning a container in FSLeafQueue. TreeSet is not 
the best choice here. Why not keep the list ordered, and then use binary search 
to keep it ordered when an application's resource usage changes?
{quote}
Can you please explain this in more detail and analyze the performance 
improvement after adopting the new approach?
{quote}
2: Queue sort and assignContainerPreCheck recompute the resource usage of every 
leaf queue. Why can't we store the leaf queue usage in memory and update it when 
a container is assigned or released?
{quote}
Can you verify whether this is the same as YARN-4090? And if not, what's the 
difference?

Thanks.

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0, 2.8.0
>Reporter: ChenFolin
> Attachments: YARN-5188.patch
>
>
>  My Hadoop cluster has recently encountered a performance problem. Details as 
> follows.
> There are two points that cause this performance issue.
> 1: Application sort before assigning a container in FSLeafQueue. TreeSet is 
> not the best choice here. Why not keep the list ordered, and then use binary 
> search to keep it ordered when an application's resource usage changes?
> 2: Queue sort and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we store the leaf queue usage in memory and 
> update it when a container is assigned or released?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact, the cluster has a 
> large amount of pending memory and pending vcores, while current cluster 
> utilization may be below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scene:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us ( 6.7539 ms )
> apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
> 4657.01 us ( 4.657 ms )
> compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )
>
> With just root.default, one container assignment consists of ( one apps sort 
> op ) + 2 * ( compute leafqueue usage op ).
> Given the above, I think the container assignment operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309325#comment-15309325
 ] 

Arun Suresh edited comment on YARN-5180 at 6/1/16 5:51 AM:
---

Updating the patch to address review comments and suggestions. Thanks 
[~kkaranasos] and [~kasha].

As per offline discussions with [~kasha], I have also raised MAPREDUCE-6709 to 
expose the *ensureExecutionType* flag as a user specified configuration.


was (Author: asuresh):
Updating the patch to address review comments and suggestions. Thanks 
[~kkaranasos] and [~kasha].

As per offline discussions with [~kasha], I have also raised MAPREDUCE-6709 to 
track the ability to specify the *ensureExecutionType* value for MR requests. 

> Add ensureExecutionType boolean flag in ResourceRequest
> ---
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine with receiving a Container with a different 
> ExecutionType than the one asked for.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5180:
--
Attachment: YARN-5180.003.patch

Updating the patch to address review comments and suggestions. Thanks 
[~kkaranasos] and [~kasha].

As per offline discussions with [~kasha], I have also raised MAPREDUCE-6709 to 
track the ability to specify the *ensureExecutionType* value for MR requests. 

> Add ensureExecutionType boolean flag in ResourceRequest
> ---
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine with receiving a Container with a different 
> ExecutionType than the one asked for.
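
As a rough illustration of the proposed semantics (a toy model only; the field 
and method names below are hypothetical and not the actual API in the attached 
patches), the flag lets the Scheduler decide whether a request may be satisfied 
by a container of a different ExecutionType:

{code:java}
public class EnsureExecutionTypeSketch {
  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  /** Toy stand-in for a ResourceRequest carrying the proposed boolean flag. */
  static class Request {
    final ExecutionType requestedType;
    final boolean ensureExecutionType;   // true = AM insists on the requested type
    Request(ExecutionType type, boolean ensure) {
      this.requestedType = type;
      this.ensureExecutionType = ensure;
    }
  }

  /** Toy scheduler check: may this request be satisfied with 'offered'? */
  static boolean canAllocate(Request req, ExecutionType offered) {
    if (req.requestedType == offered) {
      return true;                       // exact match is always acceptable
    }
    // A different type is acceptable only if the AM did not insist on its type.
    return !req.ensureExecutionType;
  }

  public static void main(String[] args) {
    Request strict = new Request(ExecutionType.OPPORTUNISTIC, true);
    Request relaxed = new Request(ExecutionType.OPPORTUNISTIC, false);
    System.out.println(canAllocate(strict, ExecutionType.GUARANTEED));   // false
    System.out.println(canAllocate(relaxed, ExecutionType.GUARANTEED));  // true
  }
}
{code}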



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread ChenFolin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenFolin updated YARN-5188:

Description: 
 My Hadoop cluster has recently encountered a performance problem. Details as 
follows.

There are two points that cause this performance issue.
1: Application sort before assigning a container in FSLeafQueue. TreeSet is not 
the best choice here. Why not keep the list ordered, and then use binary search 
to keep it ordered when an application's resource usage changes?
2: Queue sort and assignContainerPreCheck recompute the resource usage of every 
leaf queue. Why can't we store the leaf queue usage in memory and update it 
when a container is assigned or released?

The efficiency of container assignment in the ResourceManager may fall as the 
number of running and pending applications grows. In fact, the cluster has a 
large amount of pending memory and pending vcores, while current cluster 
utilization may be below 20%.
I checked the ResourceManager logs and found that each container assignment 
may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.

I used TestFairScheduler to reproduce the scene:

Just one queue: root.default
10240 apps.

assign container avg time: 6753.9 us ( 6.7539 ms )
apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
4657.01 us ( 4.657 ms )
compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )

With just root.default, one container assignment consists of ( one apps sort 
op ) + 2 * ( compute leafqueue usage op ).
Given the above, I think the container assignment operation has a performance 
problem.


  was:
 My Hadoop cluster has recently encountered a performance problem. Details as 
follows.

The efficiency of container assignment in the ResourceManager may fall as the 
number of running and pending applications grows. In fact, the cluster has a 
large amount of pending memory and pending vcores, while current cluster 
utilization may be below 20%.
I checked the ResourceManager logs and found that each container assignment 
may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.

I used TestFairScheduler to reproduce the scene:

Just one queue: root.default
10240 apps.

assign container avg time: 6753.9 us ( 6.7539 ms )
apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
4657.01 us ( 4.657 ms )
compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )

With just root.default, one container assignment consists of ( one apps sort 
op ) + 2 * ( compute leafqueue usage op ).
Given the above, I think the container assignment operation has a performance 
problem.



> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0, 2.8.0
>Reporter: ChenFolin
> Attachments: YARN-5188.patch
>
>
>  My Hadoop cluster has recently encountered a performance problem. Details as 
> follows.
> There are two points that cause this performance issue.
> 1: Application sort before assigning a container in FSLeafQueue. TreeSet is 
> not the best choice here. Why not keep the list ordered, and then use binary 
> search to keep it ordered when an application's resource usage changes?
> 2: Queue sort and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we store the leaf queue usage in memory and 
> update it when a container is assigned or released?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact, the cluster has a 
> large amount of pending memory and pending vcores, while current cluster 
> utilization may be below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scene:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us ( 6.7539 ms )
> apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
> 4657.01 us ( 4.657 ms )
> compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )
>
> With just root.default, one container assignment consists of ( one apps sort 
> op ) + 2 * ( compute leafqueue usage op ).
> Given the above, I think the container assignment operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail

[jira] [Commented] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread ChenFolin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309314#comment-15309314
 ] 

ChenFolin commented on YARN-5188:
-

There are two points that cause this performance issue.
1: Application sort before assigning a container in FSLeafQueue. TreeSet is not 
the best choice here. Why not keep the list ordered, and then use binary search 
to keep it ordered when an application's resource usage changes?
2: Queue sort and assignContainerPreCheck recompute the resource usage of every 
leaf queue. Why can't we store the leaf queue usage in memory and update it when 
a container is assigned or released?
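
A minimal sketch of point 2 (again not from the attached patch, and assuming a 
single aggregate counter per leaf queue is enough): keep the leaf queue's usage 
as a running total updated on assign/release, so queue sorting and 
assignContainerPreCheck can read it in O(1) instead of summing every app's usage 
on each scheduling pass.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

public class CachedQueueUsageSketch {

  /** Toy leaf queue that maintains its aggregate memory usage incrementally. */
  static class LeafQueue {
    private final AtomicLong usedMB = new AtomicLong();

    void onContainerAssigned(long containerMB) { usedMB.addAndGet(containerMB); }
    void onContainerReleased(long containerMB) { usedMB.addAndGet(-containerMB); }

    /** O(1) read for queue sorting and assignContainerPreCheck. */
    long getUsedMB() { return usedMB.get(); }
  }

  public static void main(String[] args) {
    LeafQueue q = new LeafQueue();
    q.onContainerAssigned(2048);
    q.onContainerAssigned(1024);
    q.onContainerReleased(1024);
    System.out.println(q.getUsedMB());   // 2048
  }
}
{code}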

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0, 2.8.0
>Reporter: ChenFolin
> Attachments: YARN-5188.patch
>
>
>  My Hadoop cluster has recently encountered a performance problem. Details as 
> follows.
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact, the cluster has a 
> large amount of pending memory and pending vcores, while current cluster 
> utilization may be below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scene:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us ( 6.7539 ms )
> apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
> 4657.01 us ( 4.657 ms )
> compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )
>
> With just root.default, one container assignment consists of ( one apps sort 
> op ) + 2 * ( compute leafqueue usage op ).
> Given the above, I think the container assignment operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5189) Make HBaseTimeline[Reader|Writer]Impl default and move FileSystemTimeline*Impl

2016-05-31 Thread Joep Rottinghuis (JIRA)
Joep Rottinghuis created YARN-5189:
--

 Summary: Make HBaseTimeline[Reader|Writer]Impl default and move 
FileSystemTimeline*Impl
 Key: YARN-5189
 URL: https://issues.apache.org/jira/browse/YARN-5189
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: YARN-2928
Reporter: Joep Rottinghuis
Assignee: Joep Rottinghuis


[~naganarasimha...@apache.org] questioned whether it made sense to default to 
an implementation that doesn't support all functionality.

[~sjlee0] opened YARN-5174 to track updating the documentation for ATS to 
reflect the default shifting to the fully functional HBase implementation.

It makes sense to remove a partial implementation, but on the other hand it is 
still handy in testing. Hence this jira to move the file based 
implementations to the test package and to make the HBase impls the default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309309#comment-15309309
 ] 

Hadoop QA commented on YARN-5088:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 50s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 7 
new + 140 unchanged - 3 fixed = 147 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 12 new + 4567 unchanged - 0 fixed = 4579 total (was 4567) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 280 unchanged - 0 fixed = 281 total (was 280) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 45s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 42s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 41s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 0s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
| Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |

[jira] [Updated] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4844:
-
Attachment: (was: YARN-4844.16.branch-2.8.patch)

> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844-branch-2.8.0016.patch, 
> YARN-4844-branch-2.addendum.1.patch, YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, 
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, 
> YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, 
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, 
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, 
> YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4844:
-
Attachment: YARN-4844-branch-2.8.0016.patch

Attached new patch for branch-2.8 with proper name.

> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844-branch-2.8.0016.patch, 
> YARN-4844-branch-2.addendum.1.patch, YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, 
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch, 
> YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, 
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, 
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, 
> YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309278#comment-15309278
 ] 

Hadoop QA commented on YARN-5188:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 11 new + 71 unchanged - 0 fixed = 82 total (was 71) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 6 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 8s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestAppRunnability |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestMaxRunningAppsEnforcer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807321/YARN-5188.patch |
| JIRA Issue | YARN-5188 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 410074df29a2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ceb06e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/

[jira] [Updated] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4844:
-
Attachment: YARN-4844-branch-2.addendum.1.patch

> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844-branch-2.addendum.1.patch, YARN-4844.1.patch, 
> YARN-4844.10.patch, YARN-4844.11.patch, YARN-4844.12.patch, 
> YARN-4844.13.patch, YARN-4844.14.patch, YARN-4844.15.patch, 
> YARN-4844.16.branch-2.8.patch, YARN-4844.16.branch-2.patch, 
> YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, 
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, 
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, 
> YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened YARN-4844:
--

> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, 
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.8.patch, 
> YARN-4844.16.branch-2.patch, YARN-4844.16.patch, YARN-4844.2.patch, 
> YARN-4844.3.patch, YARN-4844.4.patch, YARN-4844.5.patch, YARN-4844.6.patch, 
> YARN-4844.7.patch, YARN-4844.8.branch-2.patch, YARN-4844.8.patch, 
> YARN-4844.9.branch, YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4844) Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource

2016-05-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309277#comment-15309277
 ] 

Wangda Tan commented on YARN-4844:
--

Thanks [~vvasudev] for the review and commit! However, I found a couple of issues 
with the committed patch. I just came back from vacation, sorry for the late 
response:
- For branch-2, there are some javac warnings; I will fix them in an addendum patch.
- For branch-2.8, Jenkins did not run against the latest patch; I will revert and 
re-attach the patch for branch-2.8.

Some additional notes:
- So far, the failed unit tests are not related; I have tried running them locally.
- Not sure whether the Javadoc warnings are related; there are too many unrelated 
entries in the report, so I cannot pinpoint the problem from the report file. 
I will monitor this and fix the Javadoc warnings separately.

> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> -
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch, 
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.8.patch, 
> YARN-4844.16.branch-2.patch, YARN-4844.16.patch, YARN-4844.2.patch, 
> YARN-4844.3.patch, YARN-4844.4.patch, YARN-4844.5.patch, YARN-4844.6.patch, 
> YARN-4844.7.patch, YARN-4844.8.branch-2.patch, YARN-4844.8.patch, 
> YARN-4844.9.branch, YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (let's say 1M+ containers, each 
> of them 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each with capped but still significant 
> amounts of pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309241#comment-15309241
 ] 

Karthik Kambatla commented on YARN-5188:


[~chenfolin] - thanks for reporting and working on this. 

Can you elaborate on the root cause of this performance issue and the key idea 
behind your patch? 

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0, 2.8.0
>Reporter: ChenFolin
> Attachments: YARN-5188.patch
>
>
>  My Hadoop cluster has recently encountered a performance problem. Details as 
> follows.
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact, the cluster has a 
> large amount of pending memory and pending vcores, while current cluster 
> utilization may be below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scene:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us ( 6.7539 ms )
> apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
> 4657.01 us ( 4.657 ms )
> compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )
>
> With just root.default, one container assignment consists of ( one apps sort 
> op ) + 2 * ( compute leafqueue usage op ).
> Given the above, I think the container assignment operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread ChenFolin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenFolin updated YARN-5188:

Attachment: YARN-5188.patch

Attaching a patch file.

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0, 2.8.0
>Reporter: ChenFolin
> Attachments: YARN-5188.patch
>
>
>  My Hadoop cluster has recently encountered a performance problem. Details as 
> follows.
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact, the cluster has a 
> large amount of pending memory and pending vcores, while current cluster 
> utilization may be below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5 ~ 10 ms, versus just 0 ~ 1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scene:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us ( 6.7539 ms )
> apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
> 4657.01 us ( 4.657 ms )
> compute LeafQueue Resource usage: 905.171 us ( 0.905171 ms )
>
> With just root.default, one container assignment consists of ( one apps sort 
> op ) + 2 * ( compute leafqueue usage op ).
> Given the above, I think the container assignment operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5188) FairScheduler performance bug

2016-05-31 Thread ChenFolin (JIRA)
ChenFolin created YARN-5188:
---

 Summary: FairScheduler performance bug
 Key: YARN-5188
 URL: https://issues.apache.org/jira/browse/YARN-5188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 2.5.0, 2.8.0
Reporter: ChenFolin


My Hadoop cluster has recently encountered a performance problem. Details as 
follows.

The efficiency of container assignment in the ResourceManager may fall as the 
number of running and pending applications grows. In fact, the cluster has a 
large amount of pending memory (PendingMB) and pending vcores (PendingVcore), 
yet the cluster's current utilization rate may be below 20%.
I checked the ResourceManager logs and found that each container assignment 
may cost 5 ~ 10 ms, versus just 0 ~ 1 ms under normal conditions.

I used TestFairScheduler to reproduce the scenario:

Just one queue: root.default
10240 apps.

assign container avg time: 6753.9 us ( 6.7539 ms )
apps sort time (FSLeafQueue : Collections.sort(runnableApps, comparator); ): 
4657.01 us ( 4.657 ms )
compute LeafQueue Resource usage : 905.171 us ( 0.905171 ms )

With only root.default, one container assignment op consists of ( one apps 
sort op ) + 2 * ( compute LeafQueue usage op ).
Based on the above, I think the container assignment op has a performance 
problem.
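
To make the cost concrete, here is a minimal, self-contained Java sketch (not 
the actual FairScheduler code; the App class and the comparator are 
illustrative stand-ins for FSAppAttempt and the fair-share comparator) that 
reproduces the dominant term reported above: a full Collections.sort over 
~10240 runnable apps on every container assignment.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Minimal sketch, not FairScheduler code: App stands in for FSAppAttempt.
public class FairSchedulerSortCostSketch {
  static class App {
    final long demand;
    App(long demand) { this.demand = demand; }
  }

  public static void main(String[] args) {
    Random rnd = new Random(42);
    List<App> runnableApps = new ArrayList<>();
    for (int i = 0; i < 10240; i++) {
      runnableApps.add(new App(rnd.nextInt(1 << 20)));
    }
    Comparator<App> comparator = Comparator.comparingLong(a -> a.demand);

    // One "assignment" worth of sorting: FSLeafQueue sorts all runnable apps
    // on every assignContainer call, so this cost is paid per container.
    long start = System.nanoTime();
    Collections.sort(runnableApps, comparator);
    long micros = (System.nanoTime() - start) / 1000;
    System.out.println("sorting 10240 apps took ~" + micros + " us");
  }
}
{code}

With 10k+ apps the O(n log n) sort dominates the per-assignment cost, which is 
consistent with the ~4.6 ms sort time measured above.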




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309217#comment-15309217
 ] 

Varun Saxena commented on YARN-5105:


Thanks [~sjlee0] for the review and commit.
Thanks [~jrottinghuis] for the review.

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch, YARN-5105-YARN-2928.05.patch
>
>
> I see that the entire time series of the CPU and memory metrics is returned 
> for the YARN containers REST query. This has the potential to bloat the 
> output significantly.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5187) when the preempt reduce happen, map resources priority should be higher

2016-05-31 Thread tangshangwen (JIRA)
tangshangwen created YARN-5187:
--

 Summary: when the preempt reduce happen, map  resources priority 
should be higher
 Key: YARN-5187
 URL: https://issues.apache.org/jira/browse/YARN-5187
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: tangshangwen
Assignee: tangshangwen


In our cluster, I found jobs hanging for a long time. When cluster resources 
are tight, many reduce tasks are killed while map tasks get no resources to 
run. I think that when reduce preemption happens, map resource requests should 
have higher priority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5088:

Attachment: YARN-5088.6.patch

> Improve "yarn log" command-line to read the last K bytes for the log files
> --
>
> Key: YARN-5088
> URL: https://issues.apache.org/jira/browse/YARN-5088
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5088.1.patch, YARN-5088.2.patch, YARN-5088.3.patch, 
> YARN-5088.4.patch, YARN-5088.5.patch, YARN-5088.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309165#comment-15309165
 ] 

Hadoop QA commented on YARN-5164:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 9 new + 25 unchanged - 1 fixed = 34 total (was 26) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 58s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Dead store to p in 
org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityOverTimePolicy.validate(Plan,
 ReservationAllocation)  At 
CapacityOverTimePolicy.java:org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityOverTimePolicy.validate(Plan,
 ReservationAllocation)  At CapacityOverTimePolicy.java:[line 171] |
|  |  Dead store to p in 
org.apache.hadoop.yarn.server.resourcemanager.reservation.NoOverCommitPolicy.validate(Plan,
 ReservationAllocation)  At 
NoOverCommitPolicy.java:org.apache.hadoop.yarn.server.resourcemanager.reservation.NoOverCommitPolicy.validate(Plan,
 ReservationAllocation)  At NoOverCommitPolicy.java:[line 62] |
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807313/YARN-5164-inclusive.5.patch
 |
| JIRA Issue | YARN-5164 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e4009246fa27 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x

[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309111#comment-15309111
 ] 

Hadoop QA commented on YARN-4525:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787521/YARN-4525.patch |
| JIRA Issue | YARN-4525 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux df6bf99e686e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ceb06e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11779/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11779/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11779/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
h

[jira] [Commented] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309094#comment-15309094
 ] 

Hadoop QA commented on YARN-5088:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 7 
new + 140 unchanged - 3 fixed = 147 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 12 new + 4567 unchanged - 0 fixed = 4579 total (was 4567) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 280 unchanged - 0 fixed = 281 total (was 280) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 42s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 42s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 109m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
| Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient

[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5164:
---
Attachment: YARN-5164-inclusive.5.patch

> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164-inclusive.4.patch, 
> YARN-5164-inclusive.5.patch, YARN-5164.1.patch, YARN-5164.2.patch
>
>
> As a consequence, checks over small time granularities (e.g., 1 sec) and a 
> long time horizon for a reservation (e.g., months) run rather slowly (10 sec).
> The proposed resolution is to switch to interval math in the check, similar 
> to what YARN-4359 does for agents.
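
For illustration only, a small self-contained Java sketch of the interval-math 
idea (this is not the CapacityOverTimePolicy code; the TreeMap below is a 
made-up stand-in for the plan's RLE representation): integrating a 
piecewise-constant allocation by walking only the RLE change points is 
O(number of changes), whereas stepping second by second is O(horizon length).

{code}
import java.util.TreeMap;

// Hedged sketch of interval math over an RLE-encoded allocation; the map and
// values are illustrative only.
public class RleIntervalMathSketch {
  public static void main(String[] args) {
    // Piecewise-constant allocation encoded as change points (time -> capacity).
    TreeMap<Long, Long> rle = new TreeMap<>();
    rle.put(0L, 100L);      // 100 units starting at t=0
    rle.put(3600L, 250L);   // jumps to 250 after one hour
    rle.put(7200L, 0L);     // drops to 0 after two hours
    long horizon = 30L * 24 * 3600;  // a month-long horizon in seconds

    // Integrate between consecutive change points instead of every second.
    long integral = 0;
    Long prev = null;
    for (Long t : rle.keySet()) {
      if (prev != null) {
        integral += rle.get(prev) * (t - prev);
      }
      prev = t;
    }
    integral += rle.get(prev) * (horizon - prev);  // tail up to the horizon
    System.out.println(integral);  // same value a per-second loop would produce
  }
}
{code}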



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309069#comment-15309069
 ] 

Hadoop QA commented on YARN-5164:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-5164 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807303/YARN-5164-inclusive.4.patch
 |
| JIRA Issue | YARN-5164 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11780/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164-inclusive.4.patch, 
> YARN-5164.1.patch, YARN-5164.2.patch
>
>
> As a consequence, checks over small time granularities (e.g., 1 sec) and a 
> long time horizon for a reservation (e.g., months) run rather slowly (10 sec).
> The proposed resolution is to switch to interval math in the check, similar 
> to what YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats

2016-05-31 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309062#comment-15309062
 ] 

Daniel Templeton commented on YARN-4308:


[~sunilg], I don't think just fixing the existing tests to not break with the 
change is enough.  I would prefer it if we could test both what happens when 
the {{cpuUsagePercent}} is 0 and when it's -1.  Right now we're not testing 
that the -1's are skipped correctly.

> ContainersAggregated CPU resource utilization reports negative usage in first 
> few heartbeats
> 
>
> Key: YARN-4308
> URL: https://issues.apache.org/jira/browse/YARN-4308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4308.patch, 0002-YARN-4308.patch, 
> 0003-YARN-4308.patch, 0004-YARN-4308.patch
>
>
> NodeManager reports the ContainersAggregated CPU resource utilization as a 
> negative value in the first few heartbeat cycles. I added a new debug print 
> and received the values below from heartbeats.
> {noformat}
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource
>  Utilization :  CpuTrackerUsagePercent : 198.94598
> {noformat}
> It is better to send 0 as the CPU usage rather than sending negative values 
> in heartbeats, even though this happens only in the first few heartbeats.
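
A minimal sketch of the guard this description asks for (the class, the 
clampUnavailable helper, and the sample values are illustrative, not the 
actual ContainersMonitorImpl API): treat the tracker's -1 "not yet available" 
reading as 0 when aggregating.

{code}
// Hedged sketch, not the real ContainersMonitorImpl code: clamp the -1
// "no samples yet" reading to 0 before adding it to the aggregate.
public class CpuUsageClampSketch {
  static float clampUnavailable(float cpuUsagePercent) {
    return cpuUsagePercent < 0 ? 0f : cpuUsagePercent;
  }

  public static void main(String[] args) {
    // The first heartbeats report -1.0 until the tracker has enough samples.
    float[] heartbeatReadings = {-1.0f, -1.0f, 198.94598f};
    float aggregate = 0f;
    for (float reading : heartbeatReadings) {
      aggregate += clampUnavailable(reading);  // negatives contribute 0
    }
    System.out.println(aggregate);  // 198.94598 instead of 196.94598
  }
}
{code}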



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5164:
---
Attachment: YARN-5164-inclusive.4.patch

This patch includes YARN-5165, YARN-5185, and YARN-4525; I will upload a 
stripped-down version as the other patches are committed.

> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164-inclusive.4.patch, 
> YARN-5164.1.patch, YARN-5164.2.patch
>
>
> As a consequence, checks over small time granularities (e.g., 1 sec) and a 
> long time horizon for a reservation (e.g., months) run rather slowly (10 sec).
> The proposed resolution is to switch to interval math in the check, similar 
> to what YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case

2016-05-31 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309051#comment-15309051
 ] 

Carlo Curino commented on YARN-5185:


Bug triggered as part of tests of YARN-5165/YARN-5164.

> StageAllocaterGreedyRLE: NPE in corner case 
> 
>
> Key: YARN-5185
> URL: https://issues.apache.org/jira/browse/YARN-5185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5185.1.patch
>
>
> If the plan has only one interval and the reservation exactly overlaps it, 
> partialMap.higherKey() will return null, and we should guard against that.
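
To illustrate the corner case, here is a standalone sketch (not the 
StageAllocatorGreedyRLE code; the map contents and the fallback are made up): 
NavigableMap.higherKey() returns null when the plan holds a single interval 
and the lookup key is at or beyond its last change point, so the caller needs 
a null guard.

{code}
import java.util.NavigableMap;
import java.util.TreeMap;

// Hedged sketch of the null guard, not the actual allocator code.
public class HigherKeyGuardSketch {
  public static void main(String[] args) {
    NavigableMap<Long, Integer> partialMap = new TreeMap<>();
    partialMap.put(0L, 10);  // a plan with a single interval starting at t=0

    long reservationEnd = 0L;  // the reservation exactly overlaps that interval
    Long next = partialMap.higherKey(reservationEnd);  // null: no key above t=0
    long stop = (next != null) ? next : reservationEnd;  // guard against null
    System.out.println(stop);
  }
}
{code}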



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-05-31 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309040#comment-15309040
 ] 

Carlo Curino commented on YARN-4525:


It is hard to construct a simple test (it depends on various subMap 
invocations, I guess). But as part of YARN-5164 I have a test 
{{TestCapacityOvertimePolicy.testFailAvgBySum()}} that positively triggers 
this bug.

> Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
> ---
>
> Key: YARN-4525
> URL: https://issues.apache.org/jira/browse/YARN-4525
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ishai Menache
>Assignee: Ishai Menache
> Attachments: YARN-4525.patch
>
>
> One of our tests detected a corner case in getRangeOverlapping: When the 
> RLESparseResourceAllocation object is a result of a merge operation, the 
> underlying map is a "view" within some range. If  'end' is outside that 
> range, headMap(..) throws an uncaught exception.
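
For reference, a self-contained sketch of the behavior described above (not 
the RLESparseResourceAllocation code; the map values and the clamping guard 
are illustrative): headMap() on a restricted subMap "view" throws 
IllegalArgumentException when the bound falls outside the view's range, so 
'end' has to be clamped or the exception handled.

{code}
import java.util.NavigableMap;
import java.util.TreeMap;

// Hedged sketch of the subMap-view corner case, not the actual RLE code.
public class SubMapViewRangeSketch {
  public static void main(String[] args) {
    NavigableMap<Long, Long> full = new TreeMap<>();
    full.put(0L, 1L);
    full.put(10L, 2L);
    full.put(20L, 3L);

    // A "view" limited to [0, 10], similar to what a merge operation can return.
    NavigableMap<Long, Long> view = full.subMap(0L, true, 10L, true);

    long end = 15L;  // outside the view's range
    try {
      view.headMap(end, true);  // throws: toKey out of range for this view
    } catch (IllegalArgumentException e) {
      // One possible guard: clamp 'end' to the view's upper bound first.
      long clamped = Math.min(end, view.lastKey());
      System.out.println(view.headMap(clamped, true));  // {0=1, 10=2}
    }
  }
}
{code}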



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5186) Adding config to enable/disable completed reservations in queue metrics

2016-05-31 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-5186:
--

 Summary: Adding config to enable/disable completed reservations in 
queue metrics
 Key: YARN-5186
 URL: https://issues.apache.org/jira/browse/YARN-5186
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Giovanni Matteo Fumarola
Priority: Minor


After some stress testing on reservations, we found out that YARN queue 
metrics show up in JMX for both finished and active reservations. This causes 
JMX to accumulate queue metrics for all finished reservations and become 
inefficient to use.
We want to introduce a config parameter to enable/disable reporting of 
finished reservations in JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309016#comment-15309016
 ] 

Hadoop QA commented on YARN-4887:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807294/YARN-4887-v4.patch |
| JIRA Issue | YARN-4887 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 1d3c8d51593a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 29d6cad |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11778/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build

[jira] [Updated] (YARN-5167) Escaping occurences of encodedValues

2016-05-31 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5167:
--
Labels: yarn-2928-1st-milestone  (was: )

> Escaping occurences of encodedValues
> 
>
> Key: YARN-5167
> URL: https://issues.apache.org/jira/browse/YARN-5167
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Sangjin Lee
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> We had earlier decided to punt on this, but in discussing YARN-5109 we 
> thought it would be best to just be safe rather than sorry later on.
> Encoded sequences can occur in the original string, especially in the case of 
> a "foreign key" if we decide to have lookups.
> For example, space is encoded as %2$.
> Encoding "String with %2$ in it" and then decoding it would yield "String 
> with   in it".
> We thought we should first escape existing occurrences of encoded strings by 
> prefixing a backslash (even if there is already a backslash, that should be 
> ok). Then we should replace all unencoded separators with their encoded form.
> On the way out, we should replace all occurrences of our encoded string with 
> the original, except when it is prefixed by an escape character. Lastly, we 
> should strip off the one additional backslash in front of each remaining 
> (escaped) sequence.
> The following entry, added to TestSeparator#testEncodeDecode(), demonstrates 
> what this jira should accomplish:
> {code}
> testEncodeDecode("Double-escape %2$ and %3$ or \\%2$ or \\%3$, nor  
> %2$ = no problem!", Separator.QUALIFIERS,
> Separator.VALUES, Separator.SPACE, Separator.TAB);
> {code}
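
As an illustration of the round-trip described above, here is a minimal 
single-separator Java sketch. Assumptions: the space separator, the "%2$" 
encoding, and backslash as the escape character; it is not the actual 
Separator implementation and it ignores the multi-separator case exercised by 
the test entry.

{code}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hedged single-separator sketch of escape/encode/decode, not Separator itself.
public class SeparatorEscapeSketch {
  private static final String SEPARATOR = " ";            // raw separator (space)
  private static final String ENCODED = "%2$";            // assumed encoding of space
  private static final String ESCAPED = "\\" + ENCODED;   // "\%2$"

  // Encode: first escape pre-existing "%2$" sequences, then encode the separator.
  static String encode(String s) {
    return s.replace(ENCODED, ESCAPED).replace(SEPARATOR, ENCODED);
  }

  // Decode: turn unescaped "%2$" back into the separator, then drop the escape.
  static String decode(String s) {
    String decoded = s.replaceAll(
        "(?<!\\\\)" + Pattern.quote(ENCODED),              // "%2$" not preceded by "\"
        Matcher.quoteReplacement(SEPARATOR));
    return decoded.replace(ESCAPED, ENCODED);
  }

  public static void main(String[] args) {
    String original = "String with %2$ in it";
    System.out.println(original.equals(decode(encode(original))));  // true
  }
}
{code}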



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-4525:
---
Flags: Patch

> Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
> ---
>
> Key: YARN-4525
> URL: https://issues.apache.org/jira/browse/YARN-4525
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ishai Menache
>Assignee: Ishai Menache
> Attachments: YARN-4525.patch
>
>
> One of our tests detected a corner case in getRangeOverlapping: When the 
> RLESparseResourceAllocation object is a result of a merge operation, the 
> underlying map is a "view" within some range. If  'end' is outside that 
> range, headMap(..) throws an uncaught exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-4525:
---
Issue Type: Sub-task  (was: Bug)
Parent: YARN-2572

> Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
> ---
>
> Key: YARN-4525
> URL: https://issues.apache.org/jira/browse/YARN-4525
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ishai Menache
>Assignee: Ishai Menache
> Attachments: YARN-4525.patch
>
>
> One of our tests detected a corner case in getRangeOverlapping: When the 
> RLESparseResourceAllocation object is a result of a merge operation, the 
> underlying map is a "view" within some range. If  'end' is outside that 
> range, headMap(..) throws an uncaught exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-31 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4887:
-
Attachment: YARN-4887-v4.patch

Uploading new patch (v4) that addresses the interface stability concerns of 
YARN APIs brought up in the [mail 
thread|http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201605.mbox/%3CCADbBEnv%3DyfmqfSgPH1V14A86mhcGnN2Na9UbRaVFSjw5HEzLFg%40mail.gmail.com%3E]

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch, 
> YARN-4887-v3.patch, YARN-4887-v4.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-31 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308966#comment-15308966
 ] 

Xuan Gong commented on YARN-5088:
-

Thanks for the review. [~vvasudev]

bq. It looks like there's a bunch of common code between readContainerLogs, 
readContainerLogsForALogType, AHSWebServices::StreamingOutput::write, In 
NMWebServices::StreamingOutput::write - can we re-factor the existing and clean 
it up?

Yes, I realized that, but I would like to fix them separately. We have already 
filed a separate ticket, https://issues.apache.org/jira/browse/YARN-4993, for 
the refactoring work.

Attached a new patch to address other comments.

> Improve "yarn log" command-line to read the last K bytes for the log files
> --
>
> Key: YARN-5088
> URL: https://issues.apache.org/jira/browse/YARN-5088
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5088.1.patch, YARN-5088.2.patch, YARN-5088.3.patch, 
> YARN-5088.4.patch, YARN-5088.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1815) Work preserving recovery of Unmanged AMs

2016-05-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308937#comment-15308937
 ] 

Subru Krishnan commented on YARN-1815:
--

The test failures seem unrelated; I verified that all of them pass locally.

> Work preserving recovery of Unmanged AMs
> 
>
> Key: YARN-1815
> URL: https://issues.apache.org/jira/browse/YARN-1815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Subru Krishnan
>Priority: Critical
> Attachments: Unmanaged AM recovery.png, YARN-1815-v3.patch, 
> YARN-1815-v4.patch, yarn-1815-1.patch, yarn-1815-2.patch, yarn-1815-2.patch
>
>
> Currently, work-preserving RM restart recovers unmanaged AMs, but it has a 
> couple of shortcomings: all running containers are killed, and completed 
> unmanaged AMs are also recovered because we do _not_ record the final state 
> for unmanaged AMs in the RM StateStore. This JIRA proposes to address both 
> shortcomings so that work-preserving unmanaged AM recovery works exactly as 
> it does for managed AMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1815) Work preserving recovery of Unmanged AMs

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308877#comment-15308877
 ] 

Hadoop QA commented on YARN-1815:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 334 unchanged - 2 fixed = 334 total (was 336) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 0s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807278/YARN-1815-v4.patch |
| JIRA Issue | YARN-1815 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9912a17564fc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11776/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11776/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11776/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-projec

[jira] [Updated] (YARN-1815) Work preserving recovery of Unmanged AMs

2016-05-31 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-1815:
-
Attachment: YARN-1815-v4.patch

PFA an updated patch (v4) with a fix for *TestRMAppAttemptTransitions*. All 
other test failures are unrelated (tracked as part of HADOOP-12687 & 
YARN-5091); I verified that they pass locally.

> Work preserving recovery of Unmanged AMs
> 
>
> Key: YARN-1815
> URL: https://issues.apache.org/jira/browse/YARN-1815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Subru Krishnan
>Priority: Critical
> Attachments: Unmanaged AM recovery.png, YARN-1815-v3.patch, 
> YARN-1815-v4.patch, yarn-1815-1.patch, yarn-1815-2.patch, yarn-1815-2.patch
>
>
> Currently, work-preserving RM restart recovers unmanaged AMs, but it has a 
> couple of shortcomings: all running containers are killed, and completed 
> unmanaged AMs are also recovered because we do _not_ record the final state 
> for unmanaged AMs in the RM StateStore. This JIRA proposes to address both 
> shortcomings so that work-preserving unmanaged AM recovery works exactly as 
> it does for managed AMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308784#comment-15308784
 ] 

Hadoop QA commented on YARN-5088:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 7 
new + 140 unchanged - 3 fixed = 147 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
51s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 12 new + 4567 unchanged - 0 fixed = 4579 total (was 4567) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 280 unchanged - 0 fixed = 281 total (was 280) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 47s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 10s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 110m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
| Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient

[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308773#comment-15308773
 ] 

Subru Krishnan commented on YARN-4887:
--

As mentioned earlier, the patch has no test case updates as it is limited to 
API changes.

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch, 
> YARN-4887-v3.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308760#comment-15308760
 ] 

Hadoop QA commented on YARN-4887:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807266/YARN-4887-v3.patch |
| JIRA Issue | YARN-4887 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 987d353b2ddf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11775/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Buil

[jira] [Commented] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308759#comment-15308759
 ] 

Karthik Kambatla commented on YARN-5180:


Thanks for working on this, Arun. 

Comments:
# Would ExecutionTypeRequest be easier for end-users to understand than 
ExecutionSpec or ExecutionTypeSpec?
# ResourceRequest: We missed this previously. The annotation for the 
constructor that takes this ExecutionTypeRequest should be Public-Evolving 
until we are sure it is stable. 
# Also, since ResourceRequest is a Public-Stable class, let us mark the two 
abstract methods non-abstract. It is okay to throw 
UnsupportedOperationException if there is no obvious implementation, in order 
to maintain compatibility (see the sketch below). 
# Nit: ResourceRequestPBImpl: two new lines between setPriority and 
getExecutionSpec.
# Now that the default is GUARANTEED and the feature is turned off by default, 
do we still need to specify the ExecutionType in MR? 
# Should we just mark the ExecutionTypeRequest class Public-Evolving and not 
bother marking the methods? I hope it is okay to expose the newInstance method 
as Public-Evolving. 

[~asuresh], [~kkaranasos] - By the way, I had filed YARN-5184 to fix a few of 
the compatibility issues introduced by YARN-2882. Can you help identify other 
such potential incompatibility issues for the OPPORTUNISTIC container APIs? 
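
For illustration, a minimal sketch of the compatibility approach in point 3 
above. This is not the actual patch; the method name and annotation placement 
are assumptions, but it shows the non-abstract-plus-UnsupportedOperationException 
pattern for a Public-Stable class.

{code:title=Illustrative sketch only|borderStyle=solid}
// Assumed imports: InterfaceAudience.Public, InterfaceStability.Stable and
// InterfaceStability.Evolving (Hadoop classification annotations).
@Public
@Stable
public abstract class ResourceRequest {

  // Concrete method instead of a new abstract one, so existing subclasses of
  // this Public-Stable class keep compiling; the PB implementation overrides it.
  @Public
  @Evolving
  public ExecutionTypeRequest getExecutionTypeRequest() {
    throw new UnsupportedOperationException(
        "getExecutionTypeRequest() is not implemented");
  }
}
{code}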

> Add ensureExecutionType boolean flag in ResourceRequest
> ---
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine receiving a Container with a different 
> Execution type than what is asked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-31 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4887:
-
Attachment: YARN-4887-v3.patch

Attaching rebased patch now that YARN-5132 is resolved.

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch, 
> YARN-4887-v3.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308720#comment-15308720
 ] 

Hadoop QA commented on YARN-5185:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 51s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 2s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807226/YARN-5185.1.patch |
| JIRA Issue | YARN-5185 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux d9d576ad976d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/p

[jira] [Commented] (YARN-5165) NoOvercommitPolicy does not take advantage of RLE representation of plan

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308714#comment-15308714
 ] 

Hadoop QA commented on YARN-5165:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 56s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 6m 0s 
{color} | {color:red} root: The patch generated 4 new + 19766 unchanged - 0 
fixed = 19770 total (was 19766) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 101m 29s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 174m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Dead store to p in 
org.apache.hadoop.yarn.server.resourcemanager.reservation.NoOverCommitPolicy.validate(Plan,
 ReservationAllocation)  At 
NoOverCommitPolicy.java:org.apache.hadoop.yarn.server.resourcemanager.reservation.NoOverCommitPolicy.validate(Plan,
 ReservationAllocation)  At NoOverCommitPolicy.java:[line 62] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:

[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-31 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308646#comment-15308646
 ] 

Carlo Curino commented on YARN-5164:


I made this policy extend {{NoOverCommitPolicy}} so that we can reuse the code 
that validates physical cluster constraints and the user match.
Also, {{TestCapacityOvertimePolicy}} has been updated to expect the new, more 
specific exception type coming from the super invocation.
This patch might depend on YARN-5164 and YARN-5185 (I have tested them together 
locally). 
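
For illustration, a rough sketch of the reuse pattern described above (method 
bodies and exact signatures are simplified assumptions, not the actual patch):

{code:title=Illustrative sketch only|borderStyle=solid}
// The subclass reuses the physical-capacity and user checks of
// NoOverCommitPolicy and only adds its own over-time (integral) check on top.
public class CapacityOverTimePolicy extends NoOverCommitPolicy {

  @Override
  public void validate(Plan plan, ReservationAllocation reservation)
      throws PlanningException {
    // reuse the physical cluster constraint and user-match validation
    super.validate(plan, reservation);

    // ... additional capacity-over-time checks specific to this policy ...
  }
}
{code}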


> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164.1.patch, 
> YARN-5164.2.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5164:
---
Attachment: YARN-5164.2.patch

> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164.1.patch, 
> YARN-5164.2.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5174) several updates/corrections to timeline service documentation

2016-05-31 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308634#comment-15308634
 ] 

Sangjin Lee commented on YARN-5174:
---

Slightly broadening the scope of this ticket to include the following items:
- change the left nav
- RM container event publishing should be disabled by default
- default reader/writer implementation
- timeline client registering address
- some documentation on the entity model

> several updates/corrections to timeline service documentation
> -
>
> Key: YARN-5174
> URL: https://issues.apache.org/jira/browse/YARN-5174
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
>
> One part that is missing in the documentation is the need to add 
> {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we 
> need to arrive at the minimally required client setting to connect to the 
> right hbase cluster. Then, we need to document it so that users know exactly 
> what to do to configure the cluster to use the timeline service v.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5174) several updates/corrections to timeline service documentation

2016-05-31 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5174:
--
Summary: several updates/corrections to timeline service documentation  
(was: add documentation on needing to add hbase-site.xml on YARN cluster)

> several updates/corrections to timeline service documentation
> -
>
> Key: YARN-5174
> URL: https://issues.apache.org/jira/browse/YARN-5174
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
>
> One part that is missing in the documentation is the need to add 
> {{hbase-site.xml}} on the client side (the client hadoop cluster). First, we 
> need to arrive at the minimally required client setting to connect to the 
> right hbase cluster. Then, we need to document it so that users know exactly 
> what to do to configure the cluster to use the timeline service v.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5137) Make DiskChecker pluggable

2016-05-31 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang reopened YARN-5137:
--

Reopening after 
[discussion|https://issues.apache.org/jira/browse/YARN-4271?focusedCommentId=15305017&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15305017]
 on YARN-4271. 

> Make DiskChecker pluggable
> --
>
> Key: YARN-5137
> URL: https://issues.apache.org/jira/browse/YARN-5137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
>
> It would be nice to have the option for a DiskChecker that has more 
> sophisticated checking capabilities.  In order to do this, we would first 
> need DiskChecker to be pluggable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4271) Make the NodeManager's health checker service pluggable

2016-05-31 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308623#comment-15308623
 ] 

Ray Chiang commented on YARN-4271:
--

Thanks [~subru] and [~rmohan].  I'll reopen YARN-5137 and we can continue the 
DiskChecker modifications there.

> Make the NodeManager's health checker service pluggable
> ---
>
> Key: YARN-4271
> URL: https://issues.apache.org/jira/browse/YARN-4271
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Subru Krishnan
>Assignee: Raghav Mohan
>Priority: Minor
>
> This JIRA proposes making the NodeHealthCheckerService in the NM pluggable as 
> in cloud environments like Azure we want to tap into the provided health 
> checkers for disk and other service signal statuses. The idea is to extend 
> the existing NodeHealthCheckerService and hook in custom implementation to 
> evaluate if the node is healthy or not.
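
Purely as an illustration of the "pluggable" idea described above (none of 
these names exist in YARN; they are invented for this sketch):

{code:title=Illustrative sketch only|borderStyle=solid}
// Hypothetical plug-in point: a small interface plus a cloud-specific
// implementation that taps into an externally provided health signal.
public interface PluggableNodeHealthChecker {
  boolean isHealthy();
  String getHealthReport();
}

public class CloudProvidedHealthChecker implements PluggableNodeHealthChecker {
  @Override
  public boolean isHealthy() {
    // e.g. consult the cloud platform's disk/service status endpoint
    return true;
  }

  @Override
  public String getHealthReport() {
    return "";
  }
}
{code}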



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5088:

Attachment: YARN-5088.5.patch

> Improve "yarn log" command-line to read the last K bytes for the log files
> --
>
> Key: YARN-5088
> URL: https://issues.apache.org/jira/browse/YARN-5088
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5088.1.patch, YARN-5088.2.patch, YARN-5088.3.patch, 
> YARN-5088.4.patch, YARN-5088.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-31 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308529#comment-15308529
 ] 

Sangjin Lee commented on YARN-5105:
---

Committed the patch (v.05). Thanks [~varun_saxena] for your contribution, and 
[~jrottinghuis] for your review!

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch, YARN-5105-YARN-2928.05.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has a potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}
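
For context, a generic illustration (not the committed fix) of collapsing a 
stored time series to its latest data point before it is returned:

{code:title=Illustrative sketch only|borderStyle=solid}
// Uses java.util.NavigableMap/TreeMap; values are taken from the example above.
// This only illustrates "return the latest value instead of the whole series".
NavigableMap<Long, Number> series = new TreeMap<>();
series.put(1463518170347L, 407539712L);
series.put(1463518173363L, 407539712L);

Map.Entry<Long, Number> latest = series.lastEntry();
// latest.getKey()   -> 1463518173363
// latest.getValue() -> 407539712
{code}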



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5098) Yarn Application log Aggreagation fails due to NM can not get correct HDFS delegation token

2016-05-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308513#comment-15308513
 ] 

Jian He commented on YARN-5098:
---

TestClientToAMTokens is tracked by YARN-5163.
The other test failures are related to HADOOP-12687 and pass locally.

> Yarn Application log Aggreagation fails due to NM can not get correct HDFS 
> delegation token
> ---
>
> Key: YARN-5098
> URL: https://issues.apache.org/jira/browse/YARN-5098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-5098.1.patch, YARN-5098.1.patch
>
>
> Environment : HA cluster
> Yarn application logs for long running application could not be gathered 
> because Nodemanager failed to talk to HDFS with below error.
> {code}
> 2016-05-16 18:18:28,533 INFO  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just 
> finished : application_1463170334122_0002
> 2016-05-16 18:18:28,545 WARN  ipc.Client (Client.java:run(705)) - Exception 
> encountered while connecting to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 171 for hrt_qa) can't be found in cache
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:583)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:398)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:752)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:748)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:747)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:398)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1597)
> at org.apache.hadoop.ipc.Client.call(Client.java:1439)
> at org.apache.hadoop.ipc.Client.call(Client.java:1386)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:240)
> at com.sun.proxy.$Proxy83.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
> at com.sun.proxy.$Proxy84.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:1018)
> at org.apache.hadoop.fs.Hdfs.getServerDefaults(Hdfs.java:156)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:550)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:687)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5132) Exclude generated protobuf sources from YARN Javadoc build

2016-05-31 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5132:
-
Target Version/s:   (was: 2.8.1)

> Exclude generated protobuf sources from YARN Javadoc build
> --
>
> Key: YARN-5132
> URL: https://issues.apache.org/jira/browse/YARN-5132
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-5132-v1.patch
>
>
> Currently YARN build includes Javadoc from generated protobuf sources which 
> is causing CI to fail. This JIRA proposes to exclude generated protobuf 
> sources from YARN Javadoc build



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5132) Exclude generated protobuf sources from YARN Javadoc build

2016-05-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308435#comment-15308435
 ] 

Subru Krishnan commented on YARN-5132:
--

Thanks [~chris.douglas] for reviewing and [~kasha] for reviewing/committing!

> Exclude generated protobuf sources from YARN Javadoc build
> --
>
> Key: YARN-5132
> URL: https://issues.apache.org/jira/browse/YARN-5132
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-5132-v1.patch
>
>
> Currently YARN build includes Javadoc from generated protobuf sources which 
> is causing CI to fail. This JIRA proposes to exclude generated protobuf 
> sources from YARN Javadoc build



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5165) NoOvercommitPolicy does not take advantage of RLE representation of plan

2016-05-31 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308348#comment-15308348
 ] 

Carlo Curino commented on YARN-5165:


The fix in {{TestSimpleCapacityReplanner}} is needed because the previous 
policy relied on the {{DefaultResourceCalculator}} (which ignores {{vcores}} in 
its calculations), so we had missed a misconfiguration of the {{vcores}} in the 
initial plan; the test should have failed. The new code proposed in this patch 
caught the issue, hence the test fix.
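
For context, a generic illustration (not taken from the patch) of why the 
calculator choice matters; the use of {{DominantResourceCalculator}} here is 
only for contrast, not a claim about what the patch does:

{code:title=Illustrative sketch only|borderStyle=solid}
// Assumed imports: org.apache.hadoop.yarn.api.records.Resource,
// org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator and
// org.apache.hadoop.yarn.util.resource.DominantResourceCalculator.
Resource cluster = Resource.newInstance(10240, 8);  // 10 GB, 8 vcores
Resource planned = Resource.newInstance(1024, 100); // vcores misconfigured
Resource limit   = Resource.newInstance(2048, 4);

// Memory-only comparison: 1024 MB <= 2048 MB, the bogus vcore value is ignored.
int a = new DefaultResourceCalculator().compare(cluster, planned, limit);  // <= 0

// A vcore-aware comparison flags it: 100 vcores dominates the 4 available.
int b = new DominantResourceCalculator().compare(cluster, planned, limit); // > 0
{code}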

> NoOvercommitPolicy does not take advantage of RLE representation of plan
> 
>
> Key: YARN-5165
> URL: https://issues.apache.org/jira/browse/YARN-5165
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5165.1.patch, YARN-5165.2.patch, YARN-5165.3.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5165) NoOvercommitPolicy does not take advantage of RLE representation of plan

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5165:
---
Attachment: YARN-5165.3.patch

> NoOvercommitPolicy does not take advantage of RLE representation of plan
> 
>
> Key: YARN-5165
> URL: https://issues.apache.org/jira/browse/YARN-5165
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5165.1.patch, YARN-5165.2.patch, YARN-5165.3.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308342#comment-15308342
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
6s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} YARN-2928 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in YARN-2928 has 30 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 7 new + 17 unchanged - 7 fixed = 24 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 24s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 34s {color} 
| {color:red} hadoop-yarn-server-timelineservice in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 42s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in

[jira] [Updated] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5185:
---
Attachment: YARN-5185.1.patch

> StageAllocaterGreedyRLE: NPE in corner case 
> 
>
> Key: YARN-5185
> URL: https://issues.apache.org/jira/browse/YARN-5185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5185.1.patch
>
>
> If the plan has only one interval, and the reservation exactly overlap we 
> will have a null from partialMap.higherKey() that we should guard against.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5185:
---
Description: If the plan has only one interval and the reservation exactly 
overlaps it, we will get a null from partialMap.higherKey(), which we should 
guard against.
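
A minimal sketch of the corner case and the guard (variable names are 
assumptions for illustration, not the actual code):

{code:title=Illustrative sketch only|borderStyle=solid}
// Uses java.util.NavigableMap/TreeMap. With a single interval in the plan and
// a reservation that ends exactly at that interval's key, higherKey() is null.
NavigableMap<Long, Long> partialMap = new TreeMap<>();
partialMap.put(100L, 50L);                  // the only point in the RLE map

long stageDeadline = 100L;
Long nextKey = partialMap.higherKey(stageDeadline);  // null in this corner case

// Guard before dereferencing, e.g. fall back to the stage deadline itself.
long end = (nextKey != null) ? nextKey : stageDeadline;
{code}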

> StageAllocaterGreedyRLE: NPE in corner case 
> 
>
> Key: YARN-5185
> URL: https://issues.apache.org/jira/browse/YARN-5185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5185.1.patch
>
>
> If the plan has only one interval, and the reservation exactly overlap we 
> will have a null from partialMap.higherKey() that we should guard against.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case

2016-05-31 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-5185:
--

 Summary: StageAllocaterGreedyRLE: NPE in corner case 
 Key: YARN-5185
 URL: https://issues.apache.org/jira/browse/YARN-5185
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5185) StageAllocaterGreedyRLE: NPE in corner case

2016-05-31 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-5185:
--

Assignee: Carlo Curino

> StageAllocaterGreedyRLE: NPE in corner case 
> 
>
> Key: YARN-5185
> URL: https://issues.apache.org/jira/browse/YARN-5185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308318#comment-15308318
 ] 

Konstantinos Karanasos commented on YARN-5180:
--

Thank you for the patch, [~asuresh].

Looks good to me. 

Just two minor points:
# Let's rename {{ExecutionSpec}} to {{ExecutionTypeSpec}} to make its 
functionality clearer.
# Add an {{ExecutionSpec::newInstance()}} method so that we can use it in the 
{{ResourceRequest}} constructor (see the sketch below).
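
A quick sketch of what such a factory could look like (class, method, and field 
names are assumptions based on the discussion above, not the committed API):

{code:title=Illustrative sketch only|borderStyle=solid}
// Hypothetical record following the usual YARN newInstance() convention via
// org.apache.hadoop.yarn.util.Records; the class name stands in for whatever
// the final ExecutionSpec/ExecutionTypeSpec/ExecutionTypeRequest name becomes.
@Public
@Evolving
public abstract class ExecutionTypeRequest {

  public static ExecutionTypeRequest newInstance(
      ExecutionType executionType, boolean ensureExecutionType) {
    ExecutionTypeRequest request =
        Records.newRecord(ExecutionTypeRequest.class);
    request.setExecutionType(executionType);
    request.setEnsureExecutionType(ensureExecutionType);
    return request;
  }

  public abstract ExecutionType getExecutionType();
  public abstract void setExecutionType(ExecutionType executionType);
  public abstract boolean getEnsureExecutionType();
  public abstract void setEnsureExecutionType(boolean ensureExecutionType);
}
{code}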


> Add ensureExecutionType boolean flag in ResourceRequest
> ---
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine receiving a Container with a different 
> Execution type than what is asked.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-31 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308298#comment-15308298
 ] 

Sangjin Lee commented on YARN-5105:
---

The latest patch LGTM. I'll commit it once the jenkins comes back clean.

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch, YARN-5105-YARN-2928.05.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has a potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308283#comment-15308283
 ] 

Hadoop QA commented on YARN-4757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 47s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
31s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 29s {color} 
| {color:red} root generated 1 new + 696 unchanged - 0 fixed = 697 total (was 
696) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 24s 
{color} | {color:red} root: The patch generated 1 new + 2 unchanged - 46 fixed 
= 3 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 12 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
|  |  Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)  
At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)  
At RegistryDNS.java:[line 724] |
|  |  Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable)

[jira] [Comment Edited] (YARN-5156) YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state

2016-05-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308259#comment-15308259
 ] 

Varun Saxena edited comment on YARN-5156 at 5/31/16 6:13 PM:
-

Looked at the code. NMTimelinePublisher publishes the YARN_CONTAINER_FINISHED 
event on ApplicationContainerFinishedEvent.
And this event is posted from ContainerImpl.

The issue here seems to be that we clone the container status and post an 
ApplicationContainerFinishedEvent before the transition has completed and the 
container state has been set to DONE. As a result the container state is 
reported as RUNNING. ContainerImpl#sendFinishedEvents, which posts the 
ApplicationContainerFinishedEvent, is called only from transitions that end 
with the state changing to DONE. So in 
NMTimelinePublisher#publishContainerFinishedEvent we can simply set 
STATE_EVENT_INFO to DONE.
Or, since a container finished event always leads to a state of DONE, we may 
not need to send STATE_EVENT_INFO at all. Thoughts?

{code:title=ContainerImpl.java|borderStyle=solid}
  @SuppressWarnings("unchecked")
  private void sendFinishedEvents() {
// Inform the application
@SuppressWarnings("rawtypes")
EventHandler eventHandler = dispatcher.getEventHandler();

ContainerStatus containerStatus = cloneAndGetContainerStatus();
eventHandler.handle(new ApplicationContainerFinishedEvent(containerStatus));

// Remove the container from the resource-monitor
eventHandler.handle(new ContainerStopMonitoringEvent(containerId));
// Tell the logService too
eventHandler.handle(new LogHandlerContainerFinishedEvent(
  containerId, exitCode));
  }
{code}
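
For illustration, the first option could look roughly like this on the 
NMTimelinePublisher side (a sketch only; constant and method names are 
approximations, not an actual diff):

{code:title=NMTimelinePublisher.java (illustrative sketch only)|borderStyle=solid}
private void publishContainerFinishedEvent(ContainerStatus status,
    long timestamp) {
  TimelineEvent event = new TimelineEvent();
  event.setId(ContainerMetricsConstants.FINISHED_EVENT_TYPE);
  event.setTimestamp(timestamp);
  event.addInfo(ContainerMetricsConstants.EXIT_STATUS_EVENT_INFO,
      status.getExitStatus());
  // sendFinishedEvents() is only called from transitions that end in DONE, so
  // hard-code the terminal state instead of copying status.getState(), which
  // is still RUNNING at this point.
  event.addInfo(ContainerMetricsConstants.STATE_EVENT_INFO, "DONE");
  event.addInfo(ContainerMetricsConstants.DIAGNOSTICS_INFO_EVENT_INFO,
      status.getDiagnostics());
  // ... attach the event to the container entity and dispatch as before ...
}
{code}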

Naga, will you be handling this?


was (Author: varun_saxena):
Looked at the code. NMTimelinePublisher publishes the YARN_CONTAINER_FINISHED 
event on ApplicationContainerFinishedEvent.
And this event is posted from ContainerImpl.

The issue here seems to be that we are cloning the container status and posting 
a ApplicationContainerFinishedEvent before the transition has completed and 
container state has been set to DONE. This means the container state is 
reported as RUNNING. ContainerImpl#sendFinishedEvents which posts a 
ApplicationContainerFinishedEvent is called from all those transitions which 
would lead the state to be changed to DONE. So in 
NMTimelinePublisher#publishContainerFinishedEvent we can simply set 
STATE_EVENT_INFO as DONE.
Or when we know that container finished event would lead to a state of DONE, no 
need to send STATE_EVENT_INFO at all. Thoughts ?

{code:title=ContainerImpl.java|borderStyle=solid}
  @SuppressWarnings("unchecked")
  private void sendFinishedEvents() {
// Inform the application
@SuppressWarnings("rawtypes")
EventHandler eventHandler = dispatcher.getEventHandler();

ContainerStatus containerStatus = cloneAndGetContainerStatus();
eventHandler.handle(new ApplicationContainerFinishedEvent(containerStatus));

// Remove the container from the resource-monitor
eventHandler.handle(new ContainerStopMonitoringEvent(containerId));
// Tell the logService too
eventHandler.handle(new LogHandlerContainerFinishedEvent(
  containerId, exitCode));
  }
{code}

Naga, you will be handling this ?

> YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
> -
>
> Key: YARN-5156
> URL: https://issues.apache.org/jira/browse/YARN-5156
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>
> When a container finishes, we're reporting "YARN_CONTAINER_STATE": "RUNNING". Did 
> we design this deliberately, or is it a bug? 
> {code}
> {
> metrics: [ ],
> events: [
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: 1464213765890,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: 1464213761133,
> info: { }
> },
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: 1464213761132,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: 1464213761132,
> info: { }
> }
> ],
> id: "container_e15_1464213707405_0001_01_18",
> type: "YARN_CONTAINER",
> createdtime: 1464213761132,
> info: {
> YARN_CONTAINER_ALLOCATED_PRIORITY: "20",
> YARN_CONTAINER_ALLOCATED_VCORE: 1,
> YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS: "10.22.16.164:0",
> UID: 
> "yarn_cluster!application_1464213707405_0001!YARN_CONTAINER!container_e15_1464213707405_0001_01_18",
> YARN_CONTAINER_ALLOCATED_HOST: "10.22.16.164",
> YARN_CONTAINER_ALLOCATED_MEMORY: 1024,
> SYSTEM_INFO_PARENT_ENTITY: {
> type: "YARN_APPLICATION_ATTEMPT",
> id: "appattempt_1464213707405_0001_01"
> },
> YARN_CONTAIN

[jira] [Commented] (YARN-5156) YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state

2016-05-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308259#comment-15308259
 ] 

Varun Saxena commented on YARN-5156:


Looked at the code. NMTimelinePublisher publishes the YARN_CONTAINER_FINISHED 
event on ApplicationContainerFinishedEvent, and that event is posted from 
ContainerImpl.

The issue here seems to be that we are cloning the container status and posting 
an ApplicationContainerFinishedEvent before the transition has completed and the 
container state has been set to DONE, so the container state is reported as 
RUNNING. ContainerImpl#sendFinishedEvents, which posts the 
ApplicationContainerFinishedEvent, is only called from transitions that change 
the state to DONE. So in NMTimelinePublisher#publishContainerFinishedEvent we 
can simply set STATE_EVENT_INFO to DONE.
Or, since the container finished event always leads to a state of DONE, there is 
no need to send STATE_EVENT_INFO at all. Thoughts?

{code:title=ContainerImpl.java|borderStyle=solid}
  @SuppressWarnings("unchecked")
  private void sendFinishedEvents() {
// Inform the application
@SuppressWarnings("rawtypes")
EventHandler eventHandler = dispatcher.getEventHandler();

ContainerStatus containerStatus = cloneAndGetContainerStatus();
eventHandler.handle(new ApplicationContainerFinishedEvent(containerStatus));

// Remove the container from the resource-monitor
eventHandler.handle(new ContainerStopMonitoringEvent(containerId));
// Tell the logService too
eventHandler.handle(new LogHandlerContainerFinishedEvent(
  containerId, exitCode));
  }
{code}

Naga, will you be handling this?

> YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
> -
>
> Key: YARN-5156
> URL: https://issues.apache.org/jira/browse/YARN-5156
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>
> When a container finishes, we're reporting "YARN_CONTAINER_STATE": "RUNNING". Did 
> we design this deliberately, or is it a bug? 
> {code}
> {
> metrics: [ ],
> events: [
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: 1464213765890,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: 1464213761133,
> info: { }
> },
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: 1464213761132,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: 1464213761132,
> info: { }
> }
> ],
> id: "container_e15_1464213707405_0001_01_18",
> type: "YARN_CONTAINER",
> createdtime: 1464213761132,
> info: {
> YARN_CONTAINER_ALLOCATED_PRIORITY: "20",
> YARN_CONTAINER_ALLOCATED_VCORE: 1,
> YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS: "10.22.16.164:0",
> UID: 
> "yarn_cluster!application_1464213707405_0001!YARN_CONTAINER!container_e15_1464213707405_0001_01_18",
> YARN_CONTAINER_ALLOCATED_HOST: "10.22.16.164",
> YARN_CONTAINER_ALLOCATED_MEMORY: 1024,
> SYSTEM_INFO_PARENT_ENTITY: {
> type: "YARN_APPLICATION_ATTEMPT",
> id: "appattempt_1464213707405_0001_01"
> },
> YARN_CONTAINER_ALLOCATED_PORT: 64694
> },
> configs: { },
> isrelatedto: { },
> relatesto: { }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4830) Add support for resource types in the nodemanager

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308258#comment-15308258
 ] 

Hadoop QA commented on YARN-4830:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
5s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s 
{color} | {color:green} YARN-3926 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s 
{color} | {color:green} YARN-3926 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
58s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} YARN-3926 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 14s 
{color} | {color:green} YARN-3926 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 269 unchanged - 2 fixed = 271 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 59s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{c

[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308238#comment-15308238
 ] 

Karthik Kambatla commented on YARN-5077:


My bad again. I should probably take some time off. {{maxShare}} will be 
Integer.MAX_VALUE, but that is also an issue. What happens if the cluster 
resources are smaller than maxShare? Wouldn't we run into the same livelock 
issue maxAMShare was meant to solve? 

Given the number of issues surrounding this code, I wonder if there is a 
fundamental issue here that needs a more comprehensive look. 

> Fix FSLeafQueue#getFairShare() for queues with weight 0.0
> -
>
> Key: YARN-5077
> URL: https://issues.apache.org/jira/browse/YARN-5077
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5077.001.patch, YARN-5077.002.patch, 
> YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, 
> YARN-5077.006.patch
>
>
> 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns 
>  
> 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns 
> 
> In case 1), that means no container ever gets allocated for an AM because 
> from the viewpoint of the RM, there is never any headroom to allocate a 
> container on that queue.
> For example, we have a pool with the following weights: 
> - root.dev 0.0 
> - root.product 1.0
> root.dev is a best-effort pool and should only get resources if 
> root.product is not running. In our tests, with no jobs running under 
> root.product, jobs started in the root.dev queue stay stuck in the ACCEPTED 
> state and never start.
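To spell out why case 1) starves the queue, a minimal sketch with hypothetical numbers (this is not the FairScheduler code): the instantaneous share is proportional to the weight, so a 0.0-weight queue computes to a zero share and therefore zero headroom for even an AM container.

{code:title=FairShareSketch.java|borderStyle=solid}
// Hedged illustration only -- hypothetical numbers, not the FairScheduler code.
public class FairShareSketch {
  public static void main(String[] args) {
    long clusterMemoryMb = 8192;        // assumed cluster capacity
    double[] weights = {0.0, 1.0};      // root.dev, root.product
    double totalWeight = 0.0;
    for (double w : weights) {
      totalWeight += w;
    }
    for (int i = 0; i < weights.length; i++) {
      long share = (long) (clusterMemoryMb * weights[i] / totalWeight);
      System.out.println("queue " + i + " share = " + share + " MB");
    }
    // Queue 0 computes to 0 MB, so from the RM's point of view there is never
    // headroom to launch even an AM container in that queue.
  }
}
{code}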



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308221#comment-15308221
 ] 

Hadoop QA commented on YARN-5180:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 21s 
{color} | {color:red} root: The patch generated 1 new + 107 unchanged - 0 fixed 
= 108 total (was 107) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 8s {color} | 
{color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 51s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 10s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 5s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 32s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.api.TestPBImplRecords |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.cli

[jira] [Updated] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-05-31 Thread Jonathan Maron (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Maron updated YARN-4757:
-
Attachment: YARN-4757-YARN-4757.003.patch

> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- 
> Simplified discovery of services via DNS mechanisms.pdf, 
> YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch, 
> YARN-4757-YARN-4757.003.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service registry (YARN-913), 
> we also need to simplify access to the registry entries. The existing 
> read mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (Java) API and a REST interface. In practice, this makes it 
> very difficult to wire up existing clients and services. For example, dynamic 
> configuration of dependent endpoints of a service is not easy to implement 
> using the present registry-read mechanisms *without* code changes to 
> existing services.
> A good solution to this is to expose the registry information through a more 
> generic and widely used discovery mechanism: DNS. Service discovery via DNS 
> uses the well-known DNS interfaces to browse the network for services. 
> YARN-913 in fact talked about such a DNS-based mechanism but left it as a 
> future task. Having the registry information exposed via DNS 
> simplifies the life of services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-31 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5105:
---
Attachment: YARN-5105-YARN-2928.05.patch

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch, YARN-5105-YARN-2928.05.patch
>
>
> I see that the entire time series of the CPU and memory metrics is returned 
> for the YARN containers REST query. This has the potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308109#comment-15308109
 ] 

Yufei Gu commented on YARN-5077:


[~kasha], thanks for the review.

IIUC, if a queue does not have maxResources configured, {{maxShare}} will be 
Integer.MAX_VALUE. Am I missing something?

> Fix FSLeafQueue#getFairShare() for queues with weight 0.0
> -
>
> Key: YARN-5077
> URL: https://issues.apache.org/jira/browse/YARN-5077
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5077.001.patch, YARN-5077.002.patch, 
> YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, 
> YARN-5077.006.patch
>
>
> 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns 
>  
> 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns 
> 
> In case 1), that means no container ever gets allocated for an AM because 
> from the viewpoint of the RM, there is never any headroom to allocate a 
> container on that queue.
> For example, we have a pool with the following weights: 
> - root.dev 0.0 
> - root.product 1.0
> root.dev is a best-effort pool and should only get resources if 
> root.product is not running. In our tests, with no jobs running under 
> root.product, jobs started in the root.dev queue stay stuck in the ACCEPTED 
> state and never start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5184) Fix up incompatible changes introduced in YARN-2882

2016-05-31 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5184:
--

 Summary: Fix up incompatible changes introduced in YARN-2882
 Key: YARN-5184
 URL: https://issues.apache.org/jira/browse/YARN-5184
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api
Affects Versions: 2.9.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker


YARN-2882 broke compatibility by adding abstract methods to a Public-Stable 
class - ContainerStatus. 
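To illustrate the kind of breakage (names are hypothetical; this is not the actual ContainerStatus API): once an abstract method is added to a Public-Stable abstract class, every existing subclass written against the previous release stops compiling until it implements the new method.

{code:title=StableApiSketch.java|borderStyle=solid}
// Hypothetical sketch -- not the real ContainerStatus API.
// Release N: a @Public @Stable abstract class that downstream code extends.
abstract class StableApi {
  public abstract int getExitStatus();
}

// Downstream code written against release N.
class DownstreamImpl extends StableApi {
  @Override
  public int getExitStatus() {
    return 0;
  }
}

// If release N+1 adds another abstract method, for example
//   public abstract String getNewField();
// DownstreamImpl no longer compiles until it implements getNewField(), which
// is why adding abstract methods to a Public-Stable class is incompatible.
{code}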



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4493) move queue can make app don't belong to any queue

2016-05-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308016#comment-15308016
 ] 

Yufei Gu commented on YARN-4493:


Hi [~jiangyu1211], are you still working on this? If not, can I pick it up?

> move queue can make app don't belong to any queue
> -
>
> Key: YARN-4493
> URL: https://issues.apache.org/jira/browse/YARN-4493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.4.0, 2.6.0, 2.7.1
>Reporter: jiangyu
>Priority: Minor
> Attachments: yarn-4493.patch.1
>
>
> When moving a running application to a different queue, the current 
> implementation does not check whether the app can run in the new queue before 
> removing it from the current queue. So if the destination queue is full, the 
> move throws an exception and the app no longer belongs to any queue.
> After that, the app becomes orphaned and can never be scheduled any resources. 
> If you kill the app, the removeApp method in FSLeafQueue throws an 
> IllegalStateException: "Given app to remove app does not exist in queue ...".
> So I think we should check whether the destination queue can run the app 
> before removing it from the current queue.
> The patch is from our revision.
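A hedged sketch of the ordering being proposed (types and method names are hypothetical; this is not the actual FairScheduler code): verify that the target queue can accept the app before removing it from the source queue, so a failed move leaves the app where it was.

{code:title=MoveSketch.java|borderStyle=solid}
// Illustrative only -- hypothetical types, not the actual FairScheduler code.
interface SchedulerQueue {
  boolean canRunApp(String appId);   // capacity / maxRunningApps check
  void removeApp(String appId);
  void addApp(String appId);
  String getName();
}

class MoveSketch {
  /** Check the target queue first, so a failed move leaves the app in place. */
  static void moveApplication(String appId, SchedulerQueue source, SchedulerQueue target) {
    if (!target.canRunApp(appId)) {
      throw new IllegalStateException("Queue " + target.getName()
          + " cannot accept " + appId + "; it stays in " + source.getName());
    }
    source.removeApp(appId);
    target.addApp(appId);
  }
}
{code}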



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307999#comment-15307999
 ] 

Hadoop QA commented on YARN-5159:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests: 
The patch generated 2 new + 21 unchanged - 0 fixed = 23 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 18s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807185/YARN-5159.01.patch |
| JIRA Issue | YARN-5159 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 940f1e9cbece 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93d8a7f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11769/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11769/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11769/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11769/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ser

[jira] [Commented] (YARN-5007) MiniYarnCluster contains deprecated constructor which is called by the other constructors

2016-05-31 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307929#comment-15307929
 ] 

Andras Bokor commented on YARN-5007:


Neither the checkstyle nor the junit failure is related.

> MiniYarnCluster contains deprecated constructor which is called by the other 
> constructors
> -
>
> Key: YARN-5007
> URL: https://issues.apache.org/jira/browse/YARN-5007
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: timelineserver
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-5007.01.patch
>
>
> MiniYarnCluster has a deprecated constructor which is called by the other 
> constructors and it causes javac warnings during the build.
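A hedged sketch of the pattern behind the warning (class and parameter names are illustrative; this is not the MiniYarnCluster code):

{code:title=ClusterSketch.java|borderStyle=solid}
// Illustrative only -- not the MiniYarnCluster code. The non-deprecated
// constructor chains to the deprecated one, so javac emits a deprecation
// warning on every build.
class ClusterSketch {
  /** @deprecated kept only for compatibility. */
  @Deprecated
  ClusterSketch(String name, int numNodes, boolean legacyFlag) {
    // all shared initialization currently lives here
  }

  ClusterSketch(String name) {
    this(name, 1, false);   // the warning originates from this call
  }
}
{code}

Moving the shared initialization into a non-deprecated constructor (or a private init method) and chaining to that instead removes the warning without changing behavior.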



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-05-31 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-5159:
---
Attachment: YARN-5159.01.patch

> Wrong Javadoc tag in MiniYarnCluster
> 
>
> Key: YARN-5159
> URL: https://issues.apache.org/jira/browse/YARN-5159
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: timelineserver
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-5159.01.patch
>
>
> {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is wrong. Should 
> be changed to 
>  {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
> Edit:
> I noted that, due to Java 8 javadoc restrictions, the javadoc:test-javadoc goal 
> fails on the hadoop-yarn-server-tests project.
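For reference, a hedged sketch of the corrected tag in context (the surrounding class and field are illustrative):

{code:title=JavadocTagSketch.java|borderStyle=solid}
// Hedged sketch: the class and field are illustrative; only the corrected tag
// form matters. The broken form
// {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is not a valid
// Javadoc tag and fails under the stricter Java 8 doclint.
import org.apache.hadoop.yarn.conf.YarnConfiguration;

class JavadocTagSketch {
  /**
   * Include the RPC port in the node name when
   * {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is true.
   */
  boolean includePortInNodeName;
}
{code}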



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-05-31 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-5159:
---
Attachment: (was: YARN-5007.01.patch)

> Wrong Javadoc tag in MiniYarnCluster
> 
>
> Key: YARN-5159
> URL: https://issues.apache.org/jira/browse/YARN-5159
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: timelineserver
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0
>
>
> {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is wrong. Should 
> be changed to 
>  {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
> Edit:
> I noted that, due to Java 8 javadoc restrictions, the javadoc:test-javadoc goal 
> fails on the hadoop-yarn-server-tests project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307882#comment-15307882
 ] 

Hadoop QA commented on YARN-5159:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} root generated 0 new + 694 unchanged - 4 fixed = 694 
total (was 698) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 21s 
{color} | {color:red} root: The patch generated 1 new + 60 unchanged - 2 fixed 
= 61 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 18s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 27s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 118m 45s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 211m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.client.api.impl.TestAMRMClient |
|   | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.mapred.TestMiniMRChildTask |
| Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
|   | org.apache.hadoop.mapred.TestMRIntermediateDataEncryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807125/YARN-5007.01.patch |
| JIRA Issue | YARN-5159 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  m

[jira] [Updated] (YARN-5180) Add ensureExecutionType boolean flag in ResourceRequest

2016-05-31 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5180:
--
Attachment: YARN-5180.002.patch

Updating patch.

Based on discussions with [~kasha], I replaced the {{ExecutionType}} in the 
resource request with a new type: {{ExecutionSpec}}.
This new type contains both the {{ExecutionType}} and a boolean field, 
{{ensureExecutionType}}, used by the AM to signal to the Scheduler whether it is 
fine receiving a Container of a different ExecutionType than the one asked for.
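A hedged sketch of the shape described above (anything beyond the names ExecutionType and ensureExecutionType is an assumption, not the actual patch):

{code:title=ExecutionSpecSketch.java|borderStyle=solid}
// Illustrative only -- not the API introduced by the patch.
enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

class ExecutionSpec {
  private final ExecutionType executionType;
  // true: the AM insists on exactly this ExecutionType;
  // false: the Scheduler may hand back a container of a different type.
  private final boolean ensureExecutionType;

  ExecutionSpec(ExecutionType executionType, boolean ensureExecutionType) {
    this.executionType = executionType;
    this.ensureExecutionType = ensureExecutionType;
  }

  ExecutionType getExecutionType() {
    return executionType;
  }

  boolean isEnsureExecutionType() {
    return ensureExecutionType;
  }
}
{code}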

> Add ensureExecutionType boolean flag in ResourceRequest
> ---
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine receiving a Container with a different 
> ExecutionType than the one asked for.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307823#comment-15307823
 ] 

Hadoop QA commented on YARN-4757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 9s {color} 
| {color:red} root generated 1 new + 696 unchanged - 0 fixed = 697 total (was 
696) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s 
{color} | {color:red} root: The patch generated 18 new + 1 unchanged - 46 fixed 
= 19 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 12 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 40 new + 52 unchanged - 0 fixed = 92 total (was 52) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 59s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
|  |  Exceptional return value of 
java.util.concurrent.ExecutorService.submit(Callable) ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)  
At RegistryDNS.java:ignored in 
org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int)  
At RegistryDNS.java:[line 727] |
|  |  Except

[jira] [Updated] (YARN-4830) Add support for resource types in the nodemanager

2016-05-31 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-4830:

Attachment: YARN-4830-YARN-3926.004.patch

{noformat}
 ResourceUtils::addResourcesFileToConf : add a finally to close the input 
stream

Fixed.
{noformat}

I remembered why I didn't close the stream in this function: it is used by the 
conf object, and closing the stream leads to failed reads later on.
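A hedged sketch of why the eager close fails (the resource name and property key are hypothetical): Configuration parses an InputStream resource lazily, so closing the stream before the first property lookup means the later parse reads from a closed stream.

{code:title=LazyResourceRead.java|borderStyle=solid}
// Hedged sketch -- hypothetical resource name and property key, not the
// ResourceUtils code. Configuration reads an InputStream resource lazily,
// so closing it too early breaks the later parse.
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;

public class LazyResourceRead {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(false);
    InputStream in =
        LazyResourceRead.class.getResourceAsStream("/resource-types.xml");
    conf.addResource(in);      // the stream is not parsed yet
    // in.close();             // closing here would make the get() below fail
    System.out.println(conf.get("yarn.resource-types"));  // triggers the parse
    in.close();                // safe only after the properties were loaded
  }
}
{code}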

> Add support for resource types in the nodemanager
> -
>
> Key: YARN-4830
> URL: https://issues.apache.org/jira/browse/YARN-4830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-4830-YARN-3926.001.patch, 
> YARN-4830-YARN-3926.002.patch, YARN-4830-YARN-3926.003.patch, 
> YARN-4830-YARN-3926.004.patch
>
>
> The RM has support for multiple resource types. The same should be added for 
> the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-05-31 Thread Jonathan Maron (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Maron updated YARN-4757:
-
Attachment: YARN-4757-YARN-4757.002.patch

Updated patch with minor fixes and more testing

> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- 
> Simplified discovery of services via DNS mechanisms.pdf, 
> YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service registry (YARN-913), 
> we also need to simplify access to the registry entries. The existing 
> read mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (Java) API and a REST interface. In practice, this makes it 
> very difficult to wire up existing clients and services. For example, dynamic 
> configuration of dependent endpoints of a service is not easy to implement 
> using the present registry-read mechanisms *without* code changes to 
> existing services.
> A good solution to this is to expose the registry information through a more 
> generic and widely used discovery mechanism: DNS. Service discovery via DNS 
> uses the well-known DNS interfaces to browse the network for services. 
> YARN-913 in fact talked about such a DNS-based mechanism but left it as a 
> future task. Having the registry information exposed via DNS 
> simplifies the life of services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5183) [YARN-3368] Support responsive navbar in case of resized

2016-05-31 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5183:
-
Attachment: Screen Shot 2016-05-31 at 22.41.35.png

> [YARN-3368] Support responsive navbar in case of resized
> 
>
> Key: YARN-5183
> URL: https://issues.apache.org/jira/browse/YARN-5183
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 22.41.35.png, 
> YARN-5183.01.patch
>
>
> The responsive navbar currently does not work, even though the navbar icon is shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5183) [YARN-3368] Support responsive navbar in case of resized

2016-05-31 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5183:
-
Attachment: YARN-5183.01.patch

> [YARN-3368] Support responsive navbar in case of resized
> 
>
> Key: YARN-5183
> URL: https://issues.apache.org/jira/browse/YARN-5183
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5183.01.patch
>
>
> The responsive navbar currently does not work, even though the navbar icon is shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5183) [YARN-3368] Support responsive navbar in case of resized

2016-05-31 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-5183:


 Summary: [YARN-3368] Support responsive navbar in case of resized
 Key: YARN-5183
 URL: https://issues.apache.org/jira/browse/YARN-5183
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki


The responsive navbar currently does not work, even though the navbar icon is shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5161) [YARN-3368] Add Apache Hadoop logo to UI home page

2016-05-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307727#comment-15307727
 ] 

Sunil G commented on YARN-5161:
---

[~kaisasak], sorry for the confusion. The Hadoop logo you uploaded first was 
already correct. I will now try the new patch you have uploaded. Thank you.

> [YARN-3368] Add Apache Hadoop logo to UI home page
> --
>
> Key: YARN-5161
> URL: https://issues.apache.org/jira/browse/YARN-5161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-05-31 at 21.22.30.png, 
> YARN-5161.01.patch, YARN-5161.02.patch, apache_logo.png, hadoop_logo.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4830) Add support for resource types in the nodemanager

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307724#comment-15307724
 ] 

Hadoop QA commented on YARN-4830:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 56s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 9m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
15s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} YARN-3926 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s 
{color} | {color:green} YARN-3926 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
47s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} YARN-3926 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 4s 
{color} | {color:green} YARN-3926 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 269 unchanged - 2 fixed = 270 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 53s {color} 
| {color:red} hadoop-yarn-common in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 40s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 26s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color}

[jira] [Commented] (YARN-5172) Update yarn daemonlog documentation due to HADOOP-12847

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307664#comment-15307664
 ] 

Hadoop QA commented on YARN-5172:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 7m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807138/YARN-5172.001.patch |
| JIRA Issue | YARN-5172 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e8a96e4e5b09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93d8a7f |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11765/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Update yarn daemonlog documentation due to HADOOP-12847
> ---
>
> Key: YARN-5172
> URL: https://issues.apache.org/jira/browse/YARN-5172
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: YARN-5172.001.patch
>
>
> In HADOOP-12847, I updated the hadoop command manual, but I did not notice 
> that yarn has the same command, which should also be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5007) MiniYarnCluster contains deprecated constructor which is called by the other constructors

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307658#comment-15307658
 ] 

Hadoop QA commented on YARN-5007:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 9s 
{color} | {color:green} root generated 0 new + 693 unchanged - 4 fixed = 693 
total (was 697) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 18s 
{color} | {color:red} root: The patch generated 1 new + 60 unchanged - 2 fixed 
= 61 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 16s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 6s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 108m 32s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 214m 1s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.client.TestGetGroups |
|   | hadoop.mapred.TestMRCJCFileOutputCommitter |
|   | hadoop.mapred.TestMiniMRChildTask |
| Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807081/YARN-5007.01.patch |
| JIRA Issue | YARN-5007 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Updated] (YARN-5172) Update yarn daemonlog documentation due to HADOOP-12847

2016-05-31 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated YARN-5172:
--
Attachment: YARN-5172.001.patch

v01: Both the yarn and hadoop daemonlog commands use the same LogLevel class, so 
there is no need to duplicate the description. This patch de-duplicates the yarn 
command manual by pointing it to the Hadoop Command Manual.
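As a rough illustration of the shared behavior (the exact option list should be 
verified against the Hadoop Command Manual and the LogLevel usage output, not 
taken from this sketch):
{noformat}
# Both entry points dispatch to the same org.apache.hadoop.log.LogLevel class,
# so the syntax is identical for yarn and hadoop. The -protocol flag shown here
# is an assumption based on HADOOP-12847 and may not match the patch exactly.
hadoop daemonlog -getlevel <host:httpport> <classname> [-protocol (http|https)]
yarn   daemonlog -getlevel <host:httpport> <classname> [-protocol (http|https)]

hadoop daemonlog -setlevel <host:httpport> <classname> <level> [-protocol (http|https)]
yarn   daemonlog -setlevel <host:httpport> <classname> <level> [-protocol (http|https)]
{noformat}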

> Update yarn daemonlog documentation due to HADOOP-12847
> ---
>
> Key: YARN-5172
> URL: https://issues.apache.org/jira/browse/YARN-5172
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: YARN-5172.001.patch
>
>
> In HADOOP-12847, I updated the hadoop command manual, but I did not notice 
> that yarn has the same command, which should also be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


