[jira] [Commented] (YARN-295) Resource Manager throws InvalidStateTransitonException: Invalid event: CONTAINER_FINISHED at ALLOCATED for RMAppAttemptImpl

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676776#comment-13676776
 ] 

Hadoop QA commented on YARN-295:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12586453/YARN-295-trunk-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1136//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1136//console

This message is automatically generated.

> Resource Manager throws InvalidStateTransitonException: Invalid event: 
> CONTAINER_FINISHED at ALLOCATED for RMAppAttemptImpl
> ---
>
> Key: YARN-295
> URL: https://issues.apache.org/jira/browse/YARN-295
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.0.2-alpha, 2.0.1-alpha
>Reporter: Devaraj K
>Assignee: Mayank Bansal
> Attachments: YARN-295-trunk-1.patch
>
>
> {code:xml}
> 2012-12-28 14:03:56,956 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> CONTAINER_FINISHED at ALLOCATED
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:490)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:433)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
>   at java.lang.Thread.run(Thread.java:662)
> {code}
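For context, a hedged sketch of how these RMAppAttemptImpl-style state machines register transitions with {{StateMachineFactory}}: the exception above means no arc exists out of ALLOCATED for CONTAINER_FINISHED. The enum values, target state, and handler below are illustrative assumptions, not the actual patch.

{code:java}
import org.apache.hadoop.yarn.state.SingleArcTransition;
import org.apache.hadoop.yarn.state.StateMachine;
import org.apache.hadoop.yarn.state.StateMachineFactory;

class AttemptSketch {
  enum State { ALLOCATED, FAILED }
  enum Event { CONTAINER_FINISHED }

  static final StateMachineFactory<AttemptSketch, State, Event, Event> FACTORY =
      new StateMachineFactory<AttemptSketch, State, Event, Event>(State.ALLOCATED)
          // The missing arc: without a registration like this, dispatching
          // CONTAINER_FINISHED while in ALLOCATED throws the
          // InvalidStateTransitonException seen in the log above.
          .addTransition(State.ALLOCATED, State.FAILED,
              Event.CONTAINER_FINISHED,
              new SingleArcTransition<AttemptSketch, Event>() {
                @Override
                public void transition(AttemptSketch attempt, Event event) {
                  // e.g. record that the AM container exited before launch
                }
              })
          .installTopology();

  private final StateMachine<State, Event, Event> stateMachine =
      FACTORY.make(this);
}
{code}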

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-295) Resource Manager throws InvalidStateTransitonException: Invalid event: CONTAINER_FINISHED at ALLOCATED for RMAppAttemptImpl

2013-06-05 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-295:
---

Attachment: YARN-295-trunk-1.patch

Attaching patch

Thanks,
Mayank

> Resource Manager throws InvalidStateTransitonException: Invalid event: 
> CONTAINER_FINISHED at ALLOCATED for RMAppAttemptImpl
> ---
>
> Key: YARN-295
> URL: https://issues.apache.org/jira/browse/YARN-295
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.0.2-alpha, 2.0.1-alpha
>Reporter: Devaraj K
>Assignee: Mayank Bansal
> Attachments: YARN-295-trunk-1.patch
>
>
> {code:xml}
> 2012-12-28 14:03:56,956 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> CONTAINER_FINISHED at ALLOCATED
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:490)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:433)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
>   at java.lang.Thread.run(Thread.java:662)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-767) Initialize Application status metrics when QueueMetrics is initialized

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676617#comment-13676617
 ] 

Hadoop QA commented on YARN-767:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586442/YARN-767.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1135//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1135//console

This message is automatically generated.

> Initialize Application status metrics when QueueMetrics is initialized
> ---
>
> Key: YARN-767
> URL: https://issues.apache.org/jira/browse/YARN-767
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-767.1.patch
>
>
> Applications: ResourceManager.QueueMetrics.AppsSubmitted, 
> ResourceManager.QueueMetrics.AppsRunning, 
> ResourceManager.QueueMetrics.AppsPending, 
> ResourceManager.QueueMetrics.AppsCompleted, 
> ResourceManager.QueueMetrics.AppsKilled, 
> ResourceManager.QueueMetrics.AppsFailed
> Currently these metrics are created only when they are first needed; we want 
> them to be visible as soon as QueueMetrics is initialized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676612#comment-13676612
 ] 

Hadoop QA commented on YARN-686:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586440/YARN-686-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1134//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1134//console

This message is automatically generated.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686-1.patch, YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-767) Initialize Application status metrics when QueueMetrics is initialized

2013-06-05 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-767:
-

Attachment: YARN-767.1.patch

Added a method for initializing these metrics when QueueMetrics is registered
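A hedged sketch of the idea (the registry calls are from the metrics2 {{MetricsRegistry}} API; the exact shape of the patch is assumed, not confirmed):

{code:java}
// Eagerly create the app status metrics with initial value 0 when the
// QueueMetrics registry is set up, instead of lazily on first update.
registry.newCounter("AppsSubmitted", "# of apps submitted", 0);
registry.newGauge("AppsRunning", "# of apps running", 0);
registry.newGauge("AppsPending", "# of apps pending", 0);
registry.newCounter("AppsCompleted", "# of apps completed", 0);
registry.newCounter("AppsKilled", "# of apps killed", 0);
registry.newCounter("AppsFailed", "# of apps failed", 0);
{code}

With that, the metrics show up (as zeros) as soon as the queue is registered, rather than appearing only after the first application event.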

> Initialize Application status metrics when QueueMetrics is initialized
> ---
>
> Key: YARN-767
> URL: https://issues.apache.org/jira/browse/YARN-767
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-767.1.patch
>
>
> Applications: ResourceManager.QueueMetrics.AppsSubmitted, 
> ResourceManager.QueueMetrics.AppsRunning, 
> ResourceManager.QueueMetrics.AppsPending, 
> ResourceManager.QueueMetrics.AppsCompleted, 
> ResourceManager.QueueMetrics.AppsKilled, 
> ResourceManager.QueueMetrics.AppsFailed
> Currently these metrics are created only when they are first needed; we want 
> them to be visible as soon as QueueMetrics is initialized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-767) Initialize Application status metrics when QueueMetrics is initialized

2013-06-05 Thread Jian He (JIRA)
Jian He created YARN-767:


 Summary: Initialize Application status metrics when QueueMetrics 
is initialized
 Key: YARN-767
 URL: https://issues.apache.org/jira/browse/YARN-767
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He


Applications: ResourceManager.QueueMetrics.AppsSubmitted, 
ResourceManager.QueueMetrics.AppsRunning, 
ResourceManager.QueueMetrics.AppsPending, 
ResourceManager.QueueMetrics.AppsCompleted, 
ResourceManager.QueueMetrics.AppsKilled, ResourceManager.QueueMetrics.AppsFailed
Currently these metrics are created only when they are first needed; we want 
them to be visible as soon as QueueMetrics is initialized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-764) blank Used Resources on Capacity Scheduler page

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676602#comment-13676602
 ] 

Hadoop QA commented on YARN-764:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586436/YARN-764.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1133//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1133//console

This message is automatically generated.

> blank Used Resources on Capacity Scheduler page 
> 
>
> Key: YARN-764
> URL: https://issues.apache.org/jira/browse/YARN-764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: nemon lou
>Assignee: nemon lou
> Attachments: YARN-764.patch, YARN-764.patch
>
>
> Even when there are jobs running, the used resources field is empty on the 
> Capacity Scheduler page for leaf queues. (I use Google Chrome on Windows 7.)
> After changing Resource.java's toString method by replacing "<>" with "{}", 
> this bug gets fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676597#comment-13676597
 ] 

Sandy Ryza commented on YARN-686:
-

Latest patch addresses the failing tests.  I also manually verified the changes 
to the web UI and command line.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686-1.patch, YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-686:


Attachment: YARN-686-1.patch

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686-1.patch, YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-764) blank Used Resources on Capacity Scheduler page

2013-06-05 Thread nemon lou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nemon lou updated YARN-764:
---

Attachment: YARN-764.patch

Changing the patch as Thomas suggested; escaping HTML is definitely the better way.
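For context, the underlying problem is that {{Resource#toString()}} renders like {{<memory:1024, vCores:1>}}, which the browser parses as an unknown HTML tag and hides. Escaping at render time, as suggested, keeps {{toString()}} untouched; a hedged sketch using commons-lang 2.x (the call site is an assumption, not the literal patch):

{code:java}
import org.apache.commons.lang.StringEscapeUtils;

// Escape the resource string before handing it to the web page, so the
// angle brackets are displayed instead of being parsed as markup.
String usedResourcesCell = StringEscapeUtils.escapeHtml(usedResources.toString());
// -> "&lt;memory:1024, vCores:1&gt;", which renders as <memory:1024, vCores:1>
{code}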

> blank Used Resources on Capacity Scheduler page 
> 
>
> Key: YARN-764
> URL: https://issues.apache.org/jira/browse/YARN-764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: nemon lou
>Assignee: nemon lou
> Attachments: YARN-764.patch, YARN-764.patch
>
>
> Even when there are jobs running, the used resources field is empty on the 
> Capacity Scheduler page for leaf queues. (I use Google Chrome on Windows 7.)
> After changing Resource.java's toString method by replacing "<>" with "{}", 
> this bug gets fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-117) Enhance YARN service model

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676538#comment-13676538
 ] 

Hadoop QA commented on YARN-117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586423/YARN-117-020.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 26 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.yarn.client.TestNMClientAsync

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1132//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1132//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1132//console

This message is automatically generated.

> Enhance YARN service model
> --
>
> Key: YARN-117
> URL: https://issues.apache.org/jira/browse/YARN-117
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.0.4-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-117-007.patch, YARN-117-008.patch, 
> YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
> YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, 
> YARN-117-015.patch, YARN-117-016.patch, YARN-117-018.patch, 
> YARN-117-019.patch, YARN-117-020.patch, YARN-117-2.patch, YARN-117-3.patch, 
> YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, YARN-117.patch
>
>
> Having played with the YARN service model, I've identified some issues
> based on past work and initial use.
> This JIRA issue is an overall one to cover the issues, with solutions pushed 
> out to separate JIRAs.
> h2. state model prevents the stopped state being entered if you could not 
> successfully start the service.
> In the current lifecycle you cannot stop a service unless it was successfully 
> started, but
> * {{init()}} may acquire resources that need to be explicitly released
> * if the {{start()}} operation fails partway through, the {{stop()}} 
> operation may be needed to release resources.
> *Fix:* make {{stop()}} a valid state transition from all states and require 
> implementations to be able to stop safely without requiring all fields to 
> be non-null.
> Before anyone points out that the {{stop()}} operations assume all fields 
> are valid, and will NPE if called before {{start()}}: MAPREDUCE-3431 shows 
> that this problem arises today, and MAPREDUCE-3502 is a fix for it. It is 
> independent of the rest of the issues in this doc, but it will help make 
> {{stop()}} executable from all states other than "stopped".
> MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
> review and take-up; this can be done with issues linked to this one.
> h2. AbstractService doesn't prevent duplicate state change requests.
> The {{ensureState()}} checks that verify whether a state transition is 
> allowed from the current state are performed in the base {{AbstractService}} 
> class, yet subclasses tend to call this *after* their own {{init()}}, 
> {{start()}} & {{stop()}} operations. This means that these operations can be 
> performed out of order, and even if the outcome of the call is an exception, 
> all actions performed by the subclasses will have taken place. MAPREDUCE-3877 
> demonstrates this.

[jira] [Commented] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676524#comment-13676524
 ] 

Hadoop QA commented on YARN-530:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586422/YARN-530-020.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1131//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1131//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1131//console

This message is automatically generated.

> Define Service model strictly, implement AbstractService for robust 
> subclassing, migrate yarn-common services
> -
>
> Key: YARN-530
> URL: https://issues.apache.org/jira/browse/YARN-530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.0.4-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-117-019.patch, YARN-117changes.pdf, 
> YARN-530-005.patch, YARN-530-008.patch, YARN-530-009.patch, 
> YARN-530-010.patch, YARN-530-011.patch, YARN-530-012.patch, 
> YARN-530-013.patch, YARN-530-014.patch, YARN-530-015.patch, 
> YARN-530-016.patch, YARN-530-017.patch, YARN-530-018.patch, 
> YARN-530-019.patch, YARN-530-020.patch, YARN-530-2.patch, YARN-530-3.patch, 
> YARN-530.4.patch, YARN-530.patch
>
>
> # Extend the YARN {{Service}} interface as discussed in YARN-117
> # Implement the changes in {{AbstractService}} and {{FilterService}}.
> # Migrate all services in yarn-common to the more robust service model, test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-530) Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services

2013-06-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-530:


Attachment: YARN-530-020.patch

There's no change from -19 except an extra line of logging in CompositeService

> Define Service model strictly, implement AbstractService for robust 
> subclassing, migrate yarn-common services
> -
>
> Key: YARN-530
> URL: https://issues.apache.org/jira/browse/YARN-530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.0.4-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-117-019.patch, YARN-117changes.pdf, 
> YARN-530-005.patch, YARN-530-008.patch, YARN-530-009.patch, 
> YARN-530-010.patch, YARN-530-011.patch, YARN-530-012.patch, 
> YARN-530-013.patch, YARN-530-014.patch, YARN-530-015.patch, 
> YARN-530-016.patch, YARN-530-017.patch, YARN-530-018.patch, 
> YARN-530-019.patch, YARN-530-020.patch, YARN-530-2.patch, YARN-530-3.patch, 
> YARN-530.4.patch, YARN-530.patch
>
>
> # Extend the YARN {{Service}} interface as discussed in YARN-117
> # Implement the changes in {{AbstractService}} and {{FilterService}}.
> # Migrate all services in yarn-common to the more robust service model, test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-117) Enhance YARN service model

2013-06-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-117:


Attachment: YARN-117-020.patch

As with YARN-530-020, there's no change from -19 except an extra line of logging 
in CompositeService.

> Enhance YARN service model
> --
>
> Key: YARN-117
> URL: https://issues.apache.org/jira/browse/YARN-117
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.0.4-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-117-007.patch, YARN-117-008.patch, 
> YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
> YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, 
> YARN-117-015.patch, YARN-117-016.patch, YARN-117-018.patch, 
> YARN-117-019.patch, YARN-117-020.patch, YARN-117-2.patch, YARN-117-3.patch, 
> YARN-117.4.patch, YARN-117.5.patch, YARN-117.6.patch, YARN-117.patch
>
>
> Having played with the YARN service model, I've identified some issues
> based on past work and initial use.
> This JIRA issue is an overall one to cover the issues, with solutions pushed 
> out to separate JIRAs.
> h2. state model prevents the stopped state being entered if you could not 
> successfully start the service.
> In the current lifecycle you cannot stop a service unless it was successfully 
> started, but
> * {{init()}} may acquire resources that need to be explicitly released
> * if the {{start()}} operation fails partway through, the {{stop()}} 
> operation may be needed to release resources.
> *Fix:* make {{stop()}} a valid state transition from all states and require 
> implementations to be able to stop safely without requiring all fields to 
> be non-null.
> Before anyone points out that the {{stop()}} operations assume all fields 
> are valid, and will NPE if called before {{start()}}: MAPREDUCE-3431 shows 
> that this problem arises today, and MAPREDUCE-3502 is a fix for it. It is 
> independent of the rest of the issues in this doc, but it will help make 
> {{stop()}} executable from all states other than "stopped".
> MAPREDUCE-3502 is too big a patch and needs to be broken down for easier 
> review and take-up; this can be done with issues linked to this one.
> h2. AbstractService doesn't prevent duplicate state change requests.
> The {{ensureState()}} checks that verify whether a state transition is 
> allowed from the current state are performed in the base {{AbstractService}} 
> class, yet subclasses tend to call this *after* their own {{init()}}, 
> {{start()}} & {{stop()}} operations. This means that these operations can be 
> performed out of order, and even if the outcome of the call is an exception, 
> all actions performed by the subclasses will have taken place. MAPREDUCE-3877 
> demonstrates this.
> This is a tricky one to address. In HADOOP-3128 I used a base class instead 
> of an interface and made the {{init()}}, {{start()}} & {{stop()}} methods 
> {{final}}. These methods would do the checks, and then invoke protected inner 
> methods, {{innerStart()}}, {{innerStop()}}, etc. It should be possible to 
> retrofit the same behaviour to everything that extends {{AbstractService}}, 
> something that must be done before the class is considered stable (because 
> once the lifecycle methods are declared final, all subclasses that are out of 
> the source tree will need fixing by the respective developers).
> h2. AbstractService state change doesn't defend against race conditions.
> There are no concurrency locks on the state transitions. Whatever fix is 
> added for wrong-state calls should also prevent re-entrancy, such as 
> {{stop()}} being called from two threads.
> h2. Static methods to choreograph lifecycle operations
> Helper methods to move things through lifecycles. init->start is common, 
> stop-if-service!=null is another. Some static methods can execute these, and 
> even call {{stop()}} if {{init()}} raises an exception. These could go into a 
> class {{ServiceOps}} in the same package. They can be used by services 
> that wrap other services, and help manage more robust shutdowns.
> h2. state transition failures are something that registered service listeners 
> may wish to be informed of.
> When a state transition fails, a {{RuntimeException}} can be thrown, and the 
> service listeners are not informed as the notification point isn't reached. 
> They may wish to know this, especially for management and diagnostics.
> *Fix:* extend {{ServiceStateChangeListener}} with a callback such as 
> {{stateChangeFailed(Service service, Service.State targetedState, 
> RuntimeException e)}} that is invoked from the (final) state change methods 
> in the {{AbstractService}} class (once they delegate to their inner 
> {{innerStart()}}, {{innerStop()}} methods; ma
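To make the final-method/inner-method pattern described above concrete, here is a minimal, self-contained sketch; the class, lock, and hook names are illustrative assumptions, not the YARN-117/YARN-530 patches:

{code:java}
public abstract class SketchService {
  public enum STATE { NOTINITED, INITED, STARTED, STOPPED }

  private final Object stateLock = new Object();
  private volatile STATE state = STATE.NOTINITED;

  public final void init() {
    synchronized (stateLock) {          // serialize state transitions
      ensureState(STATE.NOTINITED);     // checks run before subclass code
      innerInit();
      state = STATE.INITED;
    }
  }

  public final void start() {
    synchronized (stateLock) {
      ensureState(STATE.INITED);
      try {
        innerStart();
        state = STATE.STARTED;
      } catch (RuntimeException e) {
        stop();                         // stop() is valid from any state
        throw e;
      }
    }
  }

  public final void stop() {
    synchronized (stateLock) {
      if (state == STATE.STOPPED) {
        return;                         // duplicate stop() is a no-op
      }
      innerStop();                      // must tolerate half-initialized fields
      state = STATE.STOPPED;
    }
  }

  private void ensureState(STATE expected) {
    if (state != expected) {
      throw new IllegalStateException(
          "expected " + expected + " but in state " + state);
    }
  }

  // Subclass hooks; no-op defaults keep partial overrides safe.
  protected void innerInit() {}
  protected void innerStart() {}
  protected void innerStop() {}
}
{code}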

[jira] [Commented] (YARN-737) ContainerManagerImpl can directly throw NMNotYetReadyException and InvalidContainerException after YARN-142

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676465#comment-13676465
 ] 

Hadoop QA commented on YARN-737:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586406/YARN-737.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1130//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1130//console

This message is automatically generated.

> ContainerManagerImpl can directly throw NMNotYetReadyException and 
> InvalidContainerException after YARN-142 
> 
>
> Key: YARN-737
> URL: https://issues.apache.org/jira/browse/YARN-737
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-737.1.patch, YARN-737.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-737) ContainerManagerImpl can directly throw NMNotYetReadyException and InvalidContainerException after YARN-142

2013-06-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676456#comment-13676456
 ] 

Jian He commented on YARN-737:
--

Since YarnRemoteException has been renamed to YarnException, I updated the patch 
to let these two exceptions extend YarnException.
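A hedged sketch of the resulting hierarchy (constructors are illustrative; each class would live in its own file):

{code:java}
// NMNotYetReadyException.java
public class NMNotYetReadyException extends YarnException {
  public NMNotYetReadyException(String message) {
    super(message);
  }
}

// InvalidContainerException.java
public class InvalidContainerException extends YarnException {
  public InvalidContainerException(String message) {
    super(message);
  }
}
{code}

ContainerManagerImpl can then throw these directly, while callers that only care about the broad category can still catch {{YarnException}}.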

> ContainerManagerImpl can directly throw NMNotYetReadyException and 
> InvalidContainerException after YARN-142 
> 
>
> Key: YARN-737
> URL: https://issues.apache.org/jira/browse/YARN-737
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-737.1.patch, YARN-737.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-737) ContainerManagerImpl can directly throw NMNotYetReadyException and InvalidContainerException after YARN-142

2013-06-05 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-737:
-

Attachment: YARN-737.2.patch

> ContainerManagerImpl can directly throw NMNotYetReadyException and 
> InvalidContainerException after YARN-142 
> 
>
> Key: YARN-737
> URL: https://issues.apache.org/jira/browse/YARN-737
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-737.1.patch, YARN-737.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-766) TestNodeManagerShutdown should use Shell to form the output path

2013-06-05 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-766:
-

Summary: TestNodeManagerShutdown should use Shell to form the output path  
(was: TestNodeManagerShutdown should use Shell to form the output bath)

> TestNodeManagerShutdown should use Shell to form the output path
> 
>
> Key: YARN-766
> URL: https://issues.apache.org/jira/browse/YARN-766
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Priority: Minor
> Attachments: YARN-766.txt
>
>
> File scriptFile = new File(tmpDir, "scriptFile.sh");
> should be replaced with
> File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");
> to match trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-766) TestNodeManagerShutdown should use Shell to form the output bath

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676382#comment-13676382
 ] 

Hadoop QA commented on YARN-766:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586394/YARN-766.txt
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1129//console

This message is automatically generated.

> TestNodeManagerShutdown should use Shell to form the output bath
> 
>
> Key: YARN-766
> URL: https://issues.apache.org/jira/browse/YARN-766
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Priority: Minor
> Attachments: YARN-766.txt
>
>
> File scriptFile = new File(tmpDir, "scriptFile.sh");
> should be replaced with
> File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");
> to match trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-766) TestNodeManagerShutdown should use Shell to form the output bath

2013-06-05 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated YARN-766:


Attachment: YARN-766.txt

Trivial patch. Only applicable to branch-2 (not trunk). I'm guessing the test 
currently fails on Windows. 
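What the suggested replacement buys, as summarized here from the {{org.apache.hadoop.util.Shell}} utility (worth double-checking against branch-2):

{code:java}
// Picks the platform-appropriate script extension instead of
// hard-coding ".sh":
File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");
// -> tmpDir/scriptFile.cmd on Windows, tmpDir/scriptFile.sh elsewhere
{code}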


> TestNodeManagerShutdown should use Shell to form the output bath
> 
>
> Key: YARN-766
> URL: https://issues.apache.org/jira/browse/YARN-766
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Siddharth Seth
>Priority: Minor
> Attachments: YARN-766.txt
>
>
> File scriptFile = new File(tmpDir, "scriptFile.sh");
> should be replaced with
> File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");
> to match trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-642) Fix up /nodes REST API to have 1 param and be consistent with the Java API

2013-06-05 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676375#comment-13676375
 ] 

Vinod Kumar Vavilapalli commented on YARN-642:
--

bq. Vinod Kumar Vavilapalli, is the latest patch satisfactory to you? Just 
verified that it still applies cleanly to trunk.
Will look at it by EOD today.

> Fix up /nodes REST API to have 1 param and be consistent with the Java API
> --
>
> Key: YARN-642
> URL: https://issues.apache.org/jira/browse/YARN-642
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>  Labels: incompatible
> Attachments: YARN-642-1.patch, YARN-642-2.patch, YARN-642-2.patch, 
> YARN-642.patch
>
>
> The code behind the /nodes RM REST API is unnecessarily muddled, logs the 
> same misspelled INFO message repeatedly, and does not return unhealthy nodes, 
> even when asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-766) TestNodeManagerShutdown should use Shell to form the output bath

2013-06-05 Thread Siddharth Seth (JIRA)
Siddharth Seth created YARN-766:
---

 Summary: TestNodeManagerShutdown should use Shell to form the 
output bath
 Key: YARN-766
 URL: https://issues.apache.org/jira/browse/YARN-766
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Priority: Minor


File scriptFile = new File(tmpDir, "scriptFile.sh");
should be replaced with
File scriptFile = Shell.appendScriptExtension(tmpDir, "scriptFile");

to match trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (YARN-765) AggregatedLogDeletionService may delete recent stuff if its clock is out of sync

2013-06-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved MAPREDUCE-5305 to YARN-765:


  Component/s: (was: jobhistoryserver)
   api
Affects Version/s: (was: 2.1.0-beta)
   (was: 3.0.0)
   2.1.0-beta
   3.0.0
  Key: YARN-765  (was: MAPREDUCE-5305)
  Project: Hadoop YARN  (was: Hadoop Map/Reduce)

> AggregatedLogDeletionService may delete recent stuff if its clock is out of 
> sync
> 
>
> Key: YARN-765
> URL: https://issues.apache.org/jira/browse/YARN-765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
>
> The {{AggregatedLogDeletionService}} compares file time with current time 
> before deciding whether to delete things. If the clock on the system is 
> sufficiently wrong, it may overreact.
> If this is felt to be an issue, the fix would be for it to create a file on 
> the DFS, stat it, and work out the offset.
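A hedged sketch of that probe (assumed use of the {{org.apache.hadoop.fs.FileSystem}} API; the class, path, and names are illustrative):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ClockSkewProbe {
  // Estimate local-minus-DFS clock skew by stamping a throwaway file:
  // its modification time comes from the DFS, not the local clock.
  static long estimateSkewMillis(FileSystem fs) throws IOException {
    Path probe = new Path("/tmp/.clock-probe-" + System.currentTimeMillis());
    fs.create(probe).close();
    long dfsTime = fs.getFileStatus(probe).getModificationTime();
    fs.delete(probe, false);
    return System.currentTimeMillis() - dfsTime;
  }
}
{code}

The deletion service could apply the returned offset when comparing aggregated-log timestamps to "now", so a badly skewed clock no longer deletes recent logs.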

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676271#comment-13676271
 ] 

Hadoop QA commented on YARN-686:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586368/YARN-686.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests:

  org.apache.hadoop.yarn.client.cli.TestYarnCLI
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestNodesPage

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1128//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1128//console

This message is automatically generated.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676264#comment-13676264
 ] 

Alejandro Abdelnur commented on YARN-686:
-

Now it seems to be working; let's see.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676254#comment-13676254
 ] 

Sandy Ryza commented on YARN-686:
-

I'm puzzled by what's going on here.  "mvn clean install -DskipTests" works for 
me locally.  Will look further into it.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676246#comment-13676246
 ] 

Hadoop QA commented on YARN-686:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586361/YARN-686.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1127//console

This message is automatically generated.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-686:


Attachment: YARN-686.patch

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch, YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-642) Fix up /nodes REST API to have 1 param and be consistent with the Java API

2013-06-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676233#comment-13676233
 ] 

Alejandro Abdelnur commented on YARN-642:
-

+1

> Fix up /nodes REST API to have 1 param and be consistent with the Java API
> --
>
> Key: YARN-642
> URL: https://issues.apache.org/jira/browse/YARN-642
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>  Labels: incompatible
> Attachments: YARN-642-1.patch, YARN-642-2.patch, YARN-642-2.patch, 
> YARN-642.patch
>
>
> The code behind the /nodes RM REST API is unnecessarily muddled, logs the 
> same misspelled INFO message repeatedly, and does not return unhealthy nodes, 
> even when asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676206#comment-13676206
 ] 

Hadoop QA commented on YARN-686:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586361/YARN-686.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1126//console

This message is automatically generated.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated YARN-689:


Parent Issue: YARN-397  (was: YARN-386)

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676203#comment-13676203
 ] 

Sandy Ryza commented on YARN-686:
-

Posted a patch that makes the proposed changes in NodeReport and RMNode, which 
backs it on the RM side.
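
For readers skimming the thread, a minimal sketch of the flattened shape being 
proposed (method and type names are taken from the description below and are 
illustrative; the actual patch may differ):

{code}
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.api.records.NodeState;

// Hypothetical sketch of the flattened NodeReport surface; not the patch itself.
public interface NodeReport {
  NodeId getNodeId();
  NodeState getNodeState();        // enum; UNHEALTHY is one of its values
  String getHealthReport();        // moved here from NodeHealthStatus
  long getLastHealthReportTime();  // moved here from NodeHealthStatus
  // NodeHealthStatus#getIsNodeHealthy is dropped: NodeState.UNHEALTHY covers it
}
{code}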

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already a NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated YARN-689:


Issue Type: Sub-task  (was: Bug)
Parent: YARN-386

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated YARN-689:


Target Version/s: 2.1.0-beta

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-686:


Attachment: YARN-686.patch

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-686.patch
>
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already a NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676164#comment-13676164
 ] 

Alejandro Abdelnur commented on YARN-689:
-

[~bikassaha], the {{MRAppMaster}} obtains the MIN/MAX values from the 
registration response; see the {{RMCommunicator#register()}} method.

And in {{RMContainerAllocator#handleEvent()}} the resource is being 
normalized, for example for Map tasks:

{code}
// rounds the requested memory up to the next multiple of the minimum slot size
int minSlotMemSize = getMinContainerCapability().getMemory();
mapResourceReqt =
    (int) Math.ceil((float) mapResourceReqt / minSlotMemSize) * minSlotMemSize;
{code}

And this normalization is also 'misusing' the current minimum as the increment 
for the normalization.

As a follow-up JIRA to this one, I plan to fix MR to use the increment instead.
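
A minimal sketch of what decoupled, increment-based normalization could look 
like (the method and parameter names are illustrative, not the patch's):

{code}
// Hypothetical sketch: round a request up to the minimum first, then to the
// next multiple of a separate increment, instead of using the minimum as both.
static int normalize(int requestedMb, int minMb, int incrementMb) {
  int atLeastMin = Math.max(requestedMb, minMb);
  return (int) Math.ceil((double) atLeastMin / incrementMb) * incrementMb;
}
// With minMb=1024 and incrementMb=128, a 1.5GB (1536MB) request stays at
// 1536MB instead of being rounded up to 2048MB by the 1GB minimum.
{code}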

There is value in an AM knowing the normalized capacity: based on it, the AM 
can decide at allocation-request time how to plan its processing distribution 
and further allocations (if I ask for 1.2GB and will be getting 2GB, I can 
bump up at planning time how much work that container will do, and trim my 
subsequent allocation requests accordingly).

Also, are you suggesting we get rid of min/max from the API and make clients 
get such info from the config?

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-742) Log aggregation causes a lot of redundant setPermission calls

2013-06-05 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated YARN-742:


Hadoop Flags: Reviewed

> Log aggregation causes a lot of redundant setPermission calls
> -
>
> Key: YARN-742
> URL: https://issues.apache.org/jira/browse/YARN-742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 0.23.7, 2.0.4-alpha
>Reporter: Kihwal Lee
>Assignee: Jason Lowe
> Fix For: 3.0.0, 2.1.0-beta, 0.23.9
>
> Attachments: YARN-742-1.branch-0.23.patch, YARN-742-1.patch, 
> YARN-742.patch
>
>
> In one of our clusters, namenode RPC is spending 45% of its time on serving 
> setPermission calls. Further investigation has revealed that most calls are 
> redundantly made on /mapred/logs//logs. Also mkdirs calls are made 
> before this.
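
The description implies the natural fix: remember which remote log directories 
have already been created and chmodded, so the NM stops repeating the RPCs for 
every application. A minimal sketch under that assumption (class and method 
names are hypothetical, not the patch's):

{code}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical sketch: cache verified directories so the remote-log root is
// not re-created and re-chmodded for every application being aggregated.
class VerifiedDirs {
  private final Set<Path> verified = new HashSet<Path>();

  synchronized void ensureDir(FileSystem fs, Path dir, FsPermission perm)
      throws IOException {
    if (verified.contains(dir)) {
      return; // already created and permission-set once; skip the namenode RPCs
    }
    if (!fs.exists(dir)) {
      fs.mkdirs(dir, perm);
    }
    fs.setPermission(dir, perm);
    verified.add(dir);
  }
}
{code}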

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676121#comment-13676121
 ] 

Bikas Saha commented on YARN-689:
-

I can see some merit in different schedulers trying out different methods to 
reduce fragmentation and improve utilization in the cluster. The multiplier 
seems to be one approach to do this, though as Arun says it might make the 
code in the scheduler complicated to get right. However, what I don't 
understand is why this value needs to be exposed to the users via the 
registration response. Users are expected to ask for whatever resources they 
want, and they will be allocated containers that fit those resources. 
Containers will never be smaller, but may be larger, than what's requested, 
depending on the logic schedulers use to solve the bin-packing problem. I 
don't see why configs for that logic should be exposed via the API. This 
should be transparent to the users, and probably should be hidden from them 
so that schedulers can evolve. Finally, an AM that really needs this (I don't 
yet see why) could probably get the value from configuration.

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676096#comment-13676096
 ] 

Tom White commented on YARN-689:


> the change in yarn-default.xml from 32 to 4 is because 
> YarnConfiguration.java defines the default as 4, so it is just putting them 
> in sync

That makes sense.

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676019#comment-13676019
 ] 

Alejandro Abdelnur commented on YARN-689:
-

[~tomwhite], thanks for looking at the patch. The change in yarn-default.xml 
from 32 to 4 is because {{YarnConfiguration.java}} defines the default as 4, 
so it is just putting them in sync. It could also be the other way around, 
setting the Java constant to 32, but that seemed too high as a default.

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-689) Add multiplier unit to resourcecapabilities

2013-06-05 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675988#comment-13675988
 ] 

Tom White commented on YARN-689:



I had a look at the latest patch. Why is 
yarn.scheduler.maximum-allocation-vcores in yarn-default.xml changed from 32 to 
4? Apart from that change it looks good to me. +1

Arun, I don't know why you are so resistant to this change. It is a 
backwards-compatible change for users. Indeed, it doesn't change the 
behaviour at all by default. It's not a complicated or risky patch. It adds 
an extra configuration knob so that, instead of the increment being the 
minimum value (of memory or cores), it can be a different value, allowing 
more fine-grained control in some cases, such as the one that Sandy 
[outlined|https://issues.apache.org/jira/browse/YARN-689?focusedCommentId=13661159&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13661159].
 How would you configure YARN to handle this situation?

I don't understand the point about fragmentation either. In the scenario you 
[describe 
above|https://issues.apache.org/jira/browse/YARN-689?focusedCommentId=13673517&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13673517],
 I think you are talking about a situation where AMs are asking for containers 
that are greater than 1GB in size, e.g. 1.7 or 1.9GB. If so, then today if you 
set the minimum to 128MB (to give finer-grained increments) you open yourself 
up to the same possibility of fragmentation since there could be nodes with 
<1GB left if AMs ask for over 1GB. The patch in this JIRA will not change that 
situation (set minimum to 1GB, increment to 128MB) - there will still be the 
same number of nodes with <1GB left and the AMs will not be able to use them. 
So I don't see how this change makes the situation worse.

In fact I think this change could improve utilization. If you set the minimum 
to 1GB then a request for 1.7 or 1.9GB will be changed to 2GB, but this might 
not be what the user wants: the extra memory may not be needed (after all, 
the AM didn't ask for it), and it is a form of poor utilization since there 
is no way that memory can be used by other containers. With a smaller 
increment (128MB) the extra memory would not be allocated to containers that 
are not using it, so it can be used by other containers; e.g. 3 x 1.7GB 
requests and one 1.9GB request could fit in a node with 7GB, whereas only 
three of them could fit if the minimum was 1GB.
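
To make the per-container saving concrete, a small worked example (sizes in 
MB; the packing figures above treat the requests as allocated at roughly 
their asked size):

{code}
// Illustrative only: compares rounding a ~1.7GB request with the 1GB minimum
// acting as the multiplier vs. a separate 128MB increment.
public class IncrementExample {
  static int roundUp(int requestMb, int unitMb) {
    return (int) Math.ceil((double) requestMb / unitMb) * unitMb;
  }
  public static void main(String[] args) {
    int request = 1741;                          // ~1.7GB
    System.out.println(roundUp(request, 1024));  // 2048: minimum-as-multiplier
    System.out.println(roundUp(request, 128));   // 1792: 256MB less per container
  }
}
{code}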

> Add multiplier unit to resourcecapabilities
> ---
>
> Key: YARN-689
> URL: https://issues.apache.org/jira/browse/YARN-689
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, scheduler
>Affects Versions: 2.0.4-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: YARN-689.patch, YARN-689.patch, YARN-689.patch, 
> YARN-689.patch, YARN-689.patch
>
>
> Currently we are overloading the minimum resource value as the actual 
> multiplier used by the scheduler.
> Today, with a minimum memory set to 1GB, requests for 1.5GB are always 
> translated to an allocation of 2GB.
> We should decouple the minimum allocation from the multiplier.
> The multiplier should also be exposed to the client via the 
> RegisterApplicationMasterResponse.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-756) Move PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest to api.records

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675943#comment-13675943
 ] 

Hudson commented on YARN-756:
-

Integrated in Hadoop-Mapreduce-trunk #1447 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1447/])
YARN-756. Remnants from moving Preemption* records to yarn.api where they 
really belong. Contributed by Jian He. (Revision 1489614)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489614
Files : 
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/PreemptionResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/PreemptionResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/PreemptionContract.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/PreemptionResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionMessagePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/StrictPreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionMessagePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java


> Move 
> PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest
>  to api.records
> --
>
> Key: YARN-756
> URL: https://issues.apache.org/jira/browse/YARN-756
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-756.1.patch, YARN-756.2.patch, YARN-756.3.patch, 
> YARN-756.5.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-753) Add individual factory method for api protocol records

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675940#comment-13675940
 ] 

Hudson commented on YARN-753:
-

Integrated in Hadoop-Mapreduce-trunk #1447 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1447/])
YARN-753. Added individual factory methods for all api protocol records and 
converted the records to be abstract classes. Contributed by Jian He. (Revision 
1489644)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489644
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNewApplicationRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNewApplicationResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueInfoRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueInfoResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueUserAclsInfoRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueUserAclsInfoResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshAdminAclsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshAdminAclsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshNodesRequest

[jira] [Commented] (YARN-742) Log aggregation causes a lot of redundant setPermission calls

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675939#comment-13675939
 ] 

Hudson commented on YARN-742:
-

Integrated in Hadoop-Mapreduce-trunk #1447 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1447/])
YARN-742. Log aggregation causes a lot of redundant setPermission calls. 
Contributed by Jason Lowe. (Revision 1489596)

 Result = SUCCESS
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489596
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java


> Log aggregation causes a lot of redundant setPermission calls
> -
>
> Key: YARN-742
> URL: https://issues.apache.org/jira/browse/YARN-742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 0.23.7, 2.0.4-alpha
>Reporter: Kihwal Lee
>Assignee: Jason Lowe
> Attachments: YARN-742-1.branch-0.23.patch, YARN-742-1.patch, 
> YARN-742.patch
>
>
> In one of our clusters, namenode RPC is spending 45% of its time on serving 
> setPermission calls. Further investigation has revealed that most calls are 
> redundantly made on /mapred/logs//logs. Also mkdirs calls are made 
> before this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-724) Move ProtoBase from api.records to api.records.impl.pb

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675944#comment-13675944
 ] 

Hudson commented on YARN-724:
-

Integrated in Hadoop-Mapreduce-trunk #1447 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1447/])
YARN-724. Moved ProtoBase from api.records to api.records.impl.pb. 
Contributed by Jian He.
MAPREDUCE-5303. Changed MR app after moving ProtoBase to package impl.pb via 
YARN-724. Contributed by Jian He. (Revision 1489658)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489658
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/CancelDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/CancelDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/FailTaskAttemptRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/FailTaskAttemptResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetCountersRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetCountersResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDiagnosticsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDiagnosticsResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetJobReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetJobReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptCompletionEventsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptCompletionEventsResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportsRespon

[jira] [Commented] (YARN-764) blank Used Resources on Capacity Scheduler page

2013-06-05 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675917#comment-13675917
 ] 

Thomas Graves commented on YARN-764:


Nemon, thanks for looking at this.  I guess it's because it uses the raw "<" 
character instead of the escaped "&lt;". 

Another option, which I would prefer, is just to escape the string using 
StringEscapeUtils.escapeHtml() in CapacitySchedulerPage.java.  That way, if 
someone adds something else to the string in the future, or it accidentally 
gets changed back, it will still work.
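
A minimal sketch of that option (the literal resource string is illustrative; 
{{StringEscapeUtils}} comes from commons-lang):

{code}
import org.apache.commons.lang.StringEscapeUtils;

public class EscapeExample {
  public static void main(String[] args) {
    // Resource#toString renders as "<memory:..., vCores:...>"; the raw angle
    // brackets are swallowed by the browser unless escaped before rendering.
    String used = "<memory:4096, vCores:4>";
    System.out.println(StringEscapeUtils.escapeHtml(used));
    // prints: &lt;memory:4096, vCores:4&gt;
  }
}
{code}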

> blank Used Resources on Capacity Scheduler page 
> 
>
> Key: YARN-764
> URL: https://issues.apache.org/jira/browse/YARN-764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: nemon lou
>Assignee: nemon lou
> Attachments: YARN-764.patch
>
>
> Even when there are jobs running, used resources is empty on the Capacity 
> Scheduler page for a leaf queue. (I use google-chrome on Windows 7.)
> After changing Resource.java's toString method by replacing "<>" with 
> "{}", this bug gets fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675897#comment-13675897
 ] 

Hadoop QA commented on YARN-760:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586303/YARN-760.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1125//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1125//console

This message is automatically generated.

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Niranjan Singh
>  Labels: newbie
> Attachments: YARN-760.patch, YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.
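
One plausible direction is to wrap the cause in a YARN-native unchecked 
exception instead of an Avro one; a minimal sketch, assuming 
{{YarnRuntimeException}} (the committed patch may use a different type):

{code}
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;

public class NodeManagerStartSketch {
  public void start() {
    try {
      doStart(); // hypothetical helper standing in for the real start logic
    } catch (Exception e) {
      // instead of: throw new AvroRuntimeException(e);
      throw new YarnRuntimeException("Failed to start NodeManager", e);
    }
  }
  private void doStart() throws Exception { /* ... */ }
}
{code}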

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-760:
---

Assignee: Niranjan Singh  (was: Jason Lowe)

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Niranjan Singh
>  Labels: newbie
> Attachments: YARN-760.patch, YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-760:
---

Assignee: Jason Lowe

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Jason Lowe
>  Labels: newbie
> Attachments: YARN-760.patch, YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-742) Log aggregation causes a lot of redundant setPermission calls

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675886#comment-13675886
 ] 

Hudson commented on YARN-742:
-

Integrated in Hadoop-Hdfs-trunk #1421 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1421/])
YARN-742. Log aggregation causes a lot of redundant setPermission calls. 
Contributed by Jason Lowe. (Revision 1489596)

 Result = FAILURE
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489596
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java


> Log aggregation causes a lot of redundant setPermission calls
> -
>
> Key: YARN-742
> URL: https://issues.apache.org/jira/browse/YARN-742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 0.23.7, 2.0.4-alpha
>Reporter: Kihwal Lee
>Assignee: Jason Lowe
> Attachments: YARN-742-1.branch-0.23.patch, YARN-742-1.patch, 
> YARN-742.patch
>
>
> In one of our clusters, namenode RPC is spending 45% of its time on serving 
> setPermission calls. Further investigation has revealed that most calls are 
> redundantly made on /mapred/logs//logs. Also mkdirs calls are made 
> before this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-756) Move PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest to api.records

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675890#comment-13675890
 ] 

Hudson commented on YARN-756:
-

Integrated in Hadoop-Hdfs-trunk #1421 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1421/])
YARN-756. Remnants from moving Preemption* records to yarn.api where they 
really belong. Contributed by Jian He. (Revision 1489614)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489614
Files : 
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/PreemptionResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/PreemptionResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/PreemptionContract.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/PreemptionResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionMessagePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/StrictPreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionMessagePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java


> Move 
> PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest
>  to api.records
> --
>
> Key: YARN-756
> URL: https://issues.apache.org/jira/browse/YARN-756
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-756.1.patch, YARN-756.2.patch, YARN-756.3.patch, 
> YARN-756.5.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-724) Move ProtoBase from api.records to api.records.impl.pb

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675891#comment-13675891
 ] 

Hudson commented on YARN-724:
-

Integrated in Hadoop-Hdfs-trunk #1421 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1421/])
YARN-724. Moved ProtoBase from api.records to api.records.impl.pb. 
Contributed by Jian He.
MAPREDUCE-5303. Changed MR app after moving ProtoBase to package impl.pb via 
YARN-724. Contributed by Jian He. (Revision 1489658)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489658
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/CancelDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/CancelDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/FailTaskAttemptRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/FailTaskAttemptResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetCountersRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetCountersResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDiagnosticsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDiagnosticsResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetJobReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetJobReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptCompletionEventsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptCompletionEventsResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportsResponsePBImpl.j

[jira] [Commented] (YARN-753) Add individual factory method for api protocol records

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675887#comment-13675887
 ] 

Hudson commented on YARN-753:
-

Integrated in Hadoop-Hdfs-trunk #1421 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1421/])
YARN-753. Added individual factory methods for all api protocol records and 
converted the records to be abstract classes. Contributed by Jian He. (Revision 
1489644)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489644
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNewApplicationRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNewApplicationResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueInfoRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueInfoResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueUserAclsInfoRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueUserAclsInfoResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshAdminAclsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshAdminAclsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshNodesRequest.java
* 
/

[jira] [Commented] (YARN-677) Add test methods in org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675882#comment-13675882
 ] 

Hadoop QA commented on YARN-677:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12586309/HDFS-4536-trunk-N9.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1124//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1124//console

This message is automatically generated.

> Add test methods in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
> --
>
> Key: YARN-677
> URL: https://issues.apache.org/jira/browse/YARN-677
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-4536-branch-2-a.patch, 
> HADOOP-4536-branch-2c.patch, HADOOP-4536-trunk-a.patch, 
> HADOOP-4536-trunk-c.patch, HDFS-4536-branch-2--N7.patch, 
> HDFS-4536-branch-2--N8.patch, HDFS-4536-branch-2-N9.patch, 
> HDFS-4536-trunk--N6.patch, HDFS-4536-trunk--N7.patch, 
> HDFS-4536-trunk--N8.patch, HDFS-4536-trunk-N9.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675881#comment-13675881
 ] 

Junping Du commented on YARN-760:
-

+1. Patch looks good to me. Kicking off Jenkins for you.

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>  Labels: newbie
> Attachments: YARN-760.patch, YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-169) Update log4j.appender.EventCounter to use org.apache.hadoop.log.metrics.EventCounter

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675878#comment-13675878
 ] 

Hudson commented on YARN-169:
-

Integrated in Hadoop-Hdfs-0.23-Build #629 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/629/])
YARN-169. Update log4j.appender.EventCounter to use 
org.apache.hadoop.log.metrics.EventCounter (Anthony Rojas via jeagles) 
(Revision 1489584)

 Result = SUCCESS
jeagles : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489584
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties


> Update log4j.appender.EventCounter to use 
> org.apache.hadoop.log.metrics.EventCounter
> 
>
> Key: YARN-169
> URL: https://issues.apache.org/jira/browse/YARN-169
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.0.0-alpha
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.9
>
> Attachments: YARN-169.patch
>
>
> We should update the log4j.appender.EventCounter in 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties
>  to use *org.apache.hadoop.log.metrics.EventCounter* rather than 
> *org.apache.hadoop.metrics.jvm.EventCounter* to avoid triggering the 
> following warning:
> {code}WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. 
> Please use org.apache.hadoop.log.metrics.EventCounter in all the 
> log4j.properties files{code}
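
For reference, a sketch of the one-line change this describes in 
container-log4j.properties (surrounding file contents are assumed, not shown 
in the issue):

{code}
# before: deprecated class, triggers the warning quoted above
# log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
{code}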

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-764) blank Used Resources on Capacity Scheduler page

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675874#comment-13675874
 ] 

Hadoop QA commented on YARN-764:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586305/YARN-764.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1123//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1123//console

This message is automatically generated.

> blank Used Resources on Capacity Scheduler page 
> 
>
> Key: YARN-764
> URL: https://issues.apache.org/jira/browse/YARN-764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: nemon lou
>Assignee: nemon lou
> Attachments: YARN-764.patch
>
>
> Even when there are jobs running, the Used Resources field is empty on the 
> Capacity Scheduler page for leaf queues. (Observed with Google Chrome on 
> Windows 7.)
> After changing Resource.java's toString() method to use "{}" instead of 
> "<>", the bug is fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-677) Add test methods in org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

2013-06-05 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated YARN-677:


Attachment: HDFS-4536-trunk-N9.patch
HDFS-4536-branch-2-N9.patch

> Add test methods in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
> --
>
> Key: YARN-677
> URL: https://issues.apache.org/jira/browse/YARN-677
> Project: Hadoop YARN
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-4536-branch-2-a.patch, 
> HADOOP-4536-branch-2c.patch, HADOOP-4536-trunk-a.patch, 
> HADOOP-4536-trunk-c.patch, HDFS-4536-branch-2--N7.patch, 
> HDFS-4536-branch-2--N8.patch, HDFS-4536-branch-2-N9.patch, 
> HDFS-4536-trunk--N6.patch, HDFS-4536-trunk--N7.patch, 
> HDFS-4536-trunk--N8.patch, HDFS-4536-trunk-N9.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-764) blank Used Resources on Capacity Scheduler page

2013-06-05 Thread nemon lou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nemon lou updated YARN-764:
---

Attachment: YARN-764.patch

No test case added, since it's only a symbol change in toString().
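
For reference, a minimal sketch of the change being described (field names 
assumed from 2.0.4-alpha; the actual patch may differ): browsers parse 
"<...>" as an unknown HTML tag and render nothing, which left the Used 
Resources cell blank, while "{...}" renders as plain text.

{code:java}
public class Resource {
  private final int memory;
  private final int vCores;

  public Resource(int memory, int vCores) {
    this.memory = memory;
    this.vCores = vCores;
  }

  @Override
  public String toString() {
    // before: "<memory:" + memory + ", vCores:" + vCores + ">"
    return "{memory:" + memory + ", vCores:" + vCores + "}";
  }
}
{code}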

> blank Used Resources on Capacity Scheduler page 
> 
>
> Key: YARN-764
> URL: https://issues.apache.org/jira/browse/YARN-764
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.0.4-alpha
>Reporter: nemon lou
>Assignee: nemon lou
> Attachments: YARN-764.patch
>
>
> Even when there are jobs running, the Used Resources field is empty on the 
> Capacity Scheduler page for leaf queues. (Observed with Google Chrome on 
> Windows 7.)
> After changing Resource.java's toString() method to use "{}" instead of 
> "<>", the bug is fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Niranjan Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niranjan Singh updated YARN-760:


Attachment: YARN-760.patch

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>  Labels: newbie
> Attachments: YARN-760.patch, YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-276) Capacity Scheduler can hang when submit many jobs concurrently

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675840#comment-13675840
 ] 

Hadoop QA commented on YARN-276:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586295/YARN-276.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1122//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1122//console

This message is automatically generated.

> Capacity Scheduler can hang when submit many jobs concurrently
> --
>
> Key: YARN-276
> URL: https://issues.apache.org/jira/browse/YARN-276
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0, 2.0.1-alpha
>Reporter: nemon lou
>Assignee: nemon lou
>  Labels: incompatible
> Attachments: YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In Hadoop 2.0.1, when I submit many jobs concurrently, the Capacity 
> Scheduler can hang with most resources taken up by AMs, leaving too few 
> resources for tasks; all applications then hang.
> The cause is that "yarn.scheduler.capacity.maximum-am-resource-percent" is 
> not checked directly. Instead, the property is only used to derive 
> maxActiveApplications, and maxActiveApplications is computed from 
> minimumAllocation (not from what AMs actually use).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Niranjan Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niranjan Singh updated YARN-760:


Attachment: YARN-760.patch

Sorry, to correct my earlier comment: the wrapper is a RuntimeException, not a 
YarnException, and the unused AvroRuntimeException import has also been 
removed.
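
For reference, a sketch of the shape of the change (the class and the 
doStart() stand-in are hypothetical; this is not the verbatim patch):

{code:java}
// Sketch only: the NodeManager start path previously wrapped any failure
// in Avro's AvroRuntimeException although nothing else in the NodeManager
// uses Avro; the fix wraps in a plain RuntimeException instead.
public class NodeManagerStartSketch {
  public void start() {
    try {
      doStart();                      // hypothetical stand-in for the real work
    } catch (Exception e) {
      // before: throw new AvroRuntimeException(e);
      throw new RuntimeException(e);  // plain JDK wrapper, no Avro dependency
    }
  }

  private void doStart() throws Exception {
    // service startup work elided
  }
}
{code}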

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>  Labels: newbie
> Attachments: YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-764) blank Used Resources on Capacity Scheduler page

2013-06-05 Thread nemon lou (JIRA)
nemon lou created YARN-764:
--

 Summary: blank Used Resources on Capacity Scheduler page 
 Key: YARN-764
 URL: https://issues.apache.org/jira/browse/YARN-764
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: nemon lou
Assignee: nemon lou


Even when there are jobs running, the Used Resources field is empty on the 
Capacity Scheduler page for leaf queues. (Observed with Google Chrome on 
Windows 7.)
After changing Resource.java's toString() method to use "{}" instead of "<>", 
the bug is fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-276) Capacity Scheduler can hang when submit many jobs concurrently

2013-06-05 Thread nemon lou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nemon lou updated YARN-276:
---

Attachment: YARN-276.patch

> Capacity Scheduler can hang when submit many jobs concurrently
> --
>
> Key: YARN-276
> URL: https://issues.apache.org/jira/browse/YARN-276
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0, 2.0.1-alpha
>Reporter: nemon lou
>Assignee: nemon lou
>  Labels: incompatible
> Attachments: YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In Hadoop 2.0.1, when I submit many jobs concurrently, the Capacity 
> Scheduler can hang with most resources taken up by AMs, leaving too few 
> resources for tasks; all applications then hang.
> The cause is that "yarn.scheduler.capacity.maximum-am-resource-percent" is 
> not checked directly. Instead, the property is only used to derive 
> maxActiveApplications, and maxActiveApplications is computed from 
> minimumAllocation (not from what AMs actually use).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Niranjan Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niranjan Singh updated YARN-760:


Attachment: (was: YARN-760.patch)

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>  Labels: newbie
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-760) NodeManager throws AvroRuntimeException on failed start

2013-06-05 Thread Niranjan Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niranjan Singh updated YARN-760:


Attachment: YARN-760.patch

Wrapped the exception in a YarnException.

> NodeManager throws AvroRuntimeException on failed start
> ---
>
> Key: YARN-760
> URL: https://issues.apache.org/jira/browse/YARN-760
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>  Labels: newbie
> Attachments: YARN-760.patch
>
>
> NodeManager wraps exceptions that occur in its start method in 
> AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-724) Move ProtoBase from api.records to api.records.impl.pb

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675795#comment-13675795
 ] 

Hudson commented on YARN-724:
-

Integrated in Hadoop-Yarn-trunk #231 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/231/])
YARN-724. Moved ProtoBase from api.records to api.records.impl.pb. 
Contributed by Jian He.
MAPREDUCE-5303. Changed MR app after moving ProtoBase to package impl.pb via 
YARN-724. Contributed by Jian He. (Revision 1489658)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489658
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/CancelDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/CancelDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/FailTaskAttemptRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/FailTaskAttemptResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetCountersRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetCountersResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDiagnosticsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDiagnosticsResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetJobReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetJobReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptCompletionEventsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptCompletionEventsResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskAttemptReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportsRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetTaskReportsResponsePBImpl.jav

[jira] [Commented] (YARN-753) Add individual factory method for api protocol records

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675791#comment-13675791
 ] 

Hudson commented on YARN-753:
-

Integrated in Hadoop-Yarn-trunk #231 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/231/])
YARN-753. Added individual factory methods for all api protocol records and 
converted the records to be abstract classes. Contributed by Jian He. (Revision 
1489644)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489644
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/CancelDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetAllApplicationsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationReportResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterMetricsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNewApplicationRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetNewApplicationResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueInfoRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueInfoResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueUserAclsInfoRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetQueueUserAclsInfoResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshAdminAclsRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshAdminAclsResponse.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RefreshNodesRequest.java
* 
/ha
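
The pattern being added, sketched for one record (assumed shape, not the 
verbatim patch; Records is the existing org.apache.hadoop.yarn.util.Records 
helper):

{code:java}
import org.apache.hadoop.yarn.util.Records;

// Each abstract protocol record gains a static factory so callers no
// longer need to construct the PB-backed implementation directly.
public abstract class GetNewApplicationRequest {
  public static GetNewApplicationRequest newInstance() {
    // Records.newRecord looks up and instantiates the PB-backed impl
    return Records.newRecord(GetNewApplicationRequest.class);
  }
}
{code}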

[jira] [Commented] (YARN-756) Move PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest to api.records

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675794#comment-13675794
 ] 

Hudson commented on YARN-756:
-

Integrated in Hadoop-Yarn-trunk #231 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/231/])
YARN-756. Remnants from moving Preemption* records to yarn.api where they 
really belong. Contributed by Jian He. (Revision 1489614)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489614
Files : 
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/PreemptionResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/PreemptionResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/PreemptionContract.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/PreemptionResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/PreemptionMessagePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/StrictPreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionMessagePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/PreemptionResourceRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/StrictPreemptionContractPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java


> Move 
> PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest
>  to api.records
> --
>
> Key: YARN-756
> URL: https://issues.apache.org/jira/browse/YARN-756
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.1.0-beta
>
> Attachments: YARN-756.1.patch, YARN-756.2.patch, YARN-756.3.patch, 
> YARN-756.5.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-742) Log aggregation causes a lot of redundant setPermission calls

2013-06-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675790#comment-13675790
 ] 

Hudson commented on YARN-742:
-

Integrated in Hadoop-Yarn-trunk #231 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/231/])
YARN-742. Log aggregation causes a lot of redundant setPermission calls. 
Contributed by Jason Lowe. (Revision 1489596)

 Result = SUCCESS
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1489596
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java


> Log aggregation causes a lot of redundant setPermission calls
> -
>
> Key: YARN-742
> URL: https://issues.apache.org/jira/browse/YARN-742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 0.23.7, 2.0.4-alpha
>Reporter: Kihwal Lee
>Assignee: Jason Lowe
> Attachments: YARN-742-1.branch-0.23.patch, YARN-742-1.patch, 
> YARN-742.patch
>
>
> In one of our clusters, the namenode RPC layer is spending 45% of its time 
> serving setPermission calls. Further investigation has revealed that most of 
> these calls are made redundantly on /mapred/logs//logs, and that mkdirs 
> calls are made before them.
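
A hypothetical sketch of the kind of fix this implies (class and method 
names assumed; not the actual patch): probe the directory once and skip the 
mkdirs/setPermission round trips when it already exists.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class RemoteLogDirSketch {
  static void ensureDir(FileSystem fs, Path dir, FsPermission perm)
      throws IOException {
    if (!fs.exists(dir)) {         // skip the redundant namenode calls
      fs.mkdirs(dir, perm);
      fs.setPermission(dir, perm); // mkdirs applies the umask, so set explicitly
    }
  }
}
{code}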

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-276) Capacity Scheduler can hang when submit many jobs concurrently

2013-06-05 Thread nemon lou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675778#comment-13675778
 ] 

nemon lou commented on YARN-276:


Hi Thomas, thank you for your review.
The "used resources not showing up" issue seems to be a pre-existing bug; I 
will file a separate JIRA for it. (Resource.java's toString() method uses the 
symbols "<>", which browsers treat as a tag and do not render.)
The "divide by zero exception" problem has not been fixed, as I have not yet 
found which piece of code can cause it.
The other review comments are addressed in the latest patch. Thanks.

After reconsidering the user limit, I find that the 
"maxAMResourcePerQueuePerUserPercent" property I added is not the right 
approach. It will be removed, and maxAMResourcePerQueue will be checked for 
each user instead, as sketched below.
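
A hypothetical sketch of that check (names assumed; this is not the actual 
patch), enforcing maximum-am-resource-percent against the memory AMs 
actually hold:

{code:java}
class AMResourceLimit {
  // Compare real AM usage against the configured percent of the queue's
  // capacity, instead of only capping maxActiveApplications derived from
  // minimumAllocation.
  static boolean canActivateAM(int usedAMMemoryMB, int requestedAMMemoryMB,
                               int queueMaxMemoryMB,
                               float maxAMResourcePercent) {
    int amLimitMB = (int) (queueMaxMemoryMB * maxAMResourcePercent);
    return usedAMMemoryMB + requestedAMMemoryMB <= amLimitMB;
  }
}
{code}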


> Capacity Scheduler can hang when submit many jobs concurrently
> --
>
> Key: YARN-276
> URL: https://issues.apache.org/jira/browse/YARN-276
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0, 2.0.1-alpha
>Reporter: nemon lou
>Assignee: nemon lou
>  Labels: incompatible
> Attachments: YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch, YARN-276.patch, 
> YARN-276.patch, YARN-276.patch, YARN-276.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In Hadoop 2.0.1, when I submit many jobs concurrently, the Capacity 
> Scheduler can hang with most resources taken up by AMs, leaving too few 
> resources for tasks; all applications then hang.
> The cause is that "yarn.scheduler.capacity.maximum-am-resource-percent" is 
> not checked directly. Instead, the property is only used to derive 
> maxActiveApplications, and maxActiveApplications is computed from 
> minimumAllocation (not from what AMs actually use).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-117) Enhance YARN service model

2013-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675769#comment-13675769
 ] 

Hadoop QA commented on YARN-117:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586218/YARN-117-019.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 39 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 6 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.conf.TestConfiguration
  org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryParsing
  org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryEvents
  org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryServer
  org.apache.hadoop.yarn.client.TestNMClientAsync

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1121//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1121//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-app.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1121//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1121//console

This message is automatically generated.

> Enhance YARN service model
> --
>
> Key: YARN-117
> URL: https://issues.apache.org/jira/browse/YARN-117
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.0.4-alpha
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-117-007.patch, YARN-117-008.patch, 
> YARN-117-009.patch, YARN-117-010.patch, YARN-117-011.patch, 
> YARN-117-012.patch, YARN-117-013.patch, YARN-117-014.patch, 
> YARN-117-015.patch, YARN-117-016.patch, YARN-117-018.patch, 
> YARN-117-019.patch, YARN-117-2.patch, YARN-117-3.patch, YARN-117.4.patch, 
> YARN-117.5.patch, YARN-117.6.patch, YARN-117.patch
>
>
> Having played with the YARN service model, there are some issues
> that I've identified based on past work and initial use.
> This JIRA issue is an overall one to cover the issues, with solutions pushed 
> out to separate JIRAs.
> h2. state model prevents stopped state being entered if you could not 
> successfully start the service.
> In the current lifecycle you cannot stop a service unless it was successfully 
> started, but
> * {{init()}} may acquire resources that need to be explicitly released
> * if the {{start()}} operation fails partway through, the {{stop()}} 
> operation may be needed to release resources.
> *Fix:* make {{stop()}} a valid state transition from all states, and require 
> implementations to be able to stop safely without requiring all fields to 
> be non-null.
> Before anyone points out that the {{stop()}} operations assume all fields 
> are valid and will NPE if called before {{start()}}: MAPREDUCE-3431 shows 
> that this problem arises today, and MAPREDUCE-3502 is a fix for it. It is 
> independent of the rest o
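
To make the proposed rule concrete, a sketch (types and names assumed, not 
the actual patch) of a stop() that is legal from any state and tolerates 
fields left null by a start() that failed partway through:

{code:java}
import java.io.Closeable;
import java.io.IOException;

public class SampleService {
  private Closeable connection;   // acquired during start(); may be null

  public synchronized void stop() throws IOException {
    if (connection != null) {     // guard: start() may have failed early
      connection.close();
      connection = null;          // idempotent: safe to call stop() twice
    }
  }
}
{code}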

[jira] [Commented] (YARN-686) Flatten NodeReport

2013-06-05 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675712#comment-13675712
 ] 

Sandy Ryza commented on YARN-686:
-

Yes, will post a patch tomorrow.

> Flatten NodeReport
> --
>
> Key: YARN-686
> URL: https://issues.apache.org/jira/browse/YARN-686
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.0.4-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>
> The NodeReport returned by getClusterNodes or given to AMs in heartbeat 
> responses includes both a NodeState (enum) and a NodeHealthStatus (object).  
> As UNHEALTHY is already a NodeState, a separate NodeHealthStatus doesn't seem 
> necessary.  I propose eliminating NodeHealthStatus#getIsNodeHealthy and 
> moving its two other methods, getHealthReport and getLastHealthReportTime, 
> into NodeReport.
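
A sketch of the proposed flattened API (assumed signatures; nothing here is 
a committed interface):

{code:java}
// NodeHealthStatus's remaining accessors move onto NodeReport, and
// "unhealthy" is inferred from NodeState.UNHEALTHY rather than a boolean.
public abstract class NodeReport {
  public abstract NodeState getNodeState();        // UNHEALTHY covers health
  public abstract String getHealthReport();        // from NodeHealthStatus
  public abstract long getLastHealthReportTime();  // from NodeHealthStatus
}
{code}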

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira