[jira] [Updated] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1611:


Attachment: YARN-1611.9.patch

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.2.patch, YARN-1611.2.patch, 
 YARN-1611.3.patch, YARN-1611.3.patch, YARN-1611.4.patch, YARN-1611.5.patch, 
 YARN-1611.6.patch, YARN-1611.7.patch, YARN-1611.8.patch, YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1371#comment-1371
 ] 

Xuan Gong commented on YARN-1611:
-

The Fair Scheduler does not get its settings from a Configuration object. The 
fair-scheduler.xml file is in a different format than a typical Hadoop 
configuration file, so let us handle refreshing the Fair Scheduler 
configuration separately. That work will be tracked in 
https://issues.apache.org/jira/browse/YARN-1679
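
For context, a rough sketch of the format difference (my illustration only, not part of the patch): capacity-scheduler.xml is an ordinary Hadoop key/value configuration resource and can be loaded into a plain Configuration object, whereas fair-scheduler.xml is a nested allocation document that the Fair Scheduler parses itself.

{code}
// Sketch only: Capacity Scheduler settings live in the generic Hadoop
// <property><name/><value/> format and can be read through a Configuration
// object. The property key below is just an illustrative example.
import org.apache.hadoop.conf.Configuration;

public class CapacitySchedulerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.addResource("capacity-scheduler.xml"); // plain key/value XML on the classpath
    System.out.println(conf.get("yarn.scheduler.capacity.maximum-applications"));
    // fair-scheduler.xml is instead a nested <allocations><queue>...</queue>
    // document, so it cannot simply be loaded into a Configuration like this.
  }
}
{code}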

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.2.patch, YARN-1611.2.patch, 
 YARN-1611.3.patch, YARN-1611.3.patch, YARN-1611.4.patch, YARN-1611.5.patch, 
 YARN-1611.6.patch, YARN-1611.7.patch, YARN-1611.8.patch, YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1611:


Attachment: YARN-1611.9.patch

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.2.patch, YARN-1611.2.patch, 
 YARN-1611.3.patch, YARN-1611.3.patch, YARN-1611.4.patch, YARN-1611.5.patch, 
 YARN-1611.6.patch, YARN-1611.7.patch, YARN-1611.8.patch, YARN-1611.9.patch, 
 YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1383#comment-1383
 ] 

Hadoop QA commented on YARN-1611:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626519/YARN-1611.9.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2985//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2985//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2985//console

This message is automatically generated.

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.2.patch, YARN-1611.2.patch, 
 YARN-1611.3.patch, YARN-1611.3.patch, YARN-1611.4.patch, YARN-1611.5.patch, 
 YARN-1611.6.patch, YARN-1611.7.patch, YARN-1611.8.patch, YARN-1611.9.patch, 
 YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1385#comment-1385
 ] 

Hadoop QA commented on YARN-1611:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626520/YARN-1611.9.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2986//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2986//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2986//console

This message is automatically generated.

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.2.patch, YARN-1611.2.patch, 
 YARN-1611.3.patch, YARN-1611.3.patch, YARN-1611.4.patch, YARN-1611.5.patch, 
 YARN-1611.6.patch, YARN-1611.7.patch, YARN-1611.8.patch, YARN-1611.9.patch, 
 YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM, and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1659) Define ApplicationTimelineStore interface and store-facing entity, entity-info and event objects

2014-02-02 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-1659:
-

Attachment: YARN-1659-7.patch

Okay, here's a new patch. [~zjshen], you'll have to fix the translation from 
query parameters to fields in YARN-1636 (among many other changes).

 Define ApplicationTimelineStore interface and store-facing entity, 
 entity-info and event objects
 

 Key: YARN-1659
 URL: https://issues.apache.org/jira/browse/YARN-1659
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi
 Attachments: YARN-1659-1.patch, YARN-1659-3.patch, YARN-1659-4.patch, 
 YARN-1659-5.patch, YARN-1659-6.patch, YARN-1659-7.patch, YARN-1659.2.patch


 These will be used by the ApplicationTimelineStore interface.  The web services 
 will convert the store-facing objects to the user-facing objects.
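
A minimal sketch of that split, with hypothetical class names (nothing here is taken from the patch): the store works with its own entity type, and the web services layer converts it to the user-facing type before returning it.

{code}
// Hypothetical illustration of the store-facing / user-facing split described
// above. Class and field names are placeholders, not the ones in the patch.
public class TimelineLayeringSketch {

  /** Store-facing object, as the store interface would see it. */
  static class StoreEntity {
    String entityId;
    String entityType;
    long startTime;
  }

  /** User-facing object, as returned by the web services. */
  static class UserEntity {
    String id;
    String type;
    long startTime;
  }

  /** The web services layer performs the conversion. */
  static UserEntity toUserFacing(StoreEntity e) {
    UserEntity u = new UserEntity();
    u.id = e.entityId;
    u.type = e.entityType;
    u.startTime = e.startTime;
    return u;
  }
}
{code}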



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1681) When banned.users is not set in LCE's container-executor.cfg, submit job with user in DEFAULT_BANNED_USERS will receive unclear error message

2014-02-02 Thread Zhichun Wu (JIRA)
Zhichun Wu created YARN-1681:


 Summary: When banned.users is not set in LCE's 
container-executor.cfg, submit job with user in DEFAULT_BANNED_USERS will 
receive unclear error message
 Key: YARN-1681
 URL: https://issues.apache.org/jira/browse/YARN-1681
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Zhichun Wu
Priority: Minor


When using LCE in a secure setup, if banned.users is not set in 
container-executor.cfg, submitting a job as a user in DEFAULT_BANNED_USERS 
(mapred, hdfs, bin, 0) will produce an unclear error message.
For example, if we use hdfs to submit an MR job, we may see the following on 
the YARN app overview page:
{code}
appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
Application application_1391353981633_0003 initialization failed (exitCode=139) 
with output: 
{code}

while the preferred error message would look like:
{code}
appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
Application application_1391353981633_0003 initialization failed (exitCode=139) 
with output: Requested user hdfs is banned 
{code}

Just a minor bug, and I would like to start contributing to hadoop-common 
with it. :)




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1681) When banned.users is not set in LCE's container-executor.cfg, submit job with user in DEFAULT_BANNED_USERS will receive unclear error message

2014-02-02 Thread Zhichun Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichun Wu updated YARN-1681:
-

Attachment: YARN-1681.patch

 When banned.users is not set in LCE's container-executor.cfg, submit job 
 with user in DEFAULT_BANNED_USERS will receive unclear error message
 ---

 Key: YARN-1681
 URL: https://issues.apache.org/jira/browse/YARN-1681
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Zhichun Wu
Priority: Minor
  Labels: container
 Attachments: YARN-1681.patch


 When using LCE in a secure setup, if banned.users is not set in 
 container-executor.cfg, submitting a job as a user in DEFAULT_BANNED_USERS 
 (mapred, hdfs, bin, 0) will produce an unclear error message.
 For example, if we use hdfs to submit an MR job, we may see the following on 
 the YARN app overview page:
 {code}
 appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
 Application application_1391353981633_0003 initialization failed 
 (exitCode=139) with output: 
 {code}
 while the preferred error message would look like:
 {code}
 appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
 Application application_1391353981633_0003 initialization failed 
 (exitCode=139) with output: Requested user hdfs is banned 
 {code}
 Just a minor bug, and I would like to start contributing to hadoop-common 
 with it. :)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1681) When banned.users is not set in LCE's container-executor.cfg, submit job with user in DEFAULT_BANNED_USERS will receive unclear error message

2014-02-02 Thread Zhichun Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888967#comment-13888967
 ] 

Zhichun Wu commented on YARN-1681:
--

When banned.users is not set, banned_users will be NULL in 
container-executor.c, so free_values(banned_users) becomes free_values(NULL) 
and the error occurs:
{code}
  for(; *banned_user; ++banned_user) {
    if (strcmp(*banned_user, user) == 0) {
      free(user_info);
      if (banned_users != (char**)DEFAULT_BANNED_USERS) {
        free_values(banned_users);
      }
      fprintf(LOGFILE, "Requested user %s is banned\n", user);
      return NULL;
    }
  }
{code}

 When banned.users is not set in LCE's container-executor.cfg, submit job 
 with user in DEFAULT_BANNED_USERS will receive unclear error message
 ---

 Key: YARN-1681
 URL: https://issues.apache.org/jira/browse/YARN-1681
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Zhichun Wu
Priority: Minor
  Labels: container
 Attachments: YARN-1681.patch


 When using LCE in a secure setup, if banned.users is not set in 
 container-executor.cfg, submitting a job as a user in DEFAULT_BANNED_USERS 
 (mapred, hdfs, bin, 0) will produce an unclear error message.
 For example, if we use hdfs to submit an MR job, we may see the following on 
 the YARN app overview page:
 {code}
 appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
 Application application_1391353981633_0003 initialization failed 
 (exitCode=139) with output: 
 {code}
 while the preferred error message would look like:
 {code}
 appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
 Application application_1391353981633_0003 initialization failed 
 (exitCode=139) with output: Requested user hdfs is banned 
 {code}
 Just a minor bug, and I would like to start contributing to hadoop-common 
 with it. :)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1681) When banned.users is not set in LCE's container-executor.cfg, submit job with user in DEFAULT_BANNED_USERS will receive unclear error message

2014-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13888976#comment-13888976
 ] 

Hadoop QA commented on YARN-1681:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626536/YARN-1681.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2987//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2987//console

This message is automatically generated.

 When banned.users is not set in LCE's container-executor.cfg, submit job 
 with user in DEFAULT_BANNED_USERS will receive unclear error message
 ---

 Key: YARN-1681
 URL: https://issues.apache.org/jira/browse/YARN-1681
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.2.0
Reporter: Zhichun Wu
Priority: Minor
  Labels: container
 Attachments: YARN-1681.patch


 When using LCE in a secure setup, if banned.users is not set in 
 container-executor.cfg, submitting a job as a user in DEFAULT_BANNED_USERS 
 (mapred, hdfs, bin, 0) will produce an unclear error message.
 For example, if we use hdfs to submit an MR job, we may see the following on 
 the YARN app overview page:
 {code}
 appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
 Application application_1391353981633_0003 initialization failed 
 (exitCode=139) with output: 
 {code}
 while the preferred error message would look like:
 {code}
 appattempt_1391353981633_0003_02 exited with exitCode: -1000 due to: 
 Application application_1391353981633_0003 initialization failed 
 (exitCode=139) with output: Requested user hdfs is banned 
 {code}
 Just a minor bug, and I would like to start contributing to hadoop-common 
 with it. :)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Moved] (YARN-1682) TestRMRestart#testRMRestartSucceededApp occasionally fails

2014-02-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved MAPREDUCE-5738 to YARN-1682:
---

Key: YARN-1682  (was: MAPREDUCE-5738)
Project: Hadoop YARN  (was: Hadoop Map/Reduce)

 TestRMRestart#testRMRestartSucceededApp occasionally fails
 --

 Key: YARN-1682
 URL: https://issues.apache.org/jira/browse/YARN-1682
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Ted Yu

 From https://builds.apache.org/job/Hadoop-Yarn-trunk/468/console :
 {code}
 testRMRestartSucceededApp(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
   Time elapsed: 8.129 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:92)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.verifyAppReportAfterRMRestart(TestRMRestart.java:900)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartSucceededApp(TestRMRestart.java:774)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1611:


Attachment: YARN-1611.10.patch

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.10.patch, 
 YARN-1611.2.patch, YARN-1611.2.patch, YARN-1611.3.patch, YARN-1611.3.patch, 
 YARN-1611.4.patch, YARN-1611.5.patch, YARN-1611.6.patch, YARN-1611.7.patch, 
 YARN-1611.8.patch, YARN-1611.9.patch, YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1683) Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.liveContainers

2014-02-02 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-1683:
---

 Summary: Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.liveContainers
 Key: YARN-1683
 URL: https://issues.apache.org/jira/browse/YARN-1683
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Xuan Gong


Jenkins reports this bug several times.
The details are: 
Bug type IS2_INCONSISTENT_SYNC 
In class 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt
Field 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.liveContainers
Synchronized 91% of the time
Unsynchronized access at SchedulerApplicationAttempt.java:[line 439]
Synchronized access at SchedulerApplicationAttempt.java:[line 410]
Synchronized access at SchedulerApplicationAttempt.java:[line 67]
Synchronized access at SchedulerApplicationAttempt.java:[line 170]
Synchronized access at SchedulerApplicationAttempt.java:[line 423]
Synchronized access at SchedulerApplicationAttempt.java:[line 114]
Synchronized access at SchedulerApplicationAttempt.java:[line 403]
Synchronized access at FSSchedulerApp.java:[line 279]
Synchronized access at FSSchedulerApp.java:[line 94]
Synchronized access at FiCaSchedulerApp.java:[line 129]
Synchronized access at FiCaSchedulerApp.java:[line 77]
Synchronized access at FiCaSchedulerApp.java:[line 232]
Synchronized access at FiCaSchedulerApp.java:[line 209]
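
For anyone unfamiliar with this Findbugs pattern, here is a minimal, generic illustration of IS2_INCONSISTENT_SYNC (not the actual SchedulerApplicationAttempt code): a field guarded by the object lock almost everywhere, but read without the lock in one method.

{code}
// Generic shape of IS2_INCONSISTENT_SYNC, not the real SchedulerApplicationAttempt.
import java.util.HashMap;
import java.util.Map;

public class InconsistentSyncSketch {
  private final Map<String, String> liveContainers = new HashMap<String, String>();

  public synchronized void addContainer(String id, String info) {
    liveContainers.put(id, info);      // synchronized access
  }

  public synchronized String getContainer(String id) {
    return liveContainers.get(id);     // synchronized access
  }

  public int countContainers() {
    return liveContainers.size();      // unsynchronized access: Findbugs flags the field
  }
}
{code}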



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889083#comment-13889083
 ] 

Xuan Gong commented on YARN-1611:
-

bq. Document the config-provider property in yarn-default.xml

ADDED

bq. CapacityScheduler shouldn't check for 
YarnConfiguration.DEFAULT_RM_CONFIGURATION_PROVIDER_CLASS to determine if it 
should useLocalConfigurationProvider. It should directly compare with 
org.apache.hadoop.yarn.LocalConfigurationProvider. This is in case the default 
changes in the future.

CHANGED

bq. TestRMAdminService shouldn't write and delete config files in 
fs.getHomeDirectory(). Please use a test-specific directory in target. See 
BaseContainerManagerTest for example.

FIXED

bq. And did you already file a ticket for the trunk findBugs warning?

https://issues.apache.org/jira/browse/YARN-1683 has been filed to track it.
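
To make the CapacityScheduler comparison point above concrete, a rough sketch of the two comparison styles (method and variable names are mine; only LocalConfigurationProvider and the default constant are named in the review comment):

{code}
// Sketch only: contrast the brittle check against the default-value constant
// with a direct comparison against LocalConfigurationProvider, which keeps
// working even if the default provider class changes later.
import org.apache.hadoop.yarn.LocalConfigurationProvider;

public class ProviderCheckSketch {

  // Brittle: tied to whatever the current default happens to be.
  static boolean usesLocalProvider(String configuredClass, String defaultClass) {
    return configuredClass.equals(defaultClass);
  }

  // What the reviewer asked for: compare directly with the local provider class.
  static boolean usesLocalProvider(String configuredClass) {
    return LocalConfigurationProvider.class.getName().equals(configuredClass);
  }
}
{code}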

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.10.patch, 
 YARN-1611.2.patch, YARN-1611.2.patch, YARN-1611.3.patch, YARN-1611.3.patch, 
 YARN-1611.4.patch, YARN-1611.5.patch, YARN-1611.6.patch, YARN-1611.7.patch, 
 YARN-1611.8.patch, YARN-1611.9.patch, YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1611) Make admin refresh of capacity scheduler configuration work across RM failover

2014-02-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889096#comment-13889096
 ] 

Hadoop QA commented on YARN-1611:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12626567/YARN-1611.10.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2988//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2988//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2988//console

This message is automatically generated.

 Make admin refresh of capacity scheduler configuration work across RM failover
 --

 Key: YARN-1611
 URL: https://issues.apache.org/jira/browse/YARN-1611
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-1611.1.patch, YARN-1611.10.patch, 
 YARN-1611.2.patch, YARN-1611.2.patch, YARN-1611.3.patch, YARN-1611.3.patch, 
 YARN-1611.4.patch, YARN-1611.5.patch, YARN-1611.6.patch, YARN-1611.7.patch, 
 YARN-1611.8.patch, YARN-1611.9.patch, YARN-1611.9.patch


 Currently, if we do refresh* for a standby RM, it will fail over to the 
 current active RM and do the refresh* based on the local configuration file 
 of the active RM. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1498) Common scheduler changes for moving apps between queues

2014-02-02 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13889187#comment-13889187
 ] 

Karthik Kambatla commented on YARN-1498:


+1 to the addendum.

 Common scheduler changes for moving apps between queues
 ---

 Key: YARN-1498
 URL: https://issues.apache.org/jira/browse/YARN-1498
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 3.0.0

 Attachments: YARN-1498-1.patch, YARN-1498-addendum.patch, 
 YARN-1498.patch, YARN-1498.patch


 This JIRA is to track changes that aren't specific to any particular scheduler 
 but that help the schedulers support moving apps between queues.  In particular, 
 it makes sure that QueueMetrics are properly updated when an app changes queues.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (YARN-1683) Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.liveContainers

2014-02-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-1683.


Resolution: Duplicate

YARN-1498 caused this and has an addendum patch that fixes it. 

 Inconsistent synchronization of 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.liveContainers
 --

 Key: YARN-1683
 URL: https://issues.apache.org/jira/browse/YARN-1683
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Xuan Gong

 Jenkins reports this bug several times.
 The details are: 
 Bug type IS2_INCONSISTENT_SYNC 
 In class 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt
 Field 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.liveContainers
 Synchronized 91% of the time
 Unsynchronized access at SchedulerApplicationAttempt.java:[line 439]
 Synchronized access at SchedulerApplicationAttempt.java:[line 410]
 Synchronized access at SchedulerApplicationAttempt.java:[line 67]
 Synchronized access at SchedulerApplicationAttempt.java:[line 170]
 Synchronized access at SchedulerApplicationAttempt.java:[line 423]
 Synchronized access at SchedulerApplicationAttempt.java:[line 114]
 Synchronized access at SchedulerApplicationAttempt.java:[line 403]
 Synchronized access at FSSchedulerApp.java:[line 279]
 Synchronized access at FSSchedulerApp.java:[line 94]
 Synchronized access at FiCaSchedulerApp.java:[line 129]
 Synchronized access at FiCaSchedulerApp.java:[line 77]
 Synchronized access at FiCaSchedulerApp.java:[line 232]
 Synchronized access at FiCaSchedulerApp.java:[line 209]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)