[jira] [Commented] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835237#comment-13835237
 ] 

Rekha Joshi commented on YARN-1457:
---

The attached log has details of the failures both without -Pnative and with -DskipTests=true.


> YARN single node install issues on mvn clean install assembly:assembly on 
> mapreduce project
> ---
>
> Key: YARN-1457
> URL: https://issues.apache.org/jira/browse/YARN-1457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.5-alpha
>Reporter: Rekha Joshi
>Priority: Minor
>  Labels: mvn
> Attachments: yarn-mvn-mapreduce.txt
>
>
> YARN single node install - 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
> On Mac OSX 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
> clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
> protobuf).
> But on hadoop-mapreduce-project, mvn install fails for tests with the errors below.
> $ mvn clean install assembly:assembly -Pnative
> errors as in the attached yarn-mvn-mapreduce.txt
> On $ mvn clean install assembly:assembly -DskipTests
> Reactor Summary:
> [INFO] 
> [INFO] hadoop-mapreduce-client ... SUCCESS [2.410s]
> [INFO] hadoop-mapreduce-client-core .. SUCCESS [13.781s]
> [INFO] hadoop-mapreduce-client-common  SUCCESS [8.486s]
> [INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [0.774s]
> [INFO] hadoop-mapreduce-client-app ... SUCCESS [4.409s]
> [INFO] hadoop-mapreduce-client-hs  SUCCESS [1.618s]
> [INFO] hadoop-mapreduce-client-jobclient . SUCCESS [4.470s]
> [INFO] hadoop-mapreduce-client-hs-plugins  SUCCESS [0.561s]
> [INFO] Apache Hadoop MapReduce Examples .. SUCCESS [1.620s]
> [INFO] hadoop-mapreduce .. FAILURE [10.107s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 49.606s
> [INFO] Finished at: Thu Nov 28 16:20:52 GMT+05:30 2013
> [INFO] Final Memory: 34M/118M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on 
> project hadoop-mapreduce: Error reading assemblies: No assembly descriptors 
> found. -> [Help 1]
> $ mvn package -Pdist -DskipTests=true -Dtar
> works.
> The documentation needs to be updated for possible issues and resolutions.
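For context on the assembly failure above: maven-assembly-plugin's assembly goal needs at least one descriptor or built-in descriptorRef configured, otherwise it fails with exactly "Error reading assemblies: No assembly descriptors found". A minimal, hypothetical pom.xml fragment (illustrative only, not the actual hadoop-mapreduce pom) showing the kind of configuration the goal expects:

```xml
<!-- Hypothetical fragment: the plugin fails with "No assembly descriptors
     found" when neither <descriptors> nor <descriptorRefs> is configured. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.3</version>
  <configuration>
    <descriptorRefs>
      <!-- jar-with-dependencies is a built-in descriptor -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
```

The -Pdist profile wires its own assembly descriptor into the build, which is presumably why mvn package -Pdist -DskipTests=true -Dtar succeeds.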



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated YARN-1457:
--

Labels: mvn  (was: )






[jira] [Updated] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated YARN-1457:
--

Component/s: (was: site)
 documentation






[jira] [Updated] (YARN-1462) AHS API and JHS changes to handle tags for completed MR jobs

2013-11-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1462:
---

Target Version/s: 2.3.0

> AHS API and JHS changes to handle tags for completed MR jobs
> 
>
> Key: YARN-1462
> URL: https://issues.apache.org/jira/browse/YARN-1462
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.2.0
>Reporter: Karthik Kambatla
>
> AHS related work for tags. 





[jira] [Created] (YARN-1462) AHS API and JHS changes to handle tags for completed MR jobs

2013-11-28 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-1462:
--

 Summary: AHS API and JHS changes to handle tags for completed MR 
jobs
 Key: YARN-1462
 URL: https://issues.apache.org/jira/browse/YARN-1462
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Karthik Kambatla


AHS related work for tags. 





[jira] [Created] (YARN-1461) RM API and RM changes to handle tags for running jobs

2013-11-28 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-1461:
--

 Summary: RM API and RM changes to handle tags for running jobs
 Key: YARN-1461
 URL: https://issues.apache.org/jira/browse/YARN-1461
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla








[jira] [Commented] (YARN-1390) Provide a way to capture source of an application to be queried through REST or Java Client APIs

2013-11-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835216#comment-13835216
 ] 

Karthik Kambatla commented on YARN-1390:


As proposed, let us handle the bulk of this on YARN-1399. Leaving this JIRA 
open to handle any pending work specific to lineage information.

> Provide a way to capture source of an application to be queried through REST 
> or Java Client APIs
> 
>
> Key: YARN-1390
> URL: https://issues.apache.org/jira/browse/YARN-1390
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api
>Affects Versions: 2.2.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> In addition to other fields like application-type (added in YARN-563), it is 
> useful to have an applicationSource field to track the source of an 
> application. The application source can be useful in (1) fetching only those 
> applications a user is interested in, (2) potentially adding source-specific 
> optimizations in the future. 
> Examples of sources are: User-defined project names, Pig, Hive, Oozie, Sqoop 
> etc.
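As a rough sketch of use case (1) above (illustrative stand-in types, not the actual YARN client API), filtering application reports by such a source field might look like:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Hypothetical stand-in for a YARN application report carrying the
    // proposed applicationSource field; not the real API.
    static class AppReport {
        final String id;
        final String source; // e.g. "Pig", "Hive", "Oozie", or a project name
        AppReport(String id, String source) { this.id = id; this.source = source; }
    }

    // Use case (1): fetch only the applications from a given source.
    static List<AppReport> bySource(List<AppReport> apps, String source) {
        List<AppReport> out = new ArrayList<>();
        for (AppReport a : apps) {
            if (a.source.equals(source)) out.add(a);
        }
        return out;
    }

    public static void main(String[] args) {
        List<AppReport> apps = List.of(
            new AppReport("application_1", "Pig"),
            new AppReport("application_2", "Hive"),
            new AppReport("application_3", "Pig"));
        System.out.println(bySource(apps, "Pig").size()); // prints 2
    }
}
```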





[jira] [Commented] (YARN-1028) Add FailoverProxyProvider like capability to RMProxy

2013-11-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835213#comment-13835213
 ] 

Karthik Kambatla commented on YARN-1028:


Sorry for the digression. After discussion on HADOOP-10127, I think it is best 
not to handle the delay in failover in this JIRA. Created YARN-1460 for a more 
comprehensive overhaul of the connections from Client/AM/NM to the RM.

Will post a follow-up patch that captures any of the remaining work.

> Add FailoverProxyProvider like capability to RMProxy
> 
>
> Key: YARN-1028
> URL: https://issues.apache.org/jira/browse/YARN-1028
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bikas Saha
>Assignee: Karthik Kambatla
> Attachments: yarn-1028-1.patch, yarn-1028-draft-cumulative.patch
>
>
> RMProxy layer currently abstracts RM discovery and implements it by looking 
> up service information from configuration. Motivated by HDFS and using 
> existing classes from Common, we can add failover proxy providers that may 
> provide RM discovery in extensible ways.
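As a rough illustration of the direction described above (names here are illustrative, not the actual Hadoop/YARN API), a failover proxy provider hides which RM address is in use behind a small interface and rotates candidates on failover:

```java
import java.util.List;

public class Main {
    // Hypothetical sketch of a FailoverProxyProvider-style abstraction for RM
    // discovery; illustrative names only, not the real Hadoop interfaces.
    interface ProxyProvider {
        String getProxy();       // address of the RM currently believed active
        void performFailover();  // advance to the next candidate RM
    }

    // Candidates come from configuration, as RMProxy does today; failover
    // simply moves to the next configured entry.
    static class ConfiguredFailoverProvider implements ProxyProvider {
        private final List<String> rms;
        private int current = 0;
        ConfiguredFailoverProvider(List<String> rms) { this.rms = rms; }
        public String getProxy() { return rms.get(current); }
        public void performFailover() { current = (current + 1) % rms.size(); }
    }

    public static void main(String[] args) {
        ProxyProvider p = new ConfiguredFailoverProvider(List.of("rm1:8032", "rm2:8032"));
        System.out.println(p.getProxy());
        p.performFailover();
        System.out.println(p.getProxy());
    }
}
```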





[jira] [Commented] (YARN-1318) Promote AdminService to an Always-On service and merge in RMHAProtocolService

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835211#comment-13835211
 ] 

Hadoop QA commented on YARN-1318:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616336/yarn-1318-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2563//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2563//console

This message is automatically generated.

> Promote AdminService to an Always-On service and merge in RMHAProtocolService
> -
>
> Key: YARN-1318
> URL: https://issues.apache.org/jira/browse/YARN-1318
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.3.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
>  Labels: ha
> Attachments: yarn-1318-0.patch, yarn-1318-1.patch, yarn-1318-2.patch, 
> yarn-1318-2.patch, yarn-1318-3.patch, yarn-1318-4.patch, yarn-1318-4.patch, 
> yarn-1318-5.patch, yarn-1318-6.patch
>
>
> Per discussion in YARN-1068, we want AdminService to handle HA-admin 
> operations in addition to the regular non-HA admin operations. To facilitate 
> this, we need to make AdminService an Always-On service.





[jira] [Created] (YARN-1460) Add YARN specific ipc-client configs and revisit retry mechanisms

2013-11-28 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-1460:
--

 Summary: Add YARN specific ipc-client configs and revisit retry 
mechanisms
 Key: YARN-1460
 URL: https://issues.apache.org/jira/browse/YARN-1460
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


As per discussions on HADOOP-10127, it would be nice to have YARN-specific 
configs for {{ipc.client.connect.max.retries}} and 
{{ipc.client.connect.retry.interval}}, and subsequently revisit the actual 
retry mechanisms in RMProxy, ClientRMProxy and ServerRMProxy so that they work 
for HA and non-HA configurations.





[jira] [Updated] (YARN-1460) Add YARN specific ipc-client configs and revisit retry mechanisms

2013-11-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1460:
---

Description: As per discussions on HADOOP-10127, it would be nice to YARN 
specific configs for {{ipc.client.connect.max.retries}} and 
{{ipc.client.connect.retry.interval}}, and subsequently revisit the actual 
retry mechanisms in RMProxy, ClientRMProxy and ServerRMProxy that works for HA 
and non-HA configurations.  (was: As per discussions on HADOOP-10127, it would 
be nice to YARN specific configs for {[ipc.client.connect.max.retries}} and 
{{ipc.client.connect.retry.interval}}, and subsequently revisit the actual 
retry mechanisms in RMProxy, ClientRMProxy and ServerRMProxy that works for HA 
and non-HA configurations.)

> Add YARN specific ipc-client configs and revisit retry mechanisms
> -
>
> Key: YARN-1460
> URL: https://issues.apache.org/jira/browse/YARN-1460
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> As per discussions on HADOOP-10127, it would be nice to have YARN-specific 
> configs for {{ipc.client.connect.max.retries}} and 
> {{ipc.client.connect.retry.interval}}, and subsequently revisit the actual 
> retry mechanisms in RMProxy, ClientRMProxy and ServerRMProxy so that they 
> work for HA and non-HA configurations.
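If such overrides were added, they would presumably be set in yarn-site.xml alongside the generic IPC keys. The YARN-specific key names below are hypothetical placeholders illustrating the proposal, not committed configuration names:

```xml
<!-- Hypothetical yarn-site.xml fragment: these YARN-specific key names are
     placeholders for per-YARN overrides of ipc.client.connect.max.retries
     and ipc.client.connect.retry.interval, not real configuration keys. -->
<property>
  <name>yarn.resourcemanager.client.connect.max.retries</name>
  <value>10</value>
</property>
<property>
  <name>yarn.resourcemanager.client.connect.retry.interval.ms</name>
  <value>1000</value>
</property>
```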





[jira] [Commented] (YARN-1458) hadoop2.2.0 fairscheduler ResourceManager Event Processor thread blocked

2013-11-28 Thread qingwu.fu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835202#comment-13835202
 ] 

qingwu.fu commented on YARN-1458:
-

hi all,
 Here are some other symptoms:
 1.   If someone submits a job, the ResourceManager accepts it, but the 
job doesn't run. In the meantime, the ResourceManager prints a lot of logs like 
"2013-11-27 14:27:02,258 ERROR 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
Request for appInfo of unknown attempt appattempt_1384743376038_1121_01", 
and the FairScheduler doesn't print heartbeat logs to 
${HADOOP_HOME}/logs/fairscheduler/hadoop-{user}-fairscheduler.log
 2.   The FairScheduler UI can't be opened and responds with a 500 error.

And here's the ResourceManager log when this error appears:
 Normal logs:
2013-11-27 14:25:36,515 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with 
id 1120 submitted by user root
2013-11-27 14:25:36,515 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing 
application with id application_1384743376038_1120
2013-11-27 14:25:36,515 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root 
IP=192.168.24.101   OPERATION=Submit Application Request
TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1384743376038_1120
2013-11-27 14:25:36,515 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1384743376038_1120 State change from NEW to NEW_SAVING
2013-11-27 14:25:36,515 INFO 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing 
info for app: application_1384743376038_1120
2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1384743376038_1120 State change from NEW_SAVING to SUBMITTED
2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
Registering app attempt : appattempt_1384743376038_1120_01
2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
appattempt_1384743376038_1120_01 State change from NEW to SUBMITTED
2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
Application Submission: appattempt_1384743376038_1120_01, user: root, 
currently active: 2
2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: 
appattempt_1384743376038_1120_01 State change from SUBMITTED to SCHEDULED
2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1384743376038_1120 State change from SUBMITTED to ACCEPTED
2013-11-27 14:25:36,816 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable: 
Node offered to app: application_1384743376038_1120 reserved: false
 
 
Abnormal logs: these logs don't contain a line like:
 2013-11-27 14:25:36,516 INFO 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
Application Submission: appattempt_1384743376038_1120_01, user: root, 
currently active: 2
 
Here are the abnormal logs:
2013-11-27 14:27:01,391 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new 
applicationId: 1122
2013-11-27 14:27:01,391 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new 
applicationId: 1121
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with 
id 1121 submitted by user yangping.wu
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with 
id 1122 submitted by user yangping.wu
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yangping.wu   
   IP=192.168.24.101 OPERATION=Submit Application Request
TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1384743376038_1121
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing 
application with id application_1384743376038_1122
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yangping.wu 
IP=192.168.24.101   OPERATION=Submit Application Request
TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1384743376038_1122
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: 
application_1384743376038_1122 State change from NEW to NEW_SAVING
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing 
application with id application_1384743376038_1121
2013-11-27 14:27:02,252 INFO 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing 
info for app: application_1384743376038_1122
2013-11-27 14:27:02,252 INFO

[jira] [Updated] (YARN-1318) Promote AdminService to an Always-On service and merge in RMHAProtocolService

2013-11-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1318:
---

Attachment: yarn-1318-6.patch

Updated patch to address [~vinodkv]'s latest set of comments. 

Created YARN-1459 to handle supergroups and usergroups during failover.






[jira] [Updated] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues

2013-11-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1241:
---

Hadoop Flags: Reviewed

> In Fair Scheduler maxRunningApps does not work for non-leaf queues
> --
>
> Key: YARN-1241
> URL: https://issues.apache.org/jira/browse/YARN-1241
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Karthik Kambatla
> Attachments: YARN-1241-1.patch, YARN-1241-10.patch, 
> YARN-1241-11.patch, YARN-1241-2.patch, YARN-1241-3.patch, YARN-1241-4.patch, 
> YARN-1241-5.patch, YARN-1241-6.patch, YARN-1241-7.patch, YARN-1241-8.patch, 
> YARN-1241-9.patch, YARN-1241.patch
>
>
> Setting the maxRunningApps property on a parent queue should ensure that the 
> sum of running apps in all its subqueues can't exceed it





[jira] [Created] (YARN-1459) Handle supergroups, usergroups and ACLs across RMs during failover

2013-11-28 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-1459:
--

 Summary: Handle supergroups, usergroups and ACLs across RMs during 
failover
 Key: YARN-1459
 URL: https://issues.apache.org/jira/browse/YARN-1459
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.2.0
Reporter: Karthik Kambatla


The supergroups, usergroups and ACL configurations are per RM and might have 
been changed while the RM is running. After failing over, the new Active RM 
should have the latest configuration from the previously Active RM.





[jira] [Commented] (YARN-1454) TestRMRestart.testRMDelegationTokenRestoredOnRMRestart is failing intermittently

2013-11-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835192#comment-13835192
 ] 

Karthik Kambatla commented on YARN-1454:


I believe the fix is to add a sleep in the test: 

{code}
  Long renewDateBeforeRenew = allTokensRM2.get(dtId1);
  try {
+   // Sleep for one millisecond to make sure renewDateAfterRenew is greater
+   Thread.sleep(1);
    // renew recovered token
    rm2.getRMDTSecretManager().renewToken(token1, "renewer1");
  } catch (Exception e) {
{code}
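The flakiness comes from millisecond clock granularity: two consecutive reads of the wall clock can return the same value, so a renewal that completes within the same millisecond does not produce a strictly greater renew date. A minimal standalone illustration of the underlying effect (not the test itself):

```java
public class Main {
    public static void main(String[] args) throws Exception {
        // Two consecutive millisecond timestamps may be identical, which is
        // why a "renew date increased" assertion is flaky without a sleep.
        long before = System.currentTimeMillis();
        long after = System.currentTimeMillis();
        System.out.println("can be equal: " + (before == after));

        before = System.currentTimeMillis();
        Thread.sleep(2); // a short sleep guarantees the clock advances
        after = System.currentTimeMillis();
        System.out.println("after sleep, strictly greater: " + (after > before));
    }
}
```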

> TestRMRestart.testRMDelegationTokenRestoredOnRMRestart is failing 
> intermittently 
> -
>
> Key: YARN-1454
> URL: https://issues.apache.org/jira/browse/YARN-1454
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>






[jira] [Commented] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues

2013-11-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835182#comment-13835182
 ] 

Karthik Kambatla commented on YARN-1241:


access *yet*






[jira] [Commented] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues

2013-11-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835181#comment-13835181
 ] 

Karthik Kambatla commented on YARN-1241:


The TestRMRestart failure is unrelated - I believe my patch on YARN-1318 fixes it. 

The latest patch here looks good to me. +1. [~sandyr] - I haven't gotten my svn 
access yet; do you mind taking care of the commit itself?






[jira] [Assigned] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues

2013-11-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned YARN-1241:
--

Assignee: Karthik Kambatla  (was: Sandy Ryza)






[jira] [Created] (YARN-1458) hadoop2.2.0 fairscheduler ResourceManager Event Processor thread blocked

2013-11-28 Thread qingwu.fu (JIRA)
qingwu.fu created YARN-1458:
---

 Summary: hadoop2.2.0 fairscheduler ResourceManager Event Processor 
thread blocked
 Key: YARN-1458
 URL: https://issues.apache.org/jira/browse/YARN-1458
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.2.0
 Environment: Centos 2.6.18-238.19.1.el5 X86_64
hadoop2.2.0
Reporter: qingwu.fu


The ResourceManager$SchedulerEventDispatcher$EventProcessor thread blocks when 
clients submit lots of jobs; it is not easy to reproduce. We ran the test 
cluster for days to reproduce it. The output of the jstack command on the 
ResourceManager pid:
 "ResourceManager Event Processor" prio=10 tid=0x2aaab0c5f000 nid=0x5dd3 
waiting for monitor entry [0x43aa9000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeApplication(FairScheduler.java:671)
- waiting to lock <0x00070026b6e0> (a 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1023)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:440)
at java.lang.Thread.run(Thread.java:744)
……
"FairSchedulerUpdateThread" daemon prio=10 tid=0x2aaab0a2c800 nid=0x5dc8 
runnable [0x433a2000]
   java.lang.Thread.State: RUNNABLE
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getAppWeight(FairScheduler.java:545)
- locked <0x00070026b6e0> (a 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.getWeights(AppSchedulable.java:129)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShare(ComputeFairShares.java:143)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:131)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeShares(ComputeFairShares.java:102)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeShares(FairSharePolicy.java:119)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.recomputeShares(FSLeafQueue.java:100)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeShares(FSParentQueue.java:62)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.update(FairScheduler.java:282)
- locked <0x00070026b6e0> (a 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$UpdateThread.run(FairScheduler.java:255)
at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1447) Common PB types define for container resource change

2013-11-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835139#comment-13835139
 ] 

Wangda Tan commented on YARN-1447:
--

Thanks for your review.
I agree with your idea: the additional "Context" doesn't bring any more 
information, and flattening such objects makes it easier for us to add more 
information later.
And I think for the AM->NM increase we don't need ContainerResourceIncrease; a 
ContainerToken will be enough. As the community did in StartContainerRequest, 
the ContainerId and Resource are already included in the ContainerToken. Agree?
I'll fix warnings like findbugs and license issues and upload a new patch 
later. And please let me know if you have any other ideas :)


> Common PB types define for container resource change
> 
>
> Key: YARN-1447
> URL: https://issues.apache.org/jira/browse/YARN-1447
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.2.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: yarn-1447.1.patch
>
>
> As described in YARN-1197, we need to add some common PB types for container 
> resource change, like ResourceChangeContext, etc. These types will be used by 
> both the RM and NM protocols.





[jira] [Reopened] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah reopened YARN-1457:
---


> YARN single node install issues on mvn clean install assembly:assembly on 
> mapreduce project
> ---
>
> Key: YARN-1457
> URL: https://issues.apache.org/jira/browse/YARN-1457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: site
>Affects Versions: 2.0.5-alpha
>Reporter: Rekha Joshi
>Priority: Minor
> Attachments: yarn-mvn-mapreduce.txt
>
>
> YARN single node install - 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
> On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
> clean install -DskipTests succeeds after a YARN fix on pom.xml (using protobuf 
> 2.5.0).
> But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
> $ mvn clean install assembly:assembly -Pnative
> errors as in the attached yarn-mvn-mapreduce.txt
> On $mvn clean install assembly:assembly  -DskipTests
> Reactor Summary:
> [INFO] 
> [INFO] hadoop-mapreduce-client ... SUCCESS [2.410s]
> [INFO] hadoop-mapreduce-client-core .. SUCCESS [13.781s]
> [INFO] hadoop-mapreduce-client-common  SUCCESS [8.486s]
> [INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [0.774s]
> [INFO] hadoop-mapreduce-client-app ... SUCCESS [4.409s]
> [INFO] hadoop-mapreduce-client-hs  SUCCESS [1.618s]
> [INFO] hadoop-mapreduce-client-jobclient . SUCCESS [4.470s]
> [INFO] hadoop-mapreduce-client-hs-plugins  SUCCESS [0.561s]
> [INFO] Apache Hadoop MapReduce Examples .. SUCCESS [1.620s]
> [INFO] hadoop-mapreduce .. FAILURE [10.107s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 49.606s
> [INFO] Finished at: Thu Nov 28 16:20:52 GMT+05:30 2013
> [INFO] Final Memory: 34M/118M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on 
> project hadoop-mapreduce: Error reading assemblies: No assembly descriptors 
> found. -> [Help 1]
> $mvn package -Pdist -DskipTests=true -Dtar
> works
> The documentation needs to be updated for possible issues and resolutions.





[jira] [Resolved] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah resolved YARN-1457.
---

Resolution: Invalid

Native builds are only supported on Linux. For Mac, -Pnative is not supported.
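The resolution above can be sketched as a small guard in a build script. The 
OS check is an assumption added for illustration; the mvn invocation is the 
one the reporter found to work, and the script only prints the command rather 
than running it:

```shell
# Hedged sketch: -Pnative is Linux-only, so enable it conditionally.
# On Mac OS X, the plain dist build is the working path.
if [ "$(uname -s)" = "Linux" ]; then
  NATIVE="-Pnative"
else
  NATIVE=""   # native profile unsupported outside Linux
fi
BUILD_CMD="mvn package -Pdist -DskipTests=true -Dtar $NATIVE"
echo "$BUILD_CMD"
```

On Linux this prints the command with `-Pnative` appended; elsewhere it prints 
the bare dist build.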

> YARN single node install issues on mvn clean install assembly:assembly on 
> mapreduce project
> ---
>
> Key: YARN-1457
> URL: https://issues.apache.org/jira/browse/YARN-1457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: site
>Affects Versions: 2.0.5-alpha
>Reporter: Rekha Joshi
>Priority: Minor
> Attachments: yarn-mvn-mapreduce.txt
>
>
> YARN single node install - 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
> On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
> clean install -DskipTests succeeds after a YARN fix on pom.xml (using protobuf 
> 2.5.0).
> But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
> $ mvn clean install assembly:assembly -Pnative
> errors as in the attached yarn-mvn-mapreduce.txt
> On $mvn clean install assembly:assembly  -DskipTests
> Reactor Summary:
> [INFO] 
> [INFO] hadoop-mapreduce-client ... SUCCESS [2.410s]
> [INFO] hadoop-mapreduce-client-core .. SUCCESS [13.781s]
> [INFO] hadoop-mapreduce-client-common  SUCCESS [8.486s]
> [INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [0.774s]
> [INFO] hadoop-mapreduce-client-app ... SUCCESS [4.409s]
> [INFO] hadoop-mapreduce-client-hs  SUCCESS [1.618s]
> [INFO] hadoop-mapreduce-client-jobclient . SUCCESS [4.470s]
> [INFO] hadoop-mapreduce-client-hs-plugins  SUCCESS [0.561s]
> [INFO] Apache Hadoop MapReduce Examples .. SUCCESS [1.620s]
> [INFO] hadoop-mapreduce .. FAILURE [10.107s]
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 49.606s
> [INFO] Finished at: Thu Nov 28 16:20:52 GMT+05:30 2013
> [INFO] Final Memory: 34M/118M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on 
> project hadoop-mapreduce: Error reading assemblies: No assembly descriptors 
> found. -> [Help 1]
> $mvn package -Pdist -DskipTests=true -Dtar
> works
> The documentation needs to be updated for possible issues and resolutions.





[jira] [Commented] (YARN-1447) Common PB types define for container resource change

2013-11-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835037#comment-13835037
 ] 

Sandy Ryza commented on YARN-1447:
--

Thanks Wangda.  Some comments:
The new files are missing Apache license headers.

The word "Context" in ResourceChangeContext and ResourceIncreaseContext isn't 
very informative.  Can we instead call them ContainerResourceChange and 
ContainerResourceIncrease?

(Container?)ResourceIncrease(Context?) sounds like a kind of 
(Container?)ResourceChange(Context?), when in fact the former wraps the latter 
with additional information.  This seems a little confusing to me.  Thinking 
aloud, the design uses ResourceChange for four things: sending an increase 
request from the AM to the RM, sending an increase allocation from the RM to 
the AM, sending an increase from the AM to the NM, and sending a decrease from 
the AM to the NM.  I'm also worried that we might want to later add additional 
information to an increase request that wouldn't make sense in the other 
contexts. 

I'm wondering whether we should get rid of ResourceChange entirely and add in a 
ResourceIncreaseRequest proto?  We would then have:
* AM->RM increase request: ContainerResourceIncreaseRequest, which includes a 
ContainerId and a Resource.
* RM->AM increase allocation: ContainerResourceIncrease, which includes a 
ContainerId, a Resource, and a Token
* AM->NM increase: ContainerResourceIncrease, as above.
* AM->NM decrease: ContainerResourceDecrease, which includes a ContainerId and 
a Resource.
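For concreteness, the four records above might look roughly like the following 
PB sketch (message and field names are illustrative, not taken from any patch; 
ContainerIdProto, ResourceProto and TokenProto stand in for the existing 
YARN/common types):

{code}
// AM->RM increase request: just the container and the target capability.
message ContainerResourceIncreaseRequestProto {
  optional ContainerIdProto container_id = 1;
  optional ResourceProto capability = 2;
}

// RM->AM allocation and AM->NM increase: adds the token the NM would verify.
message ContainerResourceIncreaseProto {
  optional ContainerIdProto container_id = 1;
  optional ResourceProto capability = 2;
  optional hadoop.common.TokenProto container_token = 3;
}

// AM->NM decrease: no token needed.
message ContainerResourceDecreaseProto {
  optional ContainerIdProto container_id = 1;
  optional ResourceProto capability = 2;
}
{code}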

Thoughts?  Sorry to change things up at this point, but as we won't be able to 
modify this in the future, I think it's worth putting in the time to get it 
right on the first try.

{code}
+
Assert.assertTrue(contextRecover.getExistingContainerId().equals(containerId));
{code}
Can use assertEquals here.  This applies to a few other places as well.
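As a minimal illustration of the suggestion (names are invented for the 
sketch, not taken from the patch), assertEquals reports expected vs. actual 
values on failure, while assertTrue only reports that a condition was false:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: why assertEquals is preferred over assertTrue for equality checks.
public class AssertStyle {

    // Stand-in for a value under test, e.g. amClient.ask.size().
    static int askSize(List<String> ask) {
        return ask.size();
    }

    public static void main(String[] args) {
        List<String> ask = new ArrayList<>();
        // Opaque on failure: only says the condition was false.
        //   Assert.assertTrue(askSize(ask) == 0);
        // Clear on failure: prints "expected:<0> but was:<N>".
        //   Assert.assertEquals(0, askSize(ask));
        System.out.println(askSize(ask));
    }
}
```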

> Common PB types define for container resource change
> 
>
> Key: YARN-1447
> URL: https://issues.apache.org/jira/browse/YARN-1447
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.2.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: yarn-1447.1.patch
>
>
> As described in YARN-1197, we need to add some common PB types for container 
> resource change, like ResourceChangeContext, etc. These types will be used by 
> both the RM and NM protocols.





[jira] [Commented] (YARN-1447) Common PB types define for container resource change

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835014#comment-13835014
 ] 

Hadoop QA commented on YARN-1447:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12615785/yarn-1447.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2562//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2562//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2562//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-api.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/2562//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2562//console

This message is automatically generated.

> Common PB types define for container resource change
> 
>
> Key: YARN-1447
> URL: https://issues.apache.org/jira/browse/YARN-1447
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.2.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: yarn-1447.1.patch
>
>
> As described in YARN-1197, we need to add some common PB types for container 
> resource change, like ResourceChangeContext, etc. These types will be used by 
> both the RM and NM protocols.





[jira] [Commented] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues

2013-11-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13835008#comment-13835008
 ] 

Sandy Ryza commented on YARN-1241:
--

Test failure is unrelated

> In Fair Scheduler maxRunningApps does not work for non-leaf queues
> --
>
> Key: YARN-1241
> URL: https://issues.apache.org/jira/browse/YARN-1241
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
> Attachments: YARN-1241-1.patch, YARN-1241-10.patch, 
> YARN-1241-11.patch, YARN-1241-2.patch, YARN-1241-3.patch, YARN-1241-4.patch, 
> YARN-1241-5.patch, YARN-1241-6.patch, YARN-1241-7.patch, YARN-1241-8.patch, 
> YARN-1241-9.patch, YARN-1241.patch
>
>
> Setting the maxRunningApps property on a parent queue should ensure that the 
> sum of running apps across all its subqueues cannot exceed it.





[jira] [Commented] (YARN-546) mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0

2013-11-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834999#comment-13834999
 ] 

Sandy Ryza commented on YARN-546:
-

*off and on at runtime

> mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0
> -
>
> Key: YARN-546
> URL: https://issues.apache.org/jira/browse/YARN-546
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.0.3-alpha
>Reporter: Lohit Vijayarenu
>Assignee: Sandy Ryza
> Attachments: YARN-546.1.patch
>
>
> Hadoop 1.0 supported an option to turn FairScheduler event logging on/off 
> using mapred.fairscheduler.eventlog.enabled. In Hadoop 2.0, it looks like 
> this option has been removed (or not ported?), which causes event logging to 
> be enabled by default with no way to turn it off.





[jira] [Reopened] (YARN-546) mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0

2013-11-28 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza reopened YARN-546:
-

  Assignee: Sandy Ryza

I've spoken to a few people who still see the event log as valuable.  
Independent of what we choose to do in YARN-1383, we should make disabling it 
an option.  Will post a patch here soon.

I'm going to put the option into the allocations file instead of as a yarn 
config so that it can be turned off and on.

> mapred.fairscheduler.eventlog.enabled removed from Hadoop 2.0
> -
>
> Key: YARN-546
> URL: https://issues.apache.org/jira/browse/YARN-546
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.0.3-alpha
>Reporter: Lohit Vijayarenu
>Assignee: Sandy Ryza
> Attachments: YARN-546.1.patch
>
>
> Hadoop 1.0 supported an option to turn FairScheduler event logging on/off 
> using mapred.fairscheduler.eventlog.enabled. In Hadoop 2.0, it looks like 
> this option has been removed (or not ported?), which causes event logging to 
> be enabled by default with no way to turn it off.





[jira] [Commented] (YARN-1332) In TestAMRMClient, replace assertTrue with assertEquals where possible

2013-11-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834992#comment-13834992
 ] 

Sandy Ryza commented on YARN-1332:
--

+1

> In TestAMRMClient, replace assertTrue with assertEquals where possible
> --
>
> Key: YARN-1332
> URL: https://issues.apache.org/jira/browse/YARN-1332
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Sandy Ryza
>Assignee: Sebastian Wong
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-1332-3.patch
>
>
> TestAMRMClient uses a lot of "assertTrue(amClient.ask.size() == 0)" where 
> "assertEquals(0, amClient.ask.size())" would make it easier to see why it's 
> failing at a glance.





[jira] [Commented] (YARN-1307) Rethink znode structure for RM HA

2013-11-28 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834957#comment-13834957
 ] 

Tsuyoshi OZAWA commented on YARN-1307:
--

[~jianhe] and [~bikassaha], do you have additional comments?

> Rethink znode structure for RM HA
> -
>
> Key: YARN-1307
> URL: https://issues.apache.org/jira/browse/YARN-1307
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Attachments: YARN-1307.1.patch, YARN-1307.2.patch, YARN-1307.3.patch, 
> YARN-1307.4-2.patch, YARN-1307.4-3.patch, YARN-1307.4.patch, 
> YARN-1307.5.patch, YARN-1307.6.patch, YARN-1307.7.patch, YARN-1307.8.patch
>
>
> Rethinking the znode structure for RM HA has been proposed in several JIRAs 
> (YARN-659, YARN-1222). The motivation of this JIRA is quoted from Bikas' 
> comment in YARN-1222:
> {quote}
> We should move to creating a node hierarchy for apps such that all znodes for 
> an app are stored under an app znode instead of the app root znode. This will 
> help in removeApplication and also in scaling better on ZK. The earlier code 
> was written this way to ensure create/delete happens under a root znode for 
> fencing. But given that we have moved to multi-operations globally, this isn't 
> required anymore.
> {quote}





[jira] [Commented] (YARN-1239) Save version information in the state store

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834824#comment-13834824
 ] 

Hudson commented on YARN-1239:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1596 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1596/])
YARN-1239. Modified ResourceManager state-store implementations to start 
storing version numbers. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546229)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateVersionIncompatibleException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/RMStateVersion.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/RMStateVersionPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java


> Save version information in the state store
> ---
>
> Key: YARN-1239
> URL: https://issues.apache.org/jira/browse/YARN-1239
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Jian He
> Fix For: 2.3.0
>
> Attachments: YARN-1239.1.patch, YARN-1239.2.patch, YARN-1239.3.patch, 
> YARN-1239.4.patch, YARN-1239.4.patch, YARN-1239.5.patch, YARN-1239.6.patch, 
> YARN-1239.7.patch, YARN-1239.8.patch, YARN-1239.8.patch, YARN-1239.9.patch, 
> YARN-1239.patch
>
>
> When creating the root dir for the first time, we should write version 1. If 
> the root dir exists, then we should check that the version in the state store 
> matches the version from the config.





[jira] [Commented] (YARN-1239) Save version information in the state store

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834812#comment-13834812
 ] 

Hudson commented on YARN-1239:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1622 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1622/])
YARN-1239. Modified ResourceManager state-store implementations to start 
storing version numbers. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546229)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateVersionIncompatibleException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/RMStateVersion.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/RMStateVersionPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java


> Save version information in the state store
> ---
>
> Key: YARN-1239
> URL: https://issues.apache.org/jira/browse/YARN-1239
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Jian He
> Fix For: 2.3.0
>
> Attachments: YARN-1239.1.patch, YARN-1239.2.patch, YARN-1239.3.patch, 
> YARN-1239.4.patch, YARN-1239.4.patch, YARN-1239.5.patch, YARN-1239.6.patch, 
> YARN-1239.7.patch, YARN-1239.8.patch, YARN-1239.8.patch, YARN-1239.9.patch, 
> YARN-1239.patch
>
>
> When creating the root dir for the first time, we should write version 1. If 
> the root dir exists, then we should check that the version in the state store 
> matches the version from the config.





[jira] [Commented] (YARN-1239) Save version information in the state store

2013-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834722#comment-13834722
 ] 

Hudson commented on YARN-1239:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #405 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/405/])
YARN-1239. Modified ResourceManager state-store implementations to start 
storing version numbers. Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1546229)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/MemoryRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/NullRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateVersionIncompatibleException.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/RMStateVersion.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/RMStateVersionPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java


> Save version information in the state store
> ---
>
> Key: YARN-1239
> URL: https://issues.apache.org/jira/browse/YARN-1239
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>Assignee: Jian He
> Fix For: 2.3.0
>
> Attachments: YARN-1239.1.patch, YARN-1239.2.patch, YARN-1239.3.patch, 
> YARN-1239.4.patch, YARN-1239.4.patch, YARN-1239.5.patch, YARN-1239.6.patch, 
> YARN-1239.7.patch, YARN-1239.8.patch, YARN-1239.8.patch, YARN-1239.9.patch, 
> YARN-1239.patch
>
>
> When creating the root dir for the first time, we should write version 1. If 
> the root dir exists, then we should check that the version in the state store 
> matches the version from the config.





[jira] [Updated] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated YARN-1457:
--

Description: 
YARN single node install - 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
clean install -DskipTests succeeds after a YARN fix on pom.xml (using protobuf 
2.5.0).
But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
$ mvn clean install assembly:assembly -Pnative

errors as in the attached yarn-mvn-mapreduce.txt

On $mvn clean install assembly:assembly  -DskipTests
Reactor Summary:
[INFO] 
[INFO] hadoop-mapreduce-client ... SUCCESS [2.410s]
[INFO] hadoop-mapreduce-client-core .. SUCCESS [13.781s]
[INFO] hadoop-mapreduce-client-common  SUCCESS [8.486s]
[INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [0.774s]
[INFO] hadoop-mapreduce-client-app ... SUCCESS [4.409s]
[INFO] hadoop-mapreduce-client-hs  SUCCESS [1.618s]
[INFO] hadoop-mapreduce-client-jobclient . SUCCESS [4.470s]
[INFO] hadoop-mapreduce-client-hs-plugins  SUCCESS [0.561s]
[INFO] Apache Hadoop MapReduce Examples .. SUCCESS [1.620s]
[INFO] hadoop-mapreduce .. FAILURE [10.107s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 49.606s
[INFO] Finished at: Thu Nov 28 16:20:52 GMT+05:30 2013
[INFO] Final Memory: 34M/118M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on 
project hadoop-mapreduce: Error reading assemblies: No assembly descriptors 
found. -> [Help 1]

$mvn package -Pdist -DskipTests=true -Dtar
works

The documentation needs to be updated for possible issues and resolutions.


  was:
YARN single node install - 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
protobuf).
But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
$ mvn clean install assembly:assembly -Pnative

errors as in the attached yarn-mvn-mapreduce.txt

On $ mvn clean install assembly:assembly -DskipTests
Reactor Summary:
[INFO] 
[INFO] hadoop-mapreduce-client ... SUCCESS [2.410s]
[INFO] hadoop-mapreduce-client-core .. SUCCESS [13.781s]
[INFO] hadoop-mapreduce-client-common  SUCCESS [8.486s]
[INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [0.774s]
[INFO] hadoop-mapreduce-client-app ... SUCCESS [4.409s]
[INFO] hadoop-mapreduce-client-hs  SUCCESS [1.618s]
[INFO] hadoop-mapreduce-client-jobclient . SUCCESS [4.470s]
[INFO] hadoop-mapreduce-client-hs-plugins  SUCCESS [0.561s]
[INFO] Apache Hadoop MapReduce Examples .. SUCCESS [1.620s]
[INFO] hadoop-mapreduce .. FAILURE [10.107s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 49.606s
[INFO] Finished at: Thu Nov 28 16:20:52 GMT+05:30 2013
[INFO] Final Memory: 34M/118M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on 
project hadoop-mapreduce: Error reading assemblies: No assembly descriptors 
found. -> [Help 1]



> YARN single node install issues on mvn clean install assembly:assembly on 
> mapreduce project
> ---
>
> Key: YARN-1457
> URL: https://issues.apache.org/jira/browse/YARN-1457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: site
>Affects Versions: 2.0.5-alpha
>Reporter: Rekha Joshi
>Priority: Minor
> Attachments: yarn-mvn-mapreduce.txt
>
>
> YARN single node install - 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
> On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
> clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
> protobuf)
> But on hadoop-mapreduce-project, mvn install fails for tests with the errors below

[jira] [Updated] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated YARN-1457:
--

Description: 
YARN single node install - 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
protobuf).
But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
$ mvn clean install assembly:assembly -Pnative

errors as in the attached yarn-mvn-mapreduce.txt

On $ mvn clean install assembly:assembly -DskipTests
Reactor Summary:
[INFO] 
[INFO] hadoop-mapreduce-client ... SUCCESS [2.410s]
[INFO] hadoop-mapreduce-client-core .. SUCCESS [13.781s]
[INFO] hadoop-mapreduce-client-common  SUCCESS [8.486s]
[INFO] hadoop-mapreduce-client-shuffle ... SUCCESS [0.774s]
[INFO] hadoop-mapreduce-client-app ... SUCCESS [4.409s]
[INFO] hadoop-mapreduce-client-hs  SUCCESS [1.618s]
[INFO] hadoop-mapreduce-client-jobclient . SUCCESS [4.470s]
[INFO] hadoop-mapreduce-client-hs-plugins  SUCCESS [0.561s]
[INFO] Apache Hadoop MapReduce Examples .. SUCCESS [1.620s]
[INFO] hadoop-mapreduce .. FAILURE [10.107s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 49.606s
[INFO] Finished at: Thu Nov 28 16:20:52 GMT+05:30 2013
[INFO] Final Memory: 34M/118M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.3:assembly (default-cli) on 
project hadoop-mapreduce: Error reading assemblies: No assembly descriptors 
found. -> [Help 1]


  was:
YARN single node install - 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
protobuf).
But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
$ mvn clean install assembly:assembly -Pnative


---
 T E S T S
---
Running org.apache.hadoop.mapred.TestTaskAttemptListenerImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.294 sec
Running org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler
Tests run: 5, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 5.386 sec <<< 
FAILURE!
testFirstFlushOnCompletionEvent(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
  Time elapsed: 2903 sec  <<< ERROR!
org.mockito.exceptions.misusing.NullInsteadOfMockException: 
Argument passed to verify() should be a mock but is null!
Examples of correct verifications:
verify(mock).someMethod();
verify(mock, times(10)).someMethod();
verify(mock, atLeastOnce()).someMethod();
Also, if you use @Mock annotation don't miss initMocks()
at 
org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testFirstFlushOnCompletionEvent(TestJobHistoryEventHandler.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefi

[jira] [Updated] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated YARN-1457:
--

Description: 
YARN single node install - 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
protobuf).
But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
$ mvn clean install assembly:assembly -Pnative


---
 T E S T S
---
Running org.apache.hadoop.mapred.TestTaskAttemptListenerImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.294 sec
Running org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler
Tests run: 5, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 5.386 sec <<< 
FAILURE!
testFirstFlushOnCompletionEvent(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
  Time elapsed: 2903 sec  <<< ERROR!
org.mockito.exceptions.misusing.NullInsteadOfMockException: 
Argument passed to verify() should be a mock but is null!
Examples of correct verifications:
verify(mock).someMethod();
verify(mock, times(10)).someMethod();
verify(mock, atLeastOnce()).someMethod();
Also, if you use @Mock annotation don't miss initMocks()
at 
org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testFirstFlushOnCompletionEvent(TestJobHistoryEventHandler.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

<...>
Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestJobImpl
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.112 sec <<< 
FAILURE!
org.apache.hadoop.mapreduce.v2.app.job.impl.TestJobImpl  Time elapsed: 111 sec  
<<< ERROR!
java.lang.Error: Unresolved compilation problem: 

at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TestJobImpl.setup(TestJobImpl.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)

[jira] [Updated] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rekha Joshi updated YARN-1457:
--

Attachment: yarn-mvn-mapreduce.txt

> YARN single node install issues on mvn clean install assembly:assembly on 
> mapreduce project
> ---
>
> Key: YARN-1457
> URL: https://issues.apache.org/jira/browse/YARN-1457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: site
>Affects Versions: 2.0.5-alpha
>Reporter: Rekha Joshi
>Priority: Minor
> Attachments: yarn-mvn-mapreduce.txt
>
>
> YARN single node install - 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
> On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
> clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
> protobuf)
> But on hadoop-mapreduce-project, mvn install fails for tests with the errors below
> $ mvn clean install assembly:assembly -Pnative
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.mapred.TestTaskAttemptListenerImpl
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.294 sec
> Running org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler
> Tests run: 5, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 5.386 sec <<< 
> FAILURE!
> testFirstFlushOnCompletionEvent(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
>   Time elapsed: 2903 sec  <<< ERROR!
> org.mockito.exceptions.misusing.NullInsteadOfMockException: 
> Argument passed to verify() should be a mock but is null!
> Examples of correct verifications:
> verify(mock).someMethod();
> verify(mock, times(10)).someMethod();
> verify(mock, atLeastOnce()).someMethod();
> Also, if you use @Mock annotation don't miss initMocks()
>   at 
> org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testFirstFlushOnCompletionEvent(TestJobHistoryEventHandler.java:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> <...>
> Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestJobImpl
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.112 sec <<< 
> FAILURE!
> or

[jira] [Created] (YARN-1457) YARN single node install issues on mvn clean install assembly:assembly on mapreduce project

2013-11-28 Thread Rekha Joshi (JIRA)
Rekha Joshi created YARN-1457:
-

 Summary: YARN single node install issues on mvn clean install 
assembly:assembly on mapreduce project
 Key: YARN-1457
 URL: https://issues.apache.org/jira/browse/YARN-1457
 Project: Hadoop YARN
  Issue Type: Bug
  Components: site
Affects Versions: 2.0.5-alpha
Reporter: Rekha Joshi
Priority: Minor


YARN single node install - 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

On Mac OS X 10.7.3, Java 1.6, Protobuf 2.5.0 and hadoop-2.0.5-alpha.tar, mvn 
clean install -DskipTests succeeds after a YARN fix on pom.xml (using 2.5.0 
protobuf).
But on hadoop-mapreduce-project, mvn install fails for tests with the errors below:
$ mvn clean install assembly:assembly -Pnative

---
 T E S T S
---
Running org.apache.hadoop.mapred.TestTaskAttemptListenerImpl
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.294 sec
Running org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler
Tests run: 5, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 5.386 sec <<< 
FAILURE!
testFirstFlushOnCompletionEvent(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
  Time elapsed: 2903 sec  <<< ERROR!
org.mockito.exceptions.misusing.NullInsteadOfMockException: 
Argument passed to verify() should be a mock but is null!
Examples of correct verifications:
verify(mock).someMethod();
verify(mock, times(10)).someMethod();
verify(mock, atLeastOnce()).someMethod();
Also, if you use @Mock annotation don't miss initMocks()
at 
org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testFirstFlushOnCompletionEvent(TestJobHistoryEventHandler.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testMaxUnflushedCompletionEvents(org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler)
  Time elapsed: 974 sec  <<< ERROR!
org.mockito.exceptions.misusing.NullInsteadOfMockException: 
Argument passed to verify() should be a mock but is null!
Examples of correct verifications:
verify(mock).someMethod();
verify(mock, times(10)).someMethod();
verify(mock, atLeastOnce()).someMethod();
Also, if you use @Mock annotation don't miss initMocks()
at 
org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler.testMaxUn
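
The NullInsteadOfMockException in the failures above means verify() received null rather than a mock, which usually happens when a @Mock-annotated field was never initialized. A minimal sketch of the usual fix, assuming JUnit 4 and Mockito on the classpath (the EventWriter type and test names here are hypothetical, not from the Hadoop tests):

```java
import static org.mockito.Mockito.verify;

import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class EventWriterMockTest {

  // Hypothetical collaborator standing in for the real dependency under test.
  interface EventWriter {
    void flush();
  }

  @Mock
  private EventWriter writer; // remains null unless initMocks() runs

  @Before
  public void setUp() {
    // Without this call, @Mock fields stay null and verify(writer) throws
    // NullInsteadOfMockException, matching the errors quoted above.
    MockitoAnnotations.initMocks(this);
  }

  @Test
  public void flushIsVerified() {
    writer.flush();
    verify(writer).flush();
  }
}
```

The same symptom can also appear when a helper that builds the mock silently returns null; either way, the argument to verify() must be an initialized Mockito mock.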

[jira] [Commented] (YARN-1456) IntelliJ IDEA gets dependencies wrong for hadoop-yarn-server-resourcemanager

2013-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13834695#comment-13834695
 ] 

Hadoop QA commented on YARN-1456:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616222/YARN-1456-001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStoreZKClientConnections

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/2561//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2561//console

This message is automatically generated.

> IntelliJ IDEA gets dependencies wrong for  hadoop-yarn-server-resourcemanager
> -
>
> Key: YARN-1456
> URL: https://issues.apache.org/jira/browse/YARN-1456
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.2.0
> Environment: IntelliJ IDEA 12.x & 13.x beta
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: YARN-1456-001.patch
>
>
> When IntelliJ IDEA imports the hadoop POMs into the IDE, somehow it fails to 
> pick up all the transitive dependencies of the yarn-client, and so can't 
> resolve commons logging, com.google.* classes and the like.
> While this is probably an IDEA bug, it does stop you building Hadoop from 
> inside the IDE, making debugging significantly harder.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (YARN-1456) IntelliJ IDEA gets dependencies wrong for hadoop-yarn-server-resourcemanager

2013-11-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-1456:
-

Attachment: YARN-1456-001.patch

The fix for this is trivial: explicitly add a hadoop-common dependency to the 
{{hadoop-yarn-server-resourcemanager/pom.xml}}. 

This does not add any more dependencies to the RM; it just lets IDEA pick up the 
transitive dependencies properly.

The patch does this and tweaks the namespace declaration of the 
{{hadoop-yarn-client/pom.xml}} so the IDE doesn't complain about XML schemas.
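
A sketch of the kind of POM addition described (the exact coordinates in the committed patch may differ; the org.apache.hadoop groupId and version inheritance from the parent POM are assumptions):

```xml
<!-- In hadoop-yarn-server-resourcemanager/pom.xml: make hadoop-common an
     explicit dependency so IDEA resolves the transitive classpath. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
</dependency>
```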

> IntelliJ IDEA gets dependencies wrong for  hadoop-yarn-server-resourcemanager
> -
>
> Key: YARN-1456
> URL: https://issues.apache.org/jira/browse/YARN-1456
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.2.0
> Environment: IntelliJ IDEA 12.x & 13.x beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: YARN-1456-001.patch
>
>
> When IntelliJ IDEA imports the hadoop POMs into the IDE, somehow it fails to 
> pick up all the transitive dependencies of the yarn-client, and so can't 
> resolve commons logging, com.google.* classes and the like.
> While this is probably an IDEA bug, it does stop you building Hadoop from 
> inside the IDE, making debugging significantly harder.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1456) IntelliJ IDEA gets dependencies wrong for hadoop-yarn-server-resourcemanager

2013-11-28 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-1456:


 Summary: IntelliJ IDEA gets dependencies wrong for  
hadoop-yarn-server-resourcemanager
 Key: YARN-1456
 URL: https://issues.apache.org/jira/browse/YARN-1456
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.2.0
 Environment: IntelliJ IDEA 12.x & 13.x beta
Reporter: Steve Loughran
Priority: Minor


When IntelliJ IDEA imports the hadoop POMs into the IDE, somehow it fails to 
pick up all the transitive dependencies of the yarn-client, and so can't 
resolve commons logging, com.google.* classes and the like.

While this is probably an IDEA bug, it does stop you building Hadoop from 
inside the IDE, making debugging significantly harder.



--
This message was sent by Atlassian JIRA
(v6.1#6144)