Build failed in Jenkins: Hadoop-Common-trunk #393

2012-05-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/393/changes

Changes:

[umamahesh] HDFS-3286. When the threshold value for balancer is zero, 
unexpected output is displayed. Contributed by Ashish Singhi.

[umamahesh] HDFS-3275. Skip format for non-file based directories. Contributed 
by Amith D K.

[tomwhite] HADOOP-8308. Support cross-project Jenkins builds.

[szetszwo] HDFS-3293. Add toString(), equals(..) and hashCode() to JournalInfo. 
 Contributed by Hari Mankude

[bobby] HADOOP-8335. Improve Configuration's address handling (Daryn Sharp via 
bobby)

[bobby] HADOOP-8312. testpatch.sh should provide a simpler way to see which 
warnings changed (bobby)

[szetszwo] HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for 
HADOOP-8305.  Contributed by John George

[tucu] HADOOP-8325. Add a ShutdownHookManager to be used by different 
components instead of the JVM shutdownhook (tucu)

[bobby] HADOOP-8334. HttpServer sometimes returns incorrect port (Daryn Sharp 
via bobby)

[tgraves] MAPREDUCE-4206. Sorting by Last Health-Update on the RM nodes page 
sorts does not work correctly (Jonathon Eagles via tgraves)

[bobby] MAPREDUCE-4209. junit dependency in hadoop-mapreduce-client is missing 
scope test (Radim Kolar via bobby)

[tgraves] MAPREDUCE-3883. Document yarn.nodemanager.delete.debug-delay-sec 
configuration property (Eugene Koontz via tgraves)

--
[...truncated 44902 lines...]
[DEBUG]   (f) reactorProjects = [MavenProject: 
org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-annotations/pom.xml,
 MavenProject: org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-auth/pom.xml,
 MavenProject: org.apache.hadoop:hadoop-auth-examples:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml,
 MavenProject: org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/pom.xml,
 MavenProject: org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml]
[DEBUG]   (f) useDefaultExcludes = true
[DEBUG]   (f) useDefaultManifestFile = false
[DEBUG] -- end configuration --
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce from plugin realm 
ClassRealm[plugin>org.apache.maven.plugins:maven-enforcer-plugin:1.0, parent: 
sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce' with basic 
configurator --
[DEBUG]   (s) fail = true
[DEBUG]   (s) failFast = false
[DEBUG]   (f) ignoreCache = false
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG]   (s) version = [3.0.2,)
[DEBUG]   (s) version = 1.6
[DEBUG]   (s) rules = 
[org.apache.maven.plugins.enforcer.RequireMavenVersion@10666dd, 
org.apache.maven.plugins.enforcer.RequireJavaVersion@14b5885]
[DEBUG]   (s) session = org.apache.maven.execution.MavenSession@15ad36d
[DEBUG]   (s) skip = false
[DEBUG] -- end configuration --
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireMavenVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireMavenVersion -937312197 
was found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: 
org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireJavaVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireJavaVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireJavaVersion 48569 was 
found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: 
org.apache.maven.plugins.enforcer.RequireJavaVersion
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor from plugin 
realm ClassRealm[plugin>org.apache.maven.plugins:maven-site-plugin:3.0, parent: 
sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor' with basic 
configurator --
[DEBUG]   (f) basedir = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project
[DEBUG]   (f) inputEncoding = UTF-8
[DEBUG]   (f) localRepository = id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (f) 

Build failed in Jenkins: Hadoop-Common-0.23-Build #239

2012-05-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/239/changes

Changes:

[bobby] svn merge -c 1332427. FIXES: HADOOP-8335. Improve Configuration's 
address handling (Daryn Sharp via bobby)

[szetszwo] svn merge -c 1332363 from trunk for HADOOP-8330.

[bobby] svn merge -c 1332336. FIXES: HADOOP-8334. HttpServer sometimes returns 
incorrect port (Daryn Sharp via bobby)

[tgraves] merge -r 1332234:1332235 from branch-2. FIXES: MAPREDUCE-4206

[bobby] svn merge -c 1332226. FIXES: MAPREDUCE-4209. junit dependency in 
hadoop-mapreduce-client is missing scope test (Radim Kolar via bobby)

--
[...truncated 12314 lines...]
  [javadoc] Loading source files for package org.apache.hadoop.fs.shell...
  [javadoc] Loading source files for package org.apache.hadoop.fs.viewfs...
  [javadoc] Loading source files for package org.apache.hadoop.http...
  [javadoc] Loading source files for package org.apache.hadoop.http.lib...
  [javadoc] Loading source files for package org.apache.hadoop.io...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.bzip2...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.lz4...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.snappy...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.compress.zlib...
  [javadoc] Loading source files for package org.apache.hadoop.io.file.tfile...
  [javadoc] Loading source files for package org.apache.hadoop.io.nativeio...
  [javadoc] Loading source files for package org.apache.hadoop.io.retry...
  [javadoc] Loading source files for package org.apache.hadoop.io.serializer...
  [javadoc] Loading source files for package 
org.apache.hadoop.io.serializer.avro...
  [javadoc] Loading source files for package org.apache.hadoop.ipc...
  [javadoc] Loading source files for package org.apache.hadoop.ipc.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.jmx...
  [javadoc] Loading source files for package org.apache.hadoop.log...
  [javadoc] Loading source files for package org.apache.hadoop.log.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.file...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics.ganglia...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.jvm...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.spi...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.annotation...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.filter...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.sink.ganglia...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.source...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
  [javadoc] Loading source files for package org.apache.hadoop.net...
  [javadoc] Loading source files for package org.apache.hadoop.record...
  [javadoc] Loading source files for package 
org.apache.hadoop.record.compiler...
  [javadoc] Loading source files for package 
org.apache.hadoop.record.compiler.ant...
  [javadoc] Loading source files for package 
org.apache.hadoop.record.compiler.generated...
  [javadoc] Loading source files for package org.apache.hadoop.record.meta...
  [javadoc] Loading source files for package org.apache.hadoop.security...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.authorize...
  [javadoc] Loading source files for package org.apache.hadoop.security.token...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation...
  [javadoc] Loading source files for package org.apache.hadoop.tools...
  [javadoc] Loading source files for package org.apache.hadoop.util...
  [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
  [javadoc] Loading source files for package org.apache.hadoop.util.hash...
  [javadoc] 2 errors
 [xslt] Processing 
https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-common/target/findbugsXml.xml
 to 
https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-common/target/site/findbugs.html
 [xslt] Loading stylesheet 
/home/jenkins/tools/findbugs/latest/src/xsl/default.xsl
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run 

[jira] [Created] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)
Owen O'Malley created HADOOP-8338:
-

 Summary: Can't renew or cancel HDFS delegation tokens over secure 
RPC
 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley


The fetchdt tool is failing for secure deployments when given --renew or 
--cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-8339:
-

 Summary: jenkins complaining about 16 javadoc warnings 
 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves


See any of the recent mapreduce/hadoop Jenkins reports; they all complain about 
16 javadoc warnings.


-1 javadoc.  The javadoc tool appears to have generated 16 warning messages.

This really means there are 24 warnings, since 8 more are supposed to be OK and 
are excluded from the reported count.





File Busy or not check. Kindly Help

2012-05-01 Thread Ravi Shankar
Hi All,

I am trying to check whether a file is busy, i.e. still being copied into DFS 
from local; if it is, the MR job should not run, otherwise it should. What I 
have observed is that if the file is still being written to DFS and I start the 
MR job on it, the job runs (which may include operations that move the file) 
and succeeds; then, after the MR job completes, because the file has been moved 
by the MR job, the still-running copy operation throws a file-does-not-exist 
exception.

I need the MR job to wait until the file has been completely copied and only 
then run on it.

Kindly help on this.

Thanks & Regards,
Ravi Shankar,
Cross Functional Services - Solution- Cloud
HCL Technologies Ltd. - Infrastructure Services Division
F 8,9 Sec - III, Noida, India - 201301. Cell - +91 995369


[jira] [Created] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8340:
---

 Summary: SNAPSHOT build versions should compare as less than their 
eventual final release
 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


We recently added a utility function to compare two version strings, based on 
splitting on '.'s and comparing each component. However, it considers a version 
like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, since 
SNAPSHOT builds come before the final release.
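
A minimal sketch of the ordering being asked for (hypothetical class and method 
names, not the actual Hadoop utility): strip a trailing -SNAPSHOT before the 
numeric comparison, then break ties so that the SNAPSHOT orders before the 
final release.

{code}
// Hypothetical sketch only: illustrates the desired ordering, not the real
// Hadoop version-comparison utility. Assumes purely numeric components.
public final class VersionOrderSketch {

  private static final String SNAPSHOT = "-SNAPSHOT";

  public static int compare(String v1, String v2) {
    boolean snap1 = v1.endsWith(SNAPSHOT);
    boolean snap2 = v2.endsWith(SNAPSHOT);
    String[] p1 = strip(v1, snap1).split("\\.");
    String[] p2 = strip(v2, snap2).split("\\.");

    // Compare numeric components, treating missing components as 0.
    for (int i = 0; i < Math.max(p1.length, p2.length); i++) {
      int c1 = i < p1.length ? Integer.parseInt(p1[i]) : 0;
      int c2 = i < p2.length ? Integer.parseInt(p2[i]) : 0;
      if (c1 != c2) {
        return c1 < c2 ? -1 : 1;
      }
    }
    // Same numeric version: a SNAPSHOT orders before the final release.
    if (snap1 == snap2) {
      return 0;
    }
    return snap1 ? -1 : 1;
  }

  private static String strip(String v, boolean snapshot) {
    return snapshot ? v.substring(0, v.length() - SNAPSHOT.length()) : v;
  }

  public static void main(String[] args) {
    System.out.println(compare("2.0.0-SNAPSHOT", "2.0.0")); // -1: snapshot first
    System.out.println(compare("2.0.0", "2.0.0-SNAPSHOT")); //  1
    System.out.println(compare("2.0.0", "2.0.0"));          //  0
  }
}
{code}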





[jira] [Created] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-01 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-8341:
---

 Summary: Fix or filter findbugs issues in hadoop-tools
 Key: HADOOP-8341
 URL: https://issues.apache.org/jira/browse/HADOOP-8341
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans


Now that the precommit build can test hadoop-tools, we need to fix or filter the 
many findbugs warnings that are popping up there.





Re: File Busy or not check. Kindly Help

2012-05-01 Thread Harsh J
Hey Ravi,

I just answered the same question not too long ago, which you can read
at: http://search-hadoop.com/m/o2DhNQPv7G1
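
For the archive, a minimal sketch of the copy-then-rename convention that is 
usually suggested for this (paths are hypothetical, and this is not taken from 
the linked thread): the writer copies into a temporary name and renames it into 
place only once the copy has finished, so the MR job never sees a half-written 
file.

{code}
// Hypothetical sketch: publish a file to HDFS only once the copy is complete.
// The .tmp suffix is an assumed convention; the MR job reads only final names.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyThenRename {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path local = new Path("/local/data/input.txt");            // hypothetical local file
    Path tmp = new Path("/user/ravi/incoming/input.txt.tmp");  // ignored by the job
    Path done = new Path("/user/ravi/incoming/input.txt");     // job input path

    fs.copyFromLocalFile(local, tmp);      // may take a long time
    if (!fs.rename(tmp, done)) {           // rename is atomic in HDFS
      throw new java.io.IOException("rename failed: " + tmp + " -> " + done);
    }
    // Submit the MR job only over paths that do not end in .tmp.
  }
}
{code}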


-- 
Harsh J


[jira] [Resolved] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans resolved HADOOP-8172.
-

   Resolution: Fixed
Fix Version/s: 3.0.0
   2.0.0

Thanks Anupam,

I put this into trunk and branch-2.  +1

 Configuration no longer sets all keys in a deprecated key list.
 ---

 Key: HADOOP-8172
 URL: https://issues.apache.org/jira/browse/HADOOP-8172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Robert Joseph Evans
Assignee: Anupam Seth
Priority: Critical
 Fix For: 2.0.0, 3.0.0

 Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch


 I did not look at the patch for HADOOP-8167 previously, but I did in response 
 to a recent test failure. The patch appears to have changed the following 
 code (I am just paraphrasing the code)
 {code}
 if (!deprecated(key)) {
   set(key, value);
 } else {
   for (String newKey : deprecatedKeyMap.get(key)) {
     set(newKey, value);
   }
 }
 {code}
 to be 
 {code}
 set(key, value);
 if (deprecatedKeyMap.containsKey(key)) {
   set(deprecatedKeyMap.get(key)[0], value);
 } else if (reverseKeyMap.containsKey(key)) {
   set(reverseKeyMap.get(key), value);
 }
 {code}
 If a key is deprecated and is mapped to more than one new key, only the first 
 one in the list will be set, whereas previously all of them would be set.
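
For context, a minimal sketch of the behavior the report argues should be 
restored (the map and set() below are stand-ins, not the real 
org.apache.hadoop.conf.Configuration internals): every new key that a 
deprecated key maps to gets set, not just the first.

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only; field and method names are illustrative.
public class DeprecationSketch {
  private final Map<String, String> props = new HashMap<String, String>();
  private final Map<String, String[]> deprecatedKeyMap = new HashMap<String, String[]>();

  void set(String key, String value) {
    props.put(key, value);
  }

  // Desired behavior: when a deprecated key maps to several new keys,
  // setting it must update every mapped key, not only the first.
  void setWithDeprecation(String key, String value) {
    set(key, value);
    String[] newKeys = deprecatedKeyMap.get(key);
    if (newKeys != null) {
      for (String newKey : newKeys) {
        set(newKey, value);
      }
    }
  }
}
{code}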





[jira] [Resolved] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HADOOP-8338.
---

   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.3
 Hadoop Flags: Reviewed

I committed this to branch-1.0 and branch-1. Trunk was already referencing 
HdfsConfiguration in DelegationTokenFetcher, so the problem won't happen.

 Can't renew or cancel HDFS delegation tokens over secure RPC
 

 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 1.0.3, 1.1.0

 Attachments: hadoop-8338.patch


 The fetchdt tool is failing for secure deployments when given --renew or 
 --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
 renewed and canceled fine.)
