Build failed in Jenkins: Hadoop-Common-trunk #539

2012-09-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/539/changes

Changes:

[suresh] HADOOP-8814. Replace string equals "" by String#isEmpty(). Contributed 
by Brandon Li.

[eli] HDFS-3949. NameNodeRpcServer#join should join on both client and server 
RPC servers. Contributed by Eli Collins

[tucu] Reverting HADOOP-8805

[tucu] HDFS-3951. datanode web ui does not work over HTTPS when datanode is 
started in secure mode. (tucu)

--
[...truncated 26486 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository = id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/site
[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = 
[org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, 
com.google.code.findbugs:bcel:jar:1.3.9:compile, 
org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, 
org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, 
org.apache.ant:ant:jar:1.7.1:compile, 
org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, 
org.codehaus.plexus:plexus-interpolation:jar:1.1:compile, 
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, 
org.sonatype.gshell:gshell-io:jar:2.0:compile, 
com.thoughtworks.qdox:qdox:jar:1.10:compile, 
org.apache.maven.shared:file-management:jar:1.2.1:compile, 
org.apache.maven.shared:maven-shared-io:jar:1.1:compile, 
commons-lang:commons-lang:jar:2.4:compile, 
org.slf4j:slf4j-api:jar:1.5.10:compile, 
org.sonatype.gossip:gossip:jar:1.2:compile, 
org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, 
commons-validator:commons-validator:jar:1.2.0:compile, 
commons-beanutils:commons-beanutils:jar:1.7.0:compile, 
commons-digester:commons-digester:jar:1.6:compile, 
commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, 
xml-apis:xml-apis:jar:1.0.b2:compile, 
org.codehaus.groovy:groovy-all:jar:1.7.4:compile, 
org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, 
org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, 
xerces:xercesImpl:jar:2.9.1:compile, 
commons-httpclient:commons-httpclient:jar:3.1:compile, 
commons-codec:commons-codec:jar:1.2:compile, 
org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, 
org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, 
org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, 
org.apache.velocity:velocity:jar:1.5:compile, 
commons-collections:commons-collections:jar:3.2:compile, 
org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, 
commons-io:commons-io:jar:1.4:compile, 
com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, 
com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, 
com.google.code.findbugs:jFormatString:jar:1.3.9:compile, 
com.google.code.findbugs:annotations:jar:1.3.9:compile, 
dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, 
jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, 
xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, 
com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, 
asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, 
asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, 
asm:asm-xml:jar:3.1:compile, jgoodies:plastic:jar:1.2.0:compile, 
org.codehaus.plexus:plexus-resources:jar:1.0-alpha-4:compile, 
org.codehaus.plexus:plexus-utils:jar:1.5.1:compile]
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG]   (s) relaxed = false
[DEBUG]   (s) remoteArtifactRepositories = [   id: apache.snapshots.https
  url: https://repository.apache.org/content/repositories/snapshots
   layout: default
snapshots: [enabled = true, 

[jira] [Created] (HADOOP-8830) org.apache.hadoop.security.authentication.server.AuthenticationFilter might be called twice, causing kerberos replay errors

2012-09-20 Thread Moritz Moeller (JIRA)
Moritz Moeller created HADOOP-8830:
--

 Summary: 
org.apache.hadoop.security.authentication.server.AuthenticationFilter might be 
called twice, causing kerberos replay errors
 Key: HADOOP-8830
 URL: https://issues.apache.org/jira/browse/HADOOP-8830
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Moritz Moeller


AuthenticationFilter.doFilter is called twice (I am not sure whether that is 
intentional).

The second time it is called, the ServletRequest is already authenticated, i.e. 
httpRequest.getRemoteUser() returns non-null info.

If Kerberos authentication is triggered a second time, it raises a replay-attack 
exception.

I solved this by adding an if (httpRequest.getRemoteUser() == null) check at the 
very beginning of doFilter.

Alternatively, one can set an attribute on the request, or figure out why 
doFilter is called twice.
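A minimal sketch of that guard (the class and its wiring are hypothetical stand-ins; only the getRemoteUser() check and the filter-chain call come from the description above):

{code}
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Hypothetical stand-in for the real AuthenticationFilter, shown only to
// illustrate where the guard would go.
public class GuardedAuthenticationFilter implements Filter {
  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain filterChain) throws IOException, ServletException {
    HttpServletRequest httpRequest = (HttpServletRequest) request;
    if (httpRequest.getRemoteUser() != null) {
      // Already authenticated on an earlier pass through the filter chain;
      // skip Kerberos negotiation so it cannot raise a replay error.
      filterChain.doFilter(request, response);
      return;
    }
    // ... the existing Kerberos/SPNEGO authentication logic would run here,
    // then continue the chain ...
    filterChain.doFilter(request, response);
  }

  @Override public void init(FilterConfig config) {}
  @Override public void destroy() {}
}
{code}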


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8831) FSEditLog preallocate() needs to reset the position of PREALLOCATE_BUFFER when more than 1MB is needed

2012-09-20 Thread Jing Zhao (JIRA)
Jing Zhao created HADOOP-8831:
-

 Summary: FSEditLog preallocate() needs to reset the position of 
PREALLOCATE_BUFFER when more than 1MB is needed
 Key: HADOOP-8831
 URL: https://issues.apache.org/jira/browse/HADOOP-8831
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Jing Zhao
Priority: Critical


In the new preallocate() function, when the required size is larger than 1MB, we 
need to reset the position of PREALLOCATION_BUFFER every time we have allocated 
another 1MB. Otherwise it seems only 1MB can be allocated, even if more than 1MB 
is needed.
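A minimal sketch of such a reset loop (the method shape, field name, and chunking are assumptions based on the description above, not the actual FSEditLog code):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

class PreallocateSketch {
  // Zero-filled 1MB buffer, standing in for FSEditLog's preallocation buffer.
  private static final ByteBuffer PREALLOCATION_BUFFER =
      ByteBuffer.allocateDirect(1024 * 1024);

  // Rewind the shared buffer before EVERY chunk. Without the rewind, the
  // first write() drains the buffer, later write() calls transfer 0 bytes,
  // and only the first 1MB ever gets preallocated.
  static void preallocate(FileChannel fc, long needed) throws IOException {
    long position = fc.size();
    while (needed > 0) {
      PREALLOCATION_BUFFER.position(0);
      PREALLOCATION_BUFFER.limit((int) Math.min(needed, 1024 * 1024));
      int written = fc.write(PREALLOCATION_BUFFER, position);
      position += written;
      needed -= written;
    }
  }
}
{code}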



[jira] [Created] (HADOOP-8832) backport HADOOP-5257 to branch-1

2012-09-20 Thread Brandon Li (JIRA)
Brandon Li created HADOOP-8832:
--

 Summary: backport HADOOP-5257 to branch-1
 Key: HADOOP-8832
 URL: https://issues.apache.org/jira/browse/HADOOP-8832
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Brandon Li
Assignee: Brandon Li


The original patch was only partially backported to branch-1. This JIRA is to 
backport the rest of it.



[jira] [Resolved] (HADOOP-8832) backport serviceplugin to branch-1

2012-09-20 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8832.
-

   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed

I committed the patch. Thank you, Brandon, for backporting it.

 backport serviceplugin to branch-1
 --

 Key: HADOOP-8832
 URL: https://issues.apache.org/jira/browse/HADOOP-8832
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 1.2.0

 Attachments: HADOOP-8832.branch-1.patch, HADOOP-8832.branch-1.patch, 
 HADOOP-8832.branch-1.patch.all


 The original patch was only partially backported to branch-1. This JIRA is 
 to backport the rest of it.



[jira] [Created] (HADOOP-8833) fs -text should make sure to call inputstream.seek(0) before using input stream

2012-09-20 Thread Harsh J (JIRA)
Harsh J created HADOOP-8833:
---

 Summary: fs -text should make sure to call inputstream.seek(0) 
before using input stream
 Key: HADOOP-8833
 URL: https://issues.apache.org/jira/browse/HADOOP-8833
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Harsh J
Assignee: Harsh J


From Muddy Dixon on HADOOP-8449:

Hi,
We found that the order of the switch statement and the codec guard block 
changed in
{code}
private InputStream forMagic(Path p, FileSystem srcFs) throws IOException
{code}
Because of this change, the return value of
{code}
codec.createInputStream(i)
{code}
changes whenever a codec exists.

Old:

{code}
private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
  FSDataInputStream i = srcFs.open(p);

  // check codecs
  CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
  CompressionCodec codec = cf.getCodec(p);
  if (codec != null) {
    return codec.createInputStream(i);
  }

  switch (i.readShort()) {
    // cases
  }
}
{code}

New:

{code}
private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
  FSDataInputStream i = srcFs.open(p);

  switch (i.readShort()) { // <== this read advances the stream position!
    // cases
    default: {
      // Check the type of compression instead, depending on Codec class's
      // own detection methods, based on the provided path.
      CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
      CompressionCodec codec = cf.getCodec(p);
      if (codec != null) {
        return codec.createInputStream(i);
      }
      break;
    }
  }

  // File is non-compressed, or not a file container we know.
  i.seek(0);
  return i;
}
{code}

The fix is to call i.seek(0) before we use i anywhere. I missed that.
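For clarity, here is the same method with that fix applied (a sketch assuming the structure quoted above; the actual committed patch may differ):

{code}
private InputStream forMagic(Path p, FileSystem srcFs) throws IOException {
  FSDataInputStream i = srcFs.open(p);

  switch (i.readShort()) {
    // cases
    default: {
      CompressionCodecFactory cf = new CompressionCodecFactory(getConf());
      CompressionCodec codec = cf.getCodec(p);
      if (codec != null) {
        i.seek(0); // rewind: readShort() above already consumed two bytes
        return codec.createInputStream(i);
      }
      break;
    }
  }

  // File is non-compressed, or not a file container we know.
  i.seek(0);
  return i;
}
{code}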
