[jira] [Created] (HDFS-7734) Class cast exception in NameNode#main

2015-02-03 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-7734:
---

 Summary: Class cast exception in NameNode#main
 Key: HDFS-7734
 URL: https://issues.apache.org/jira/browse/HDFS-7734
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Priority: Blocker


NameNode hits the following exception immediately on startup.

{code}
15/02/03 15:50:25 ERROR namenode.NameNode: Failed to start namenode.
java.lang.ClassCastException: org.apache.log4j.Logger cannot be cast to org.apache.commons.logging.Log
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1557)
15/02/03 15:50:25 INFO util.ExitUtil: Exiting with status 1
{code}

Location of the exception in NameNode.java:
{code}
  public static void main(String argv[]) throws Exception {
    if (DFSUtil.parseHelpArgument(argv, NameNode.USAGE, System.out, true)) {
      System.exit(0);
    }

    try {
      StringUtils.startupShutdownMessage(NameNode.class, argv,
          (org.apache.commons.logging.Log)
          LogManager.getLogger(LOG.getName()));   // <-- Failed here.
{code}
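For context, log4j's {{LogManager.getLogger}} returns an {{org.apache.log4j.Logger}}, which is unrelated to the {{org.apache.commons.logging.Log}} interface, so the cast can never succeed at runtime. A minimal self-contained stand-in (the {{Log}} and {{Log4jLogger}} types below are hypothetical substitutes for the real classes) reproduces the failure mode:

```java
// Stand-ins for the two unrelated logging types; names are hypothetical,
// chosen only to mirror org.apache.commons.logging.Log and
// org.apache.log4j.Logger from the stack trace above.
public class CastDemo {
    /** Stand-in for org.apache.commons.logging.Log. */
    interface Log {
        void info(String msg);
    }

    /** Stand-in for org.apache.log4j.Logger: note it does NOT implement Log. */
    static class Log4jLogger {
        void info(String msg) { System.out.println(msg); }
    }

    /** Returns true iff the cast fails, mirroring the cast in NameNode#main. */
    static boolean castFails() {
        Object logger = new Log4jLogger(); // what LogManager.getLogger(...) hands back
        try {
            Log unused = (Log) logger;     // ClassCastException: unrelated types
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(castFails()); // prints true
    }
}
```

One hedged fix direction (not necessarily the committed patch) would be to pass the already-typed commons-logging {{LOG}} field to {{startupShutdownMessage}} instead of re-fetching a log4j {{Logger}} and casting it.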




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7733) NFS: readdir/readdirplus return null directory attribute on failure

2015-02-03 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-7733:
---

 Summary: NFS: readdir/readdirplus return null directory attribute on failure
 Key: HDFS-7733
 URL: https://issues.apache.org/jira/browse/HDFS-7733
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Arpit Agarwal


NFS readdir and readdirplus operations return a null directory attribute on some failure paths. This causes clients to get a 'Stale file handle' error, which can only be fixed by unmounting and remounting the share.

The issue can be reproduced by running 'ls' against a large directory that is being actively modified, triggering the 'cookie mismatch' failure path.

{code}
} else {
  LOG.error("cookieverf mismatch. request cookieverf: " + cookieVerf
      + " dir cookieverf: " + dirStatus.getModificationTime());
  return new READDIRPLUS3Response(Nfs3Status.NFS3ERR_BAD_COOKIE);
}
{code}
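A hedged sketch of the direction a fix could take: return the post-op directory attributes together with the error status, rather than a response whose attribute field is null. The types below are simplified self-contained stand-ins, not the real org.apache.hadoop NFS3 response classes; only the NFS3ERR_BAD_COOKIE code (10003, from RFC 1813) is taken from the protocol.

```java
// Simplified stand-ins for the real NFS3 response classes; the real
// READDIRPLUS3Response and Nfs3FileAttributes have many more fields.
public class BadCookieSketch {
    static final int NFS3ERR_BAD_COOKIE = 10003; // RFC 1813 error code

    /** Stand-in for the directory's post-operation attributes. */
    static class Nfs3FileAttributes {
        final long modificationTime;
        Nfs3FileAttributes(long mtime) { this.modificationTime = mtime; }
    }

    /** Stand-in response that always carries the directory attributes. */
    static class READDIRPLUS3Response {
        final int status;
        final Nfs3FileAttributes postOpDirAttr;
        READDIRPLUS3Response(int status, Nfs3FileAttributes postOpDirAttr) {
            this.status = status;
            this.postOpDirAttr = postOpDirAttr;
        }
    }

    /** On a cookieverf mismatch, return the error WITH the attributes,
     *  so the client can see NFS3ERR_BAD_COOKIE instead of concluding
     *  the directory handle itself is stale. */
    static READDIRPLUS3Response badCookie(Nfs3FileAttributes dirAttr) {
        return new READDIRPLUS3Response(NFS3ERR_BAD_COOKIE, dirAttr);
    }
}
```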

Thanks to [~brandonli] for catching the issue.






[jira] [Created] (HDFS-7732) Fix the order of the parameters in DFSConfigKeys

2015-02-03 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-7732:
---

 Summary: Fix the order of the parameters in DFSConfigKeys
 Key: HDFS-7732
 URL: https://issues.apache.org/jira/browse/HDFS-7732
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
Priority: Trivial


In DFSConfigKeys.java, several unrelated parameters are declared between {{DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY}} and its {{DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT}}.
{code}
  public static final String DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY =
      "dfs.client.read.shortcircuit.buffer.size";
  public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_KEY =
      "dfs.client.read.shortcircuit.streams.cache.size";
  public static final int DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_DEFAULT = 256;
  public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_KEY =
      "dfs.client.read.shortcircuit.streams.cache.expiry.ms";
  public static final long DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_DEFAULT = 5 * 60 * 1000;
  public static final int DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT = 1024 * 1024;
{code}
The order should be corrected to:
{code}
  public static final String DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY =
      "dfs.client.read.shortcircuit.buffer.size";
  public static final int DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT = 1024 * 1024;
  public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_KEY =
      "dfs.client.read.shortcircuit.streams.cache.size";
  public static final int DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_SIZE_DEFAULT = 256;
  public static final String DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_KEY =
      "dfs.client.read.shortcircuit.streams.cache.expiry.ms";
  public static final long DFS_CLIENT_READ_SHORTCIRCUIT_STREAMS_CACHE_EXPIRY_MS_DEFAULT = 5 * 60 * 1000;
{code}
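Keeping each KEY adjacent to its DEFAULT matters because the two are consumed together at the call site, in the style of {{Configuration.getInt(key, default)}}. A self-contained sketch, where {{java.util.Properties}} and the {{getInt}} helper stand in for the real {{org.apache.hadoop.conf.Configuration}} (the key name and default value are the real ones from DFSConfigKeys):

```java
import java.util.Properties;

// Demonstrates how a KEY/DEFAULT pair is used together. Properties and the
// getInt helper are stand-ins for Hadoop's Configuration.getInt(key, default);
// the key string and 1 MB default are taken from DFSConfigKeys above.
public class ShortCircuitBufferConf {
    public static final String DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY =
        "dfs.client.read.shortcircuit.buffer.size";
    public static final int DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT = 1024 * 1024;

    /** Stand-in for Configuration.getInt(key, defaultValue). */
    static int getInt(Properties conf, String key, int defaultValue) {
        String v = conf.getProperty(key);
        return v == null ? defaultValue : Integer.parseInt(v.trim());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // Unset: falls back to the adjacent default (1048576).
        System.out.println(getInt(conf, DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY,
            DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT));
        // Explicitly configured: the configured value wins.
        conf.setProperty(DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY, "4096");
        System.out.println(getInt(conf, DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_KEY,
            DFS_CLIENT_READ_SHORTCIRCUIT_BUFFER_SIZE_DEFAULT));
    }
}
```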





Hadoop-Hdfs-trunk-Java8 - Build # 90 - Still Failing

2015-02-03 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/90/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11393 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:44 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.081 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-02-03T14:18:44+00:00
[INFO] Final Memory: 59M/248M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle 
(default-cli) on project hadoop-hdfs: An error has occurred in Checkstyle 
report generation. Failed during checkstyle configuration: cannot initialize 
module TreeWalker - Unable to instantiate DoubleCheckedLocking: Unable to 
instantiate DoubleCheckedLockingCheck -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #82
Archived 1 artifacts
Archive block size is 32768
Received 63 blocks and 96551804 bytes
Compression is 2.1%
Took 25 sec
Recording test results
Updating HDFS-5631
Updating MAPREDUCE-6143
Updating HDFS-7681
Updating HADOOP-11442
Updating HDFS-6651
Updating HADOOP-10181
Updating YARN-3113
Updating HADOOP-11494
Updating YARN-2808
Updating HDFS-5782
Updating YARN-2216
Updating HDFS-7696
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #90

2015-02-03 Thread Apache Jenkins Server
See 

Changes:

[benoy] HADOOP-11494. Lock acquisition on WrappedInputStream#unwrappedRpcBuffer 
may race with another thread. Contributed by Ted Yu.

[kihwal] YARN-3113. Release audit warning for Sorting icons.psd. Contributed by 
Steve Loughran.

[cnauroth] HADOOP-10181. GangliaContext does not work with multicast ganglia 
setup. Contributed by Andrew Johnson.

[cnauroth] HADOOP-11442. hadoop-azure: Create test jar. Contributed by Shashank 
Khandelwal.

[zjshen] YARN-2808. Made YARN CLI list attempt’s finished containers of a 
running application. Contributed by Naganarasimha G R.

[zjshen] YARN-2216. Fixed the change log.

[szetszwo] Move HDFS-5631, HDFS-5782 and HDFS-7681 to branch-2.

[szetszwo] HDFS-7696. In FsDatasetImpl, the getBlockInputStream(..) and 
getTmpInputStreams(..) methods may leak file descriptors.

[rkanter] MAPREDUCE-6143. add configuration for mapreduce speculative execution 
in MR2 (zxu via rkanter)

[wheat9] HDFS-6651. Deletion failure can leak inodes permanently. Contributed 
by Jing Zhao.

--
[...truncated 11200 lines...]
  [javadoc] [loading RegularFileObject[

Hadoop-Hdfs-trunk - Build # 2025 - Still Failing

2015-02-03 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2025/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10894 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:36 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.124 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:36 h
[INFO] Finished at: 2015-02-03T14:11:02+00:00
[INFO] Final Memory: 61M/745M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle 
(default-cli) on project hadoop-hdfs: An error has occurred in Checkstyle 
report generation. Failed during checkstyle configuration: cannot initialize 
module TreeWalker - Unable to instantiate DoubleCheckedLocking: Unable to 
instantiate DoubleCheckedLockingCheck -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2020
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 27113034 bytes
Compression is 0.0%
Took 7.8 sec
Recording test results
Updating HDFS-5631
Updating MAPREDUCE-6143
Updating HDFS-7681
Updating HADOOP-11442
Updating HDFS-6651
Updating HADOOP-10181
Updating YARN-3113
Updating HADOOP-11494
Updating YARN-2808
Updating HDFS-5782
Updating YARN-2216
Updating HDFS-7696
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #2025

2015-02-03 Thread Apache Jenkins Server
See 

Changes:

[benoy] HADOOP-11494. Lock acquisition on WrappedInputStream#unwrappedRpcBuffer 
may race with another thread. Contributed by Ted Yu.

[kihwal] YARN-3113. Release audit warning for Sorting icons.psd. Contributed by 
Steve Loughran.

[cnauroth] HADOOP-10181. GangliaContext does not work with multicast ganglia 
setup. Contributed by Andrew Johnson.

[cnauroth] HADOOP-11442. hadoop-azure: Create test jar. Contributed by Shashank 
Khandelwal.

[zjshen] YARN-2808. Made YARN CLI list attempt’s finished containers of a 
running application. Contributed by Naganarasimha G R.

[zjshen] YARN-2216. Fixed the change log.

[szetszwo] Move HDFS-5631, HDFS-5782 and HDFS-7681 to branch-2.

[szetszwo] HDFS-7696. In FsDatasetImpl, the getBlockInputStream(..) and 
getTmpInputStreams(..) methods may leak file descriptors.

[rkanter] MAPREDUCE-6143. add configuration for mapreduce speculative execution 
in MR2 (zxu via rkanter)

[wheat9] HDFS-6651. Deletion failure can leak inodes permanently. Contributed 
by Jing Zhao.

--
[...truncated 10701 lines...]
  [javadoc] rotocol/RemoteEditLog$1.class]]
  [javadoc] [loading RegularFileObject[

[jira] [Created] (HDFS-7731) Can not start HA namenode with security enabled

2015-02-03 Thread donhoff_h (JIRA)
donhoff_h created HDFS-7731:
---

 Summary: Can not start HA namenode with security enabled
 Key: HDFS-7731
 URL: https://issues.apache.org/jira/browse/HDFS-7731
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, journal-node, namenode, security
Affects Versions: 2.5.2
 Environment: Redhat6.2 Hadoop2.5.2
Reporter: donhoff_h


I am converting a secure non-HA cluster into a secure HA cluster. After completing the configuration and starting all the journalnodes, I executed the following commands on the original NameNode:
1. hdfs namenode -initializeSharedEdits   # this step succeeded
2. hadoop-daemon.sh start namenode  # this step failed.

So the namenode cannot be started. I verified that my principals are correct, and if I change back to the secure non-HA mode, the namenode starts fine.

The namenode log reported only the following errors, and I could not determine the cause from them:

2015-02-03 17:42:06,020 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3, http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
2015-02-03 17:42:06,024 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3' to transaction ID 68994
2015-02-03 17:42:06,154 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
        at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:438)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:455)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:184)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:816)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:676)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.rec