Hadoop-Hdfs-trunk-Java8 - Build # 826 - Still Failing

2016-01-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/826/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6007 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:54 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:55 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.057 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:59 h
[INFO] Finished at: 2016-01-23T09:17:32+00:00
[INFO] Final Memory: 56M/438M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten

Error Message:
Timed out waiting for /test to reach 3 replicas

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for /test to reach 3 
replicas
at 
org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:768)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:644)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:569)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateWithDataNodesRestartImmediately

Error Message:
inode should complete in ~3 ms.
Expected: is 
 but: was 

Stack Trace:
java.lang.AssertionError: inode should complete in ~3 ms.
Expected: is 
 but: was 
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at 
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1196)
at 
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.checkBlockRecovery(TestFileTruncate.java:1180)
at 

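Both failures are timing-related waits. For context, a wait such as DFSTestUtil.waitReplication boils down to a poll-with-timeout loop; a minimal sketch of the pattern (hypothetical probe method, not the actual Hadoop code):

{code}
import java.util.concurrent.TimeoutException;

// Minimal sketch of the poll-with-timeout pattern behind waits like
// DFSTestUtil.waitReplication. getReplicaCount() is a hypothetical probe;
// the real code asks the NameNode for the file's block locations.
final class WaitSketch {
  static void waitForReplicas(String path, int expected, long timeoutMs)
      throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (getReplicaCount(path) < expected) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("Timed out waiting for " + path
            + " to reach " + expected + " replicas");
      }
      Thread.sleep(100); // poll interval; on a loaded build host replication
                         // can legitimately outlast tight deadlines
    }
  }
  private static int getReplicaCount(String path) { return 0; /* hypothetical */ }
}
{code}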
Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #826

2016-01-23 Thread Apache Jenkins Server
See 

Changes:

[xgong] YARN-4496. Improve HA ResourceManager Failover detection on the client.

[rohithsharmaks] YARN-4614. Fix random failure in

--
[...truncated 5814 lines...]
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.242 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.249 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.348 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.576 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.078 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.869 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.824 sec - in 
org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.432 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.948 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.223 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.045 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 127.975 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.557 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.158 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.765 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.264 sec - in 

[jira] [Created] (HDFS-9691) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-01-23 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9691:
---

 Summary: 
o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
fails intermittently
 Key: HDFS-9691
 URL: https://issues.apache.org/jira/browse/HDFS-9691
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


It's a flaky test method and can rarely be reproduced locally. We can see 
this happening in recent builds, e.g.:
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/14225/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManagerSafeMode/testCheckSafeMode/
* 
https://builds.apache.org/job/PreCommit-HDFS-Build/14139/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManagerSafeMode/testCheckSafeMode/

{code}
Error Message

expected: but was:
Stacktrace

java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:165)
{code}
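
One common way to harden a timing-sensitive assertion like this is to poll for the expected state instead of asserting it once. A rough sketch using Hadoop's GenericTestUtils.waitFor (the actual fix for this test may well differ):

{code}
import java.util.concurrent.TimeoutException;
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Rough sketch: poll until the safe-mode status reaches the expected value
// or time out. bm.isInSafeMode() stands in for whatever the test asserts.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return !bm.isInSafeMode(); // hypothetical probe from the test fixture
  }
}, 100 /* check every 100 ms */, 10000 /* give up after 10 s */);
{code}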



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9690) addBlock is not idempotent

2016-01-23 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-9690:
-

 Summary: addBlock is not idempotent
 Key: HDFS-9690
 URL: https://issues.apache.org/jira/browse/HDFS-9690
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


TestDFSClientRetries#testIdempotentAllocateBlockAndClose illustrates the 
bug. It failed in the following builds:
- 
https://builds.apache.org/job/PreCommit-HDFS-Build/14188/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testIdempotentAllocateBlockAndClose/
- 
https://builds.apache.org/job/PreCommit-HDFS-Build/14201/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testIdempotentAllocateBlockAndClose/
- 
https://builds.apache.org/job/PreCommit-HDFS-Build/14202/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testIdempotentAllocateBlockAndClose/
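
To see why non-idempotence matters here: if the client's RPC times out after the NameNode has already allocated a block, a blind retry allocates a second block. A simplified illustration (not the real ClientProtocol signature):

{code}
// Simplified sketch of the retry hazard; not the actual ClientProtocol API.
// Every successful addBlock call allocates a fresh block for the file.
LocatedBlock b1 = namenode.addBlock("/f", clientName); // response lost in transit
LocatedBlock b2 = namenode.addBlock("/f", clientName); // client blindly retries
// The file now carries an extra block: the client writes to b2 while b1 stays
// empty and under construction, and close() can fail its last-block checks.
// A truly idempotent addBlock would detect the retry and return b1 again.
{code}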



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9692) Report top 10 Namenode rpc consumers through JMX

2016-01-23 Thread Nikhil Mulley (JIRA)
Nikhil Mulley created HDFS-9692:
---

 Summary: Report top 10 Namenode rpc consumers through JMX
 Key: HDFS-9692
 URL: https://issues.apache.org/jira/browse/HDFS-9692
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: Nikhil Mulley


Hi,

I think it would really help if the namenode(s) reported the top rpc 
consumers through metrics/jmx; that would make it really handy to spot rogue 
clients while troubleshooting. When rpc spikes hit the namenode and the call 
queue length grows, it becomes tedious to figure out the offenders on a huge 
cluster (>1k nodes). Having rpc client information (src_host:src_port) in the 
top consumers list would help operators. Let me know if any other information 
is needed to get this feature added to the namenode metrics system.
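
As a rough sketch of what I mean (a hypothetical class, not existing namenode 
code), a per-client call counter exposed over JMX could look like:

{code}
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.Collectors;

// Hypothetical sketch: count rpcs per client (src_host:src_port) and
// expose the top 10 through a standard JMX MBean.
public class TopRpcConsumers implements TopRpcConsumersMBean {
  private final ConcurrentHashMap<String, LongAdder> calls = new ConcurrentHashMap<>();

  // The rpc server would call this for every incoming request.
  public void record(String clientHostPort) {
    calls.computeIfAbsent(clientHostPort, k -> new LongAdder()).increment();
  }

  @Override
  public List<String> getTop10Consumers() {
    return calls.entrySet().stream()
        .sorted(Comparator.comparingLong(
            (Map.Entry<String, LongAdder> e) -> e.getValue().sum()).reversed())
        .limit(10)
        .map(e -> e.getKey() + "=" + e.getValue().sum())
        .collect(Collectors.toList());
  }
}

interface TopRpcConsumersMBean {
  List<String> getTop10Consumers();
}
{code}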

thank you

Nikhil




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: Hadoop encryption module as Apache Chimera incubator project

2016-01-23 Thread Chen, Haifeng
> [Kai] So far I saw it's mainly about AES-256. I suggest the scope be 
> expanded a little bit, perhaps to a dedicated high-performance encryption 
> library; then we would have quite a lot to contribute to it, like other 
> ciphers, MACs, PRNGs and so on. Then both Hadoop and Spark can benefit from 
> it.

> [UMA] Yes, once development starts as a separate project, it's free to 
> evolve and provide more improvements, supporting more customers/use cases 
> for encryption based on demand.
Haifeng, would you add some points here?

Currently Chimera provides an optimization path (AES-NI) for AES CBC/CTR (128, 
192, or 256) at the Cipher level, and provides a stream-level API for AES/CTR 
(128, 192, 256). Additionally and separately, it provides an optimized 
SecureRandom utility backed by DRNG (a hardware-based true random number 
generator) to enhance the security of key generation.
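
To make the stream-level idea concrete, with plain JCE an AES/CTR stream looks 
roughly like the following (standard JDK APIs only; Chimera's own API differs 
but wraps the same idea with an AES-NI fast path):

{code}
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.security.SecureRandom;

// Plain-JCE sketch of stream-level AES/CTR encryption.
public class CtrStreamSketch {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16]; // AES-128; use 24/32 bytes for AES-192/256
    byte[] iv = new byte[16];  // initial CTR counter block
    SecureRandom rng = new SecureRandom();
    rng.nextBytes(key);
    rng.nextBytes(iv);

    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));

    try (OutputStream out =
        new CipherOutputStream(new FileOutputStream("data.enc"), cipher)) {
      out.write("hello".getBytes("UTF-8")); // ciphertext goes to data.enc
    }
  }
}
{code}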

Just as Uma pointed out, more improvements and features will come in this 
space, for example support for more AES modes in the Cipher layer API and 
stream API support beyond CTR mode. As for other ciphers or MACs, if our 
layer in Chimera adds value, we can certainly consider them in the future 
roadmap.

Thanks,
Haifeng



-Original Message-
From: Zheng, Kai [mailto:kai.zh...@intel.com] 
Sent: Friday, January 22, 2016 9:11 AM
To: hdfs-dev@hadoop.apache.org
Subject: RE: Hadoop encryption module as Apache Chimera incubator project

Thanks Chris for the pointer and Uma for the confirm!

I'm happy to learn about HADOOP-11127, and there are already many solid 
discussions in it. I will go through it, do my own investigation, and see how 
I can help with the effort.

Sure, let's go back to Chimera, and sorry for the interruption.

Regards,
Kai

-Original Message-
From: Gangumalla, Uma [mailto:uma.ganguma...@intel.com]
Sent: Friday, January 22, 2016 8:38 AM
To: hdfs-dev@hadoop.apache.org
Subject: Re: Hadoop encryption module as Apache Chimera incubator project

>Uma and everyone, thank you for the proposal.  +1 to proceed.
Thanks Chris for your feedback.

Kai wrote:
I believe Haifeng had mentioned the problem in a call when discussing the 
erasure coding work, but only now do I understand what the problem is and how 
Chimera or Snappy Java solved it. It looks like there can be some thin clients 
that don't rely on a Hadoop installation, so no libhadoop.so is available to 
use on the client host. The approach mentioned here is to bundle the library 
file (*.so) into a jar and dynamically extract the file when loading it. When 
no library file is contained in the jar, it falls back to the normal case, 
loading it from an installation. It's smart and nice! My question is, could we 
consider adopting the approach for the libhadoop.so library? It might be worth 
discussing because we're bundling more and more things into the library 
(recently we just put Intel ISA-L support into it), and such things may be 
desired for such clients. It may also be helpful for development, because 
sometimes unit tests that involve native code fail complaining that 
libhadoop.so cannot be found. Thanks.
[UMA] Good points, Kai. It is worth thinking about and investing some effort 
to solve the libhadoop.so part.
As Chris suggested, taking this discussion to the JIRA HADOOP-11127 is the 
more appropriate thing to do.
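
For reference, the extract-from-jar-with-fallback loading that Kai describes 
is roughly the following (a sketch with a hypothetical resource path, not 
Snappy Java's actual loader):

{code}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of bundle-and-extract native loading. The resource path
// "/native/libhadoop.so" is hypothetical.
public final class NativeLoaderSketch {
  static void loadNative() {
    try (InputStream in =
        NativeLoaderSketch.class.getResourceAsStream("/native/libhadoop.so")) {
      if (in != null) {
        // Library bundled in the jar: extract to a temp file and load it.
        Path tmp = Files.createTempFile("libhadoop", ".so");
        tmp.toFile().deleteOnExit();
        Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        System.load(tmp.toAbsolutePath().toString());
      } else {
        // Normal case: fall back to the installed library on java.library.path.
        System.loadLibrary("hadoop");
      }
    } catch (Exception e) {
      throw new UnsatisfiedLinkError("failed to load libhadoop: " + e);
    }
  }
}
{code}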


Regards,
Uma


On 1/21/16, 12:18 PM, "Chris Nauroth"  wrote:

>> My question is, could we consider adopting the approach for 
>>the libhadoop.so library?
>
>
>This is something that I have already proposed in HADOOP-11127.  There 
>is no consensus on proceeding with it from the contributors in that 
>discussion.  There are some big challenges around how it would impact 
>the release process.  I also have not had availability to prototype an 
>implementation to make a stronger case for feasibility.  Kai, if this 
>is something that you're interested in, then I encourage you to join 
>the discussion in HADOOP-11127 or even pick up prototyping work if you'd like.
> Since we have that existing JIRA, let's keep this mail thread focused 
>just on Chimera.  Thank you!
>
>Uma and everyone, thank you for the proposal.  +1 to proceed.
>
>--Chris Nauroth
>
>
>
>
>On 1/20/16, 11:16 PM, "Zheng, Kai"  wrote:
>
>>Thanks Uma. 
>>
>>I have a question, by the way; it's not about the Chimera project, but 
>>about the mentioned advantage 1 and the libhadoop.so installation problem.
>>I copied the text below for convenience.
>>
1. As Chimera embeds the native library in its jar (similar to Snappy Java), 
it solves the current issue in Hadoop that an HDFS client has to depend on 
libhadoop.so if the client needs to read an encryption zone in HDFS. This 
means an HDFS client may have to depend on a Hadoop installation on the local 
machine. For example, HBase depends on the HDFS client jar rather than on a 
Hadoop installation and therefore has no access to libhadoop.so, so HBase 
cannot use an encryption zone without hitting errors.
>>