Hadoop-Hdfs-trunk - Build # 2852 - Failure

2016-02-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2852/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5760 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [05:56 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  05:10 h]
[INFO] Apache Hadoop HDFS Native Client ................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.138 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:16 h
[INFO] Finished at: 2016-02-20T07:43:23+00:00
[INFO] Final Memory: 57M/718M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
test timed out after 30 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at org.apache.hadoop.hdfs.DataStreamer.waitAndQueuePacket(DataStreamer.java:805)
at org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacket(DFSOutputStream.java:423)
at org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacketFull(DFSOutputStream.java:432)
at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:418)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:418)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:376)
at org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.createFile(TestDirectoryScanner.java:108)
at org.apache.hadoop.hd
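
A note on the message format: older JUnit 4 releases report a breached
@Test(timeout=...) limit as a plain java.lang.Exception, exactly as in the
trace above. A minimal sketch of the mechanism (the 30 ms value mirrors the
message; the limit TestDirectoryScanner actually sets may differ):

import org.junit.Test;

public class TimeoutSketch {
  // JUnit 4 runs the test body on a separate thread and, once the annotated
  // limit elapses, fails the test with
  // "java.lang.Exception: test timed out after 30 milliseconds".
  @Test(timeout = 30)  // hypothetical limit, in milliseconds
  public void blockedWrite() throws Exception {
    Object lock = new Object();
    synchronized (lock) {
      lock.wait();  // parks forever, like the DataStreamer wait above
    }
  }
}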

Build failed in Jenkins: Hadoop-Hdfs-trunk #2852

2016-02-19 Thread Apache Jenkins Server
See 

Changes:

[wang] MAPREDUCE-6637. Testcase Failure :

[rkanter] MAPREDUCE-6613. Change mapreduce.jobhistory.jhist.format default from

--
[...truncated 5567 lines...]
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.799 sec - 
in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.204 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 186.574 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.372 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.787 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.272 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.051 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.166 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.385 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.681 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.61 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.449 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.055 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.726 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.16 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.831 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.685 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.965 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 201.899 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.086 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.22 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.876 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.761 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.215 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.521 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Err

Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #924

2016-02-19 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-9838) Refactor the excessReplicateMap to a class

2016-02-19 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-9838:
-

 Summary: Refactor the excessReplicateMap to a class
 Key: HDFS-9838
 URL: https://issues.apache.org/jira/browse/HDFS-9838
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h9838_20160219.patch

There is a lot of code duplication for accessing the excessReplicateMap in
BlockManager.  Let's refactor the related code into a class.
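
For readers skimming the digest, a sketch of the shape such a refactoring
could take (class and method names are hypothetical, not taken from the
attached patch):

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

// Hypothetical sketch: encapsulate the map so BlockManager callers stop
// duplicating the lookup/cleanup logic inline.
class ExcessReplicaMap {
  // datanode UUID -> block IDs that node holds in excess of the target
  private final Map<String, LinkedList<Long>> map = new HashMap<>();

  synchronized void add(String dnUuid, long blockId) {
    map.computeIfAbsent(dnUuid, k -> new LinkedList<>()).add(blockId);
  }

  synchronized boolean contains(String dnUuid, long blockId) {
    LinkedList<Long> blocks = map.get(dnUuid);
    return blocks != null && blocks.contains(blockId);
  }

  synchronized void remove(String dnUuid, long blockId) {
    LinkedList<Long> blocks = map.get(dnUuid);
    if (blocks != null && blocks.remove(Long.valueOf(blockId))
        && blocks.isEmpty()) {
      map.remove(dnUuid);  // drop empty per-node lists
    }
  }
}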



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 923 - Failure

2016-02-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/923/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5933 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [04:05 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  03:27 h]
[INFO] Apache Hadoop HDFS Native Client ................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.056 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:31 h
[INFO] Finished at: 2016-02-20T03:10:55+00:00
[INFO] Final Memory: 56M/505M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1242 ms.

Stack Trace:
java.io.IOException: write timedout too late in 1242 ms.
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.OutputStream.write(OutputStream.java:75)
at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1040)
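
For context, the write-timeout machinery under test is
org.apache.hadoop.net.SocketOutputStream, visible at the top of the trace. A
minimal sketch of the mechanism, assuming a peer that accepts the connection
but never reads:

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.nio.channels.SocketChannel;

import org.apache.hadoop.net.SocketOutputStream;

class WriteTimeoutSketch {
  static void demo(InetSocketAddress silentPeer) throws Exception {
    SocketChannel ch = SocketChannel.open(silentPeer);
    // 1000 ms write timeout: once the send buffer fills, the blocked write
    // should fail close to 1000 ms later. The test asserts exactly that
    // window; "write timedout too late" means it fired far beyond it.
    OutputStream out = new SocketOutputStream(ch, 1000);
    byte[] chunk = new byte[8192];
    try {
      while (true) {
        out.write(chunk);  // blocks once the peer's receive window is full
      }
    } catch (SocketTimeoutException expected) {
      // expected roughly 1000 ms after the buffer fills
    } finally {
      ch.close();
    }
  }
}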


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestStartup.testCompression

Error Message:
Problem binding to [localhost:37146] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:37146] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:414)
at sun.nio.ch.Net.bind(Net.java:406)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.S
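
A BindException like this usually means a hard-coded test port collided with
another process on the build slave; binding to an ephemeral port is the usual
remedy. A minimal illustration (not the TestStartup code):

import java.net.InetSocketAddress;
import java.net.ServerSocket;

class EphemeralPortSketch {
  public static void main(String[] args) throws Exception {
    // Port 0 asks the kernel for any free port, so parallel builds cannot
    // collide the way a hard-coded localhost:37146 can.
    try (ServerSocket ss = new ServerSocket()) {
      ss.bind(new InetSocketAddress("localhost", 0));
      System.out.println("bound to free port " + ss.getLocalPort());
    }
  }
}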

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #923

2016-02-19 Thread Apache Jenkins Server
See 

Changes:

[junping_du] Support additional compression levels for GzipCodec. Contributed 
by Ravi

--
[...truncated 5740 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.662 sec - in 
org.apache.hadoop.cli.TestDeleteCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.365 sec - in 
org.apache.hadoop.cli.TestCacheAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.574 sec - in 
org.apache.hadoop.cli.TestAclCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.358 sec - in 
org.apache.hadoop.cli.TestHDFSCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.955 sec - in 
org.apache.hadoop.cli.TestCryptoAdminCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.886 sec - in 
org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.366 sec - in 
org.apache.hadoop.cli.TestErasureCodingCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.921 sec - in 
org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.325 sec - in 
org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.548 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.822 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.038 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.009 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.287 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.864 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.055 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.2 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning:

Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #922

2016-02-19 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-9837) BlockManager#countNodes should be able to detect duplicated internal blocks

2016-02-19 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-9837:
---

 Summary: BlockManager#countNodes should be able to detect 
duplicated internal blocks
 Key: HDFS-9837
 URL: https://issues.apache.org/jira/browse/HDFS-9837
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao


Currently {{BlockManager#countNodes}} only counts the number of
replicas/internal blocks, so it cannot detect the under-replicated scenario
where a striped EC block group has 9 internal blocks but contains duplicated
data/parity blocks. E.g., b8 is missing while two copies of b0 exist:
b0, b1, b2, b3, b4, b5, b6, b7, b0

If the NameNode keeps running, it is able to detect the duplication of b0 and
will put the block into the excess map. {{countNodes}} excludes internal
blocks captured in the excess map and thus can return the correct number of
live replicas. However, if the NN restarts before sending out the
reconstruction command, the missing internal block can no longer be detected.
The following steps reproduce the issue:
# create an EC file
# kill DN1 and wait for the reconstruction to happen
# start DN1 again
# kill DN2 and restart NN immediately
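
For context, a hypothetical sketch (not BlockManager code) of counting that
is aware of internal-block indices, which flags exactly the [b0..b7, b0] case
above:

import java.util.BitSet;

class StripedReplicaCountSketch {
  // Raw replica counting reports 9 live replicas for [b0..b7, b0] even
  // though b8 is missing; counting distinct indices reports 8 instead.
  static int countDistinctInternalBlocks(int[] reportedIndices) {
    BitSet seen = new BitSet();
    for (int idx : reportedIndices) {
      seen.set(idx);  // duplicates of the same index only count once
    }
    return seen.cardinality();
  }

  public static void main(String[] args) {
    int[] reports = {0, 1, 2, 3, 4, 5, 6, 7, 0};  // b8 missing, b0 doubled
    System.out.println(countDistinctInternalBlocks(reports));  // prints 8
  }
}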



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Hdfs-trunk #2850

2016-02-19 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-9836) RequestHedgingInvocationHandler can't be cast to org.apache.hadoop.ipc.RpcInvocationHandler

2016-02-19 Thread Guocui Mi (JIRA)
Guocui Mi created HDFS-9836:
---

 Summary: RequestHedgingInvocationHandler can't be cast to 
org.apache.hadoop.ipc.RpcInvocationHandler
 Key: HDFS-9836
 URL: https://issues.apache.org/jira/browse/HDFS-9836
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.8.0
Reporter: Guocui Mi


RequestHedgingInvocationHandler cannot be cast to 
org.apache.hadoop.ipc.RpcInvocationHandler

Reproduce steps:
1: Set client failover provider as RequestHedgingProxyProvider.

<property>
  <name>dfs.client.failover.proxy.provider.[nameservice]</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider</value>
</property>

2: Run "hdfs fsck /"; it fails with the following exception:
C:\>hdfs fsck /
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/D:/data/hadoop.latest/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/D:/data/hadoop.latest/share/hadoop/yarn/hadoop-yarn-simulator-2.6.0-mt0.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler cannot be cast to org.apache.hadoop.ipc.RpcInvocationHandler
at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:613)
at org.apache.hadoop.io.retry.RetryInvocationHandler.getConnectionId(RetryInvocationHandler.java:281)
at org.apache.hadoop.ipc.RPC.getConnectionIdForProxy(RPC.java:615)
at org.apache.hadoop.ipc.RPC.getServerAddress(RPC.java:598)
at org.apache.hadoop.hdfs.HAUtil.getAddressOfActive(HAUtil.java:380)
at org.apache.hadoop.hdfs.tools.DFSck.getCurrentNamenodeAddress(DFSck.java:248)
at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:255)
at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:148)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:145)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:144)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:360)
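
The failing cast happens in RPC.getConnectionIdForProxy (RPC.java:613 above).
A hedged sketch of the defensive pattern a fix could take; the method body is
illustrative only, not the actual patch:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

import org.apache.hadoop.ipc.Client;
import org.apache.hadoop.ipc.RpcInvocationHandler;

class ConnectionIdLookupSketch {
  // Hypothetical: return null instead of throwing when the proxy's handler
  // (e.g. RequestHedgingInvocationHandler, which fans out to several
  // namenode proxies) is not backed by a single RPC connection.
  static Client.ConnectionId getConnectionIdForProxy(Object proxy) {
    InvocationHandler h = Proxy.getInvocationHandler(proxy);
    if (!(h instanceof RpcInvocationHandler)) {
      return null;
    }
    return ((RpcInvocationHandler) h).getConnectionId();
  }
}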




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9835) OIV: add ReverseXML processor which reconstructs an fsimage from an XML file

2016-02-19 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-9835:
--

 Summary: OIV: add ReverseXML processor which reconstructs an 
fsimage from an XML file
 Key: HDFS-9835
 URL: https://issues.apache.org/jira/browse/HDFS-9835
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


OIV: add a ReverseXML processor which reconstructs an fsimage from an XML
file.  This will make it easy to create fsimages for testing, and to manually
edit fsimages when there is corruption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Looking to a Hadoop 3 release

2016-02-19 Thread Yongjun Zhang
Thanks Andrew for initiating the effort!

+1 on pushing 3.x with an extended alpha cycle, and continuing the more stable
2.x releases.

--Yongjun

On Thu, Feb 18, 2016 at 5:58 PM, Andrew Wang 
wrote:

> Hi Kai,
>
> Sure, I'm open to it. It's a new major release, so we're allowed to make
> these kinds of big changes. The idea behind the extended alpha cycle is
> that downstreams can give us feedback. This way if we do anything too
> radical, we can address it in the next alpha and have downstreams re-test.
>
> Best,
> Andrew
>
> On Thu, Feb 18, 2016 at 5:23 PM, Zheng, Kai  wrote:
>
> > Thanks Andrew for driving this. Wonder if it's a good chance for
> > HADOOP-12579 (Deprecate and remove WriteableRPCEngine) to be in. Note
> it's
> > not an incompatible change, but feel better to be done in the major
> release.
> >
> > Regards,
> > Kai
> >
> > -Original Message-
> > From: Andrew Wang [mailto:andrew.w...@cloudera.com]
> > Sent: Friday, February 19, 2016 7:04 AM
> > To: hdfs-dev@hadoop.apache.org; Kihwal Lee 
> > Cc: mapreduce-...@hadoop.apache.org; common-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org
> > Subject: Re: Looking to a Hadoop 3 release
> >
> > Hi Kihwal,
> >
> > I think there's still value in continuing the 2.x releases. 3.x comes
> with
> > the incompatible bump to a JDK8 runtime, and also the fact that 3.x won't
> > be beta or GA for some number of months. In the meanwhile, it'd be good
> to
> > keep putting out regular, stable 2.x releases.
> >
> > Best,
> > Andrew
> >
> >
> > On Thu, Feb 18, 2016 at 2:50 PM, Kihwal Lee
> > wrote:
> >
> > > Moving Hadoop 3 forward sounds fine. If EC is one of the main
> > > motivations, are we getting rid of branch-2.8?
> > >
> > > Kihwal
> > >
> > >   From: Andrew Wang 
> > >  To: "common-...@hadoop.apache.org" 
> > > Cc: "yarn-...@hadoop.apache.org" ; "
> > > mapreduce-...@hadoop.apache.org" ;
> > > hdfs-dev 
> > >  Sent: Thursday, February 18, 2016 4:35 PM
> > >  Subject: Re: Looking to a Hadoop 3 release
> > >
> > > Hi all,
> > >
> > > Reviving this thread. I've seen renewed interest in a trunk release
> > > since HDFS erasure coding has not yet made it to branch-2. Along with
> > > JDK8, the shell script rewrite, and many other improvements, I think
> > > it's time to revisit Hadoop 3.0 release plans.
> > >
> > > My overall plan is still the same as in my original email: a series of
> > > regular alpha releases leading up to beta and GA. Alpha releases make
> > > it easier for downstreams to integrate with our code, and making them
> > > regular means features can be included when they are ready.
> > >
> > > I know there are some incompatible changes waiting in the wings (i.e.
> > > HDFS-6984 making FileStatus a PB rather than Writable, some of
> > > HADOOP-9991 bumping dependency versions) that would be good to get in.
> > > If you have changes like this, please set the target version to 3.0.0
> > > and mark them "Incompatible". We can use this JIRA query to track:
> > >
> > >
> > > https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20and%20%22Target%20Version%2Fs%22%20%3D%20%223.0.0%22%20and%20resolution%3D%22unresolved%22%20and%20%22Hadoop%20Flags%22%3D%22Incompatible%20change%22%20order%20by%20priority
> > >
> > > There's some release-related stuff that needs to be sorted out
> > > (namely, the new CHANGES.txt and release note generation from Yetus),
> > > but I'd tentatively like to roll the first alpha a month out, so third
> > > week of March.
> > >
> > > Best,
> > > Andrew
> > >
> > > On Mon, Mar 9, 2015 at 7:23 PM, Raymie Stata 
> > wrote:
> > >
> > > > Avoiding the use of JDK8 language features (and, presumably, APIs)
> > > > means you've abandoned #1, i.e., you haven't (really) bumped the JDK
> > > > source version to JDK8.
> > > >
> > > > Also, note that releasing from trunk is a way of achieving #3, it's
> > > > not a way of abandoning it.
> > > >
> > > >
> > > >
> > > > On Mon, Mar 9, 2015 at 7:10 PM, Andrew Wang
> > > > 
> > > > wrote:
> > > > > Hi Raymie,
> > > > >
> > > > > Konst proposed just releasing off of trunk rather than cutting a
> > > > branch-2,
> > > > > and there was general agreement there. So, consider #3 abandoned.
> > > > > 1&2
> > > can
> > > > > be achieved at the same time, we just need to avoid using JDK8
> > > > > language features in trunk so things can be backported.
> > > > >
> > > > > Best,
> > > > > Andrew
> > > > >
> > > > > On Mon, Mar 9, 2015 at 7:01 PM, Raymie Stata
> > > > > 
> > > > wrote:
> > > > >
> > > > >> In this (and the related threads), I see the following three
> > > > requirements:
> > > > >>
> > > > >> 1. "Bump the source JDK version to JDK8" (ie, drop JDK7 support).
> > > > >>
> > > > >> 2. "We'll still be releasing 2.x releases for a while, with
> > > > >> similar feature sets as 3.x."
> > > > >>
> > > > >> 3. Avoid the "risk of split-brain behavior" by "minimize
> > > > >> backporting headaches. Pulling t

[jira] [Created] (HDFS-9834) OzoneHandler : Enable MiniDFSCluster based testing for Ozone

2016-02-19 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-9834:
--

 Summary: OzoneHandler : Enable MiniDFSCluster based testing for 
Ozone
 Key: HDFS-9834
 URL: https://issues.apache.org/jira/browse/HDFS-9834
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anu Engineer
Assignee: Anu Engineer


This patch uses a local directory to store Ozone objects and allows
MiniDFSCluster-based testing of Ozone's REST protocol.
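
For readers unfamiliar with the pattern, a MiniDFSCluster test scaffold looks
roughly like this; the Ozone-specific configuration keys are omitted since
the patch itself defines them:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();  // REST calls against the cluster would go here
    } finally {
      cluster.shutdown();
    }
  }
}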



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Looking to a Hadoop 3 release

2016-02-19 Thread Ravi Prakash
+1 for the plan to start cutting 3.x alpha releases. Thanks for the
initiative Andrew!

On Fri, Feb 19, 2016 at 6:19 AM, Steve Loughran 
wrote:

>
> > On 19 Feb 2016, at 11:27, Dmitry Sivachenko  wrote:
> >
> >
> >> On 19 Feb 2016, at 01:35, Andrew Wang  wrote:
> >>
> >> Hi all,
> >>
> >> Reviving this thread. I've seen renewed interest in a trunk release
> since
> >> HDFS erasure coding has not yet made it to branch-2. Along with JDK8,
> the
> >> shell script rewrite, and many other improvements, I think it's time to
> >> revisit Hadoop 3.0 release plans.
> >>
> >
>
> It's time to start ... I suspect it'll take a while to stabilise. I look
> forward to the new shell scripts already
>
> One thing I do want there is for all the alpha releases to make clear that
> there are no compatibility policies here; protocols may change and there is
> no requirement of the first 3.x release to be compatible with all the 3.0.x
> alphas. That's something we missed out on the 2.0.x-alpha process, or at
> least not repeated often enough.
>
> >
> > Hello,
> >
> > any chance IPv6 support (HADOOP-11890) will be finished before 3.0 comes
> out?
> >
> > Thanks!
> >
> >
>
> sounds like a good time for a status update on the FB work —and anything
> people can do to test it would be appreciated by all. That includes testing
> on ipv4 systems, and especially, IPv4/v6 systems with Kerberos turned on
> and both MIT and AD kerberos servers. At the same time, IPv6 support ought
> to be something that could be added in.
>
>
> I don't have any opinions on timescale, but
>
> +1 to anything related to classpath isolation
> +1 to a careful bump of versions of dependencies.
> +1 to fixing the outstanding Java 8 migration issues, especially the big
> Jersey patch that's just been updated.
> +1 to switching to JIRA-created release notes
>
> Having been doing the slider releases recently, it's clear to me that you
> can do a lot in automating the release process itself. All those steps in
> the release runbook can be turned into targets in a special ant release.xml
> build file, calling maven, gpg, etc.
>
> I think doing something like this for 3.0 will significantly benefit both
> the release phase here and future releases
>
> This is the slider one:
> https://github.com/apache/incubator-slider/blob/develop/bin/release.xml
>
> It doesn't replace maven, instead it choreographs that along with all the
> other steps: signing and checksumming artifacts, publishing them, voting
>
> it includes
>  -refusing to release if the git repo is modified
>  -making the various git branch/tag/push operations
>  -issuing the various mvn versions:update commands
>  -signing
>  -publishing via asf SVN
>  -using GET calls to verify the artifacts made it
>  -generating the vote and vote result emails (it even counts the votes)
>
> I recommend this is included as part of the release process. It does make
> a difference; we can now cut new releases with no human intervention other
> than editing a properties file and running different targets as the process
> goes through its release and vote phases.
>
> -Steve


Re: Looking to a Hadoop 3 release

2016-02-19 Thread Steve Loughran

> On 19 Feb 2016, at 11:27, Dmitry Sivachenko  wrote:
> 
> 
>> On 19 Feb 2016, at 01:35, Andrew Wang  wrote:
>> 
>> Hi all,
>> 
>> Reviving this thread. I've seen renewed interest in a trunk release since
>> HDFS erasure coding has not yet made it to branch-2. Along with JDK8, the
>> shell script rewrite, and many other improvements, I think it's time to
>> revisit Hadoop 3.0 release plans.
>> 
> 

It's time to start ... I suspect it'll take a while to stabilise. I look 
forward to the new shell scripts already

One thing I do want there is for all the alpha releases to make clear that 
there are no compatibility policies here; protocols may change and there is no 
requirement of the first 3.x release to be compatible with all the 3.0.x 
alphas. That's something we missed out on the 2.0.x-alpha process, or at least 
not repeated often enough.

> 
> Hello,
> 
> any chance IPv6 support (HADOOP-11890) will be finished before 3.0 comes out?
> 
> Thanks!
> 
> 

sounds like a good time for a status update on the FB work —and anything people 
can do to test it would be appreciated by all. That includes testing on ipv4 
systems, and especially, IPv4/v6 systems with Kerberos turned on and both MIT 
and AD kerberos servers. At the same time, IPv6 support ought to be something 
that could be added in.


I don't have any opinions on timescale, but

+1 to anything related to classpath isolation
+1 to a careful bump of versions of dependencies.
+1 to fixing the outstanding Java 8 migration issues, especially the big Jersey 
patch that's just been updated.
+1 to switching to JIRA-created release notes

Having been doing the slider releases recently, it's clear to me that you can 
do a lot in automating the release process itself. All those steps in the 
release runbook can be turned into targets in a special ant release.xml build 
file, calling maven, gpg, etc.

I think doing something like this for 3.0 will significantly benefit both the 
release phase here and future releases

This is the slider one: 
https://github.com/apache/incubator-slider/blob/develop/bin/release.xml

It doesn't replace maven, instead it choreographs that along with all the other 
steps: signing and checksumming artifacts, publishing them, voting

it includes
 -refusing to release if the git repo is modified
 -making the various git branch/tag/push operations
 -issuing the various mvn versions:update commands
 -signing
 -publishing via asf SVN 
 -using GET calls to verify the artifacts made it
 -generating the vote and vote result emails (it even counts the votes)

I recommend this is included as part of the release process. It does make a 
difference; we can now cut new releases with no human intervention other than 
editing a properties file and running different targets as the process goes 
through its release and vote phases.

-Steve