Hadoop-Hdfs-trunk-Java8 - Build # 1311 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1311/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5480 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client .. SUCCESS [04:04 min]
[INFO] Apache Hadoop HDFS . FAILURE [  01:08 h]
[INFO] Apache Hadoop HDFS Native Client ... SKIPPED
[INFO] Apache Hadoop HttpFS ... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
[INFO] Apache Hadoop HDFS-NFS . SKIPPED
[INFO] Apache Hadoop HDFS Project . SUCCESS [  0.111 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:12 h
[INFO] Finished at: 2016-06-07T05:57:49+00:00
[INFO] Final Memory: 95M/3892M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/source/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover

Error Message:
expected:<10> but was:<9>

Stack Trace:
java.lang.AssertionError: expected:<10> but was:<9>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.TestAsyncHDFSWithHA.testAsyncWithHAFailover(TestAsyncHDFSWithHA.java:163)

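Assertion failures of this shape right after a failover are usually a race between the test thread and asynchronous completion. A common remedy in Hadoop tests is to poll for the expected state instead of asserting once; a minimal sketch using GenericTestUtils.waitFor (the Guava Supplier signature matches the test utilities of this era, and completedCount is a hypothetical stand-in for whatever the real test counts):

    import java.util.concurrent.atomic.AtomicInteger;
    import com.google.common.base.Supplier;
    import org.apache.hadoop.test.GenericTestUtils;
    import org.junit.Assert;
    import org.junit.Test;

    public class AsyncAssertSketch {
      @Test
      public void waitThenAssert() throws Exception {
        final AtomicInteger completedCount = new AtomicInteger(10); // hypothetical
        // Poll every 100 ms, for up to 60 s, instead of asserting immediately:
        GenericTestUtils.waitFor(new Supplier<Boolean>() {
          @Override
          public Boolean get() {
            return completedCount.get() == 10;
          }
        }, 100, 60000);
        // Only assert once the condition has had time to become true.
        Assert.assertEquals(10, completedCount.get());
      }
    }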

FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testAutoFormatEmptyDirectory

Error Message:
Problem binding to [localhost:49056] java.net.BindException: Address already in 
use; For more details see:  

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1311

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[Arun Suresh] YARN-5185. StageAllocaterGreedyRLE: Fix NPE in corner case. (Carlo

[Arun Suresh] YARN-4525. Fix bug in 
RLESparseResourceAllocation.getRangeOverlapping().

--
[...truncated 5283 lines...]
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.131 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.507 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.586 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestDataTransferProtocol
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.543 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.325 sec - in 
org.apache.hadoop.hdfs.TestDataTransferProtocol
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.716 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.977 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.926 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 183.175 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.804 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.153 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.375 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.406 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.25 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.567 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.util.TestStripedBlockUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.989 sec - in 
org.apache.hadoop.hdfs.util.TestStripedBlockUtil
Running org.apache.hadoop.hdfs.util.TestCombinedHostsFileReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.333 sec - in 
org.apache.hadoop.hdfs.util.TestCombinedHostsFileReader
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.072 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.509 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.385 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.949 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.223 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 1, Failures: 0, Errors: 

Cannot create release artifacts for branch-2.8

2016-06-06 Thread Wangda Tan
Hi Hadoop Devs,

As you know, we have been pushing the 2.8.0 release recently, and there are a
couple of issues that block creating release artifacts from the source code.

I tried the following approaches:
1) Run build through Hadoop Jenkins Job:
https://builds.apache.org/job/HADOOP2_Release_Artifacts_Builder/
2) Run dev-support/create-release.sh

From what I can see, two issues cause the problem:
1) https://issues.apache.org/jira/browse/HADOOP-12022 removed
releasenotes.html
2) https://issues.apache.org/jira/browse/HADOOP-11792 removed all
CHANGES.txt

I have tried reverting HADOOP-12022/HADOOP-11792 locally in branch-2.8;
create-release.sh can then run through and generate docs/artifacts correctly
(at least the layout looks correct; I haven't verified the generated bits).

To make sure releases are not blocked, we have a couple of options:
a. Fix HADOOP-12892 and related issues, which requires backporting a couple
of commits that are marked incompatible.
b. Revert both of the commits, and fix CHANGES.txt manually.

Any help/suggestions are welcome.

Thanks,
Wangda


[jira] [Created] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-06 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10493:
--

 Summary: Add links to datanode web UI in namenode datanodes page
 Key: HDFS-10493
 URL: https://issues.apache.org/jira/browse/HDFS-10493
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, ui
Reporter: Weiwei Yang


HDFS-10440 makes some improvements to the datanode UI. It would be good to 
provide links from the namenode's datanodes information page to each individual 
datanode UI, so that more datanode information can be checked easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Hadoop-Hdfs-trunk-Java8 - Build # 1310 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1310/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5458 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client .. SUCCESS [04:12 min]
[INFO] Apache Hadoop HDFS . FAILURE [  01:10 h]
[INFO] Apache Hadoop HDFS Native Client ... SKIPPED
[INFO] Apache Hadoop HttpFS ... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
[INFO] Apache Hadoop HDFS-NFS . SKIPPED
[INFO] Apache Hadoop HDFS Project . SUCCESS [  0.091 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:15 h
[INFO] Finished at: 2016-06-07T01:01:00+00:00
[INFO] Final Memory: 95M/3885M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/source/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture

Error Message:
expected:<17> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<17> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency.testGenerationStampInFuture(TestNameNodeMetadataConsistency.java:113)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.testCheckpointStartingMidEditsFile[0]

Error Message:
Expected non-empty 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1310

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[zhz] HDFS-10458. getFileEncryptionInfo should return quickly for

--
[...truncated 5261 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.552 sec - in 
org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.717 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.962 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.769 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.701 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.568 sec - in 
org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.337 sec - in 
org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.401 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.477 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.227 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.407 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 157.016 sec - 
in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.668 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.843 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.084 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.608 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.346 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.341 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.734 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.784 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestLeaseRecoveryStriped
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.124 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.743 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.061 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecoveryStriped
Running org.apache.hadoop.hdfs.client.impl.TestBlockReaderRemote2
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.328 sec - in 
org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy
Running org.apache.hadoop.hdfs.client.impl.TestClientBlockVerification
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.876 sec - in 
org.apache.hadoop.hdfs.client.impl.TestBlockReaderRemote2
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.023 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.client.impl.TestBlockReaderRemote
Running 

Hadoop-Hdfs-trunk-Java8 - Build # 1309 - Still Failing

2016-06-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1309/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5967 lines...]
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client .. SUCCESS [04:05 min]
[INFO] Apache Hadoop HDFS . FAILURE [58:53 min]
[INFO] Apache Hadoop HDFS Native Client ... SKIPPED
[INFO] Apache Hadoop HttpFS ... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
[INFO] Apache Hadoop HDFS-NFS . SKIPPED
[INFO] Apache Hadoop HDFS Project . SUCCESS [  0.221 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:03 h
[INFO] Finished at: 2016-06-06T22:48:52+00:00
[INFO] Final Memory: 95M/3646M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/source/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Recording test results
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3
Setting 
LATEST1_8_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
Setting 
MAVEN_3_3_3_HOME=/home/jenkins/jenkins-slave/tools/hudson.tasks.Maven_MavenInstallation/maven-3.3.3



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty

Error Message:
Port is already in use; giving up after 10 times.

Stack Trace:
java.io.IOException: Port is already in use; giving up after 10 times.
at 
org.apache.hadoop.net.ServerSocketUtil.waitForPort(ServerSocketUtil.java:98)
at 
org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:571)

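Bind failures like this typically mean two tests on the same Jenkins slave raced for a fixed port. Where a fixed port is not essential to the test, a common mitigation is to let the OS assign an ephemeral one; a sketch of the general pattern (an illustration only, not a claim about this particular test, which deliberately exercises a URI with the port elided):

    import java.io.IOException;
    import java.net.ServerSocket;

    public class FreePortSketch {
      // Ask the OS for a free ephemeral port by binding to port 0. The port
      // can still be taken by another process between probe and use, so
      // callers should retry on bind failure rather than treat this as a
      // guarantee.
      static int pickFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {
          return probe.getLocalPort();
        }
      }
    }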

FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancing2OutOf3Blockpools

Error Message:
Creating block, no free space available

Stack Trace:
java.io.IOException: Creating block, no free space available
at 
org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset$BInfo.&lt;init&gt;(SimulatedFSDataset.java:147)
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1309

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[mingma] MAPREDUCE-5044. Have AM trigger jstack on task attempts that timeout

[stevel] HADOOP-12807 S3AFileSystem should read AWS credentials from environment

--
[...truncated 5770 lines...]
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR: Could not install MAVEN_3_3_3_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:947)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:390)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:577)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1453)
at hudson.model.AbstractProject.poll(AbstractProject.java:1356)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:526)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:555)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.944 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.081 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 10.601 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.384 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.846 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.098 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.906 sec - in 
org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.408 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.5 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.412 sec - in 
org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, 

Re: Why there are so many revert operations on trunk?

2016-06-06 Thread larry mccay
inline


On Mon, Jun 6, 2016 at 4:36 PM, Vinod Kumar Vavilapalli 
wrote:

> Folks,
>
> It is truly disappointing how we are escalating situations that can be
> resolved through basic communication.
>
> Things that shouldn’t have happened
> - After a few objections were raised, commits should have simply stopped,
> restarting only after consensus was reached
> - Reverts (or revert and move to a feature-branch) shouldn’t have been
> unequivocally done without dropping a note / informing everyone / building
> consensus. And no, not even a release-manager gets this free pass. Not on
> branch-2, not on trunk, not anywhere.
> - Freaking out on -1’s and reverts - we as a community need to be less
> stigmatic about -1s / reverts.
>
>
Agreed.


> Trunk releases:
> This is the other important bit: the huge difference in
> expectations between the two sides w.r.t. trunk and branching. Till now,
> we’ve never made releases out of trunk. So in-progress features that people
> deemed to not need a feature branch could go into trunk without much
> trouble. Given that we are now making releases off trunk, I can see (a) the
> RM saying “no, don’t put in-progress stuff in” and (b) the contributors saying
> “no, we don’t want the overhead of a branch”. I’ve raised related topics
> (but only focusing on incompatible changes) before -
> http://markmail.org/message/m6x73t6srlchywsn - but we never decided
> anything.
>
> We need, at the least, to force a reset of expectations w.r.t. how trunk, and
> the small / medium / incompatible changes on it, are treated. We should hold
> off making a release off trunk until this gets fully discussed in the
> community and we all reach a consensus.
>

+1

In essence, moving commits to a feature branch so that we can release
from trunk creates a "trunk-branch". :)


> > * Without a user API, there's no way for people to use it, so not much
> > advantage to having it in a release
> >
> > Since the code is separate and probably won't break any existing code, I
> > won't -1 if you want to include this in a release without a user API, but
> > again, I question the utility of including code that can't be used.
>
> Clearly, there are two sides to this argument. One side claims the absence
> of user-facing public / stable APIs, and that for all practical purposes this
> is dead code for everyone other than the few early adopters who want to
> experiment with it. The other argument is to not put this code in before a
> user API. Again, I’d discuss with fellow community members before making
> what the other side perceives as unacceptable moves.
>
> From the 2.8.0 perspective, it shouldn’t have landed there in the first place
> - I have been pushing for a release for a while with help only from a few
> members of the community. But if you say that it has no material impact on
> the user story - a by-default switched-off feature that *doesn’t*
> destabilize the core release - I’d be willing to let it pass.
>
> +Vinod


[jira] [Created] (HDFS-10492) libhdfs++: Clean up minidfscluster tests

2016-06-06 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10492:
--

 Summary: libhdfs++: Clean up minidfscluster tests
 Key: HDFS-10492
 URL: https://issues.apache.org/jira/browse/HDFS-10492
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


A couple of things need to be fixed with the minidfscluster tests:

-Tests that use hdfs_ext shouldn't live in libhdfs-tests; any test in there 
should be shareable between both libraries.

-Tests added in HDFS-9890 rely on NDEBUG to turn on error simulation; ideally 
these should have some other switch so we can run error simulation on release 
builds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Vinod Kumar Vavilapalli
Folks,

It is truly disappointing how we are escalating situations that can be resolved 
through basic communication.

Things that shouldn’t have happened
- After a few objections were raised, commits should have simply stopped, 
restarting only after consensus was reached
- Reverts (or revert and move to a feature-branch) shouldn’t have been 
unequivocally done without dropping a note / informing everyone / building 
consensus. And no, not even a release-manager gets this free pass. Not on 
branch-2, not on trunk, not anywhere.
- Freaking out on -1’s and reverts - we as a community need to be less 
stigmatic about -1s / reverts.

Trunk releases:
This is the other important bit: the huge difference in expectations 
between the two sides w.r.t. trunk and branching. Till now, we’ve never made 
releases out of trunk. So in-progress features that people deemed to not need a 
feature branch could go into trunk without much trouble. Given that we are now 
making releases off trunk, I can see (a) the RM saying “no, don’t put 
in-progress stuff in” and (b) the contributors saying “no, we don’t want the 
overhead of a branch”. I’ve raised related topics (but only focusing on 
incompatible changes) before - http://markmail.org/message/m6x73t6srlchywsn - 
but we never decided anything.

We need, at the least, to force a reset of expectations w.r.t. how trunk, and 
the small / medium / incompatible changes on it, are treated. We should hold 
off making a release off trunk until this gets fully discussed in the community 
and we all reach a consensus.

> * Without a user API, there's no way for people to use it, so not much
> advantage to having it in a release
> 
> Since the code is separate and probably won't break any existing code, I
> won't -1 if you want to include this in a release without a user API, but
> again, I question the utility of including code that can't be used.

Clearly, there are two sides to this argument. One side claims the absence of 
user-facing public / stable APIs, and that for all practical purposes this is 
dead code for everyone other than the few early adopters who want to experiment 
with it. The other argument is to not put this code in before a user API. 
Again, I’d discuss with fellow community members before making what the other 
side perceives as unacceptable moves.

From the 2.8.0 perspective, it shouldn’t have landed there in the first place - 
I have been pushing for a release for a while with help only from a few members 
of the community. But if you say that it has no material impact on the user 
story - a by-default switched-off feature that *doesn’t* destabilize the core 
release - I’d be willing to let it pass.

+Vinod

Re: Why there are so many revert operations on trunk?

2016-06-06 Thread larry mccay
This seems like something that is probably going to happen again if we
continue to cut releases from trunk.
I know that this has been discussed at length in a separate thread but I
think it would be good to recognize that it is the core of the issue here.

Either we:

* need to define what will happen on trunk in such circumstances and
clearly communicate an action before taking it on the dev@ list or
* we need to not introduce this sort of thrashing on trunk by releasing
from it directly

My humble 2 cents...

--larry


On Mon, Jun 6, 2016 at 1:56 PM, Andrew Wang 
wrote:

> To clarify what happened here, I moved the commits to a feature branch, not
> just reverting the commits. The intent was to make it easy to merge back in
> later, and also to unblock the 2.8 and 3.0 releases we've been trying very
> hard to wrap up for weeks. This doesn't slow down development since you can
> keep committing to a branch, and I did the git work to make it easy to
> merge back in later. I'm also happy to review the merge if the concern is
> getting three +1s.
>
> In the comments on HDFS-9924, you can see comments from a month ago raising
> concerns about the API and also that this significant expansion of the HDFS
> API is being done on release branches. There is an explicit -1 on continued
> commits to trunk, and a request to move the commits to a feature branch.
> Similar concerns have been raised by multiple contributors on that JIRA.
> Yet, the commits remained in release branches, and new patches continued to
> be committed to release branches.
>
> There's no need to attribute malicious intent to slow down feature
> development; for some reason I keep seeing this accusation thrown around
> when there are many people chiming in on HDFS-9924 with concerns about the
> feature. Considering how it's expanding the HDFS API, this is also the kind
> of work that should go through a merge vote anyway to get more eyes on it.
>
> We've been converging on the API requirements, but until the user-facing
> API is ready, I don't see the advantage of having this code in a release
> branch. As noted by the contributors on this JIRA, it's new separate code,
> so there's little to no overhead to keeping a feature branch in sync.
>
> So, to sum it up, I moved these commits to a branch because:
>
> * The discussion about the user API is still ongoing, and there is
> currently no user-facing API
> * We are very late in the 2.8 and 3.0 release cycles, trying to do blocker
> burndown
> * This code is separate and thus easy to keep in sync on a branch and merge
> in later
> * Without a user API, there's no way for people to use it, so not much
> advantage to having it in a release
>
> Since the code is separate and probably won't break any existing code, I
> won't -1 if you want to include this in a release without a user API, but
> again, I question the utility of including code that can't be used.
>
> Thanks,
> Andrew
>


[jira] [Resolved] (HDFS-10484) Can not read file from java.io.IOException: Need XXX bytes, but only YYY bytes available

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10484.
---
Resolution: Cannot Reproduce

> Can not read file from java.io.IOException: Need XXX bytes, but only YYY  
> bytes available
> -
>
> Key: HDFS-10484
> URL: https://issues.apache.org/jira/browse/HDFS-10484
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
> Environment: Cloudera 4.1.2,  hadoop-hdfs-2.0.0+552-1.cdh4.1.2.p0.27
>Reporter: pt
>
> We are running the CDH 4.1.2 distro and trying to read a file from HDFS. It 
> ends up with an exception at the datanode saying:
> 2016-06-02 10:43:26,354 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DatanodeRegistration(X.X.X.X, 
> storageID=DS-404876644-X.X.X.X-50010-1462535537579, infoPort=50075, 
> ipcPort=50020, storageInfo=lv=-40;cid=cluster18;nsid=2115086255;c=0):Got 
> exception while serving 
> BP-2091182050-X.X.X.X-1358362115729:blk_5037101550399368941_420502314 to 
> /X.X.X.X:58614
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.&lt;init&gt;(BlockSender.java:189)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> at java.lang.Thread.run(Thread.java:662)
> 2016-06-02 10:43:26,354 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> app112.rutarget.ru:50010:DataXceiver error processing READ_BLOCK operation 
> src: /X.X.X.X:58614 dest: /X.X.X.X:50010
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.&lt;init&gt;(BlockSender.java:189)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> at java.lang.Thread.run(Thread.java:662)
> FSCK shows the file as being open for write; however, the hdfs client that 
> handles writes to this file closed it a long time ago, so the file has been 
> stuck in RBW for the last few days. How can we get the actual data block in 
> this case? I found only the binary .meta file on the datanode, but not the 
> actual block with data.
> -- 

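For a file stuck open for write like this, one possible avenue is to trigger lease recovery from a client, which asks the NameNode to close the file and finalize the replica; a sketch against the DistributedFileSystem API (the path is a placeholder, and whether this recovers the block on a cluster as old as CDH 4.1.2 is untested):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class RecoverLeaseSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
          DistributedFileSystem dfs = (DistributedFileSystem) fs;
          // true means the file is now closed; false means recovery is in
          // progress and the call can be repeated until it succeeds.
          boolean closed = dfs.recoverLease(new Path("/path/to/stuck/file"));
          System.out.println("lease recovery complete: " + closed);
        }
      }
    }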


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Andrew Wang
To clarify what happened here, I moved the commits to a feature branch, not
just reverting the commits. The intent was to make it easy to merge back in
later, and also to unblock the 2.8 and 3.0 releases we've been trying very
hard to wrap up for weeks. This doesn't slow down development since you can
keep committing to a branch, and I did the git work to make it easy to
merge back in alter. I'm also happy to review the merge if the concern is
getting three +1s.

In the comments on HDFS-9924, you can see comments from a month ago raising
concerns about the API and also that this significant expansion of the HDFS
API is being done on release branches. There is an explicit -1 on continued
commits to trunk, and a request to move the commits to a feature branch.
Similar concerns have been raised by multiple contributors on that JIRA.
Yet, the commits remained in release branches, and new patches continued to
be committed to release branches.

There's no need to attribute malicious intent to slow down feature
development; for some reason I keep seeing this accusation thrown around
when there are many people chiming in on HDFS-9924 with concerns about the
feature. Considering how it's expanding the HDFS API, this is also the kind
of work that should go through a merge vote anyway to get more eyes on it.

We've been converging on the API requirements, but until the user-facing
API is ready, I don't see the advantage of having this code in a release
branch. As noted by the contributors on this JIRA, it's new separate code,
so there's little to no overhead to keeping a feature branch in sync.

So, to sum it up, I moved these commits to a branch because:

* The discussion about the user API is still ongoing, and there is
currently no user-facing API
* We are very late in the 2.8 and 3.0 release cycles, trying to do blocker
burndown
* This code is separate and thus easy to keep in sync on a branch and merge
in later
* Without a user API, there's no way for people to use it, so not much
advantage to having it in a release

Since the code is separate and probably won't break any existing code, I
won't -1 if you want to include this in a release without a user API, but
again, I question the utility of including code that can't be used.

Thanks,
Andrew


Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Jitendra Pandey
Colin raised the -1 demanding a design document. The document was added the 
very next day, and there were constructive discussions on the design. There was 
a demand for listenable futures, or futures with callbacks, which was accepted. 
With the rest of the work completed, there was no need to revert. Andrew’s 
objection was primarily against releasing in 2.8 without the aforementioned 
change in API, which is reasonable; IMO, it should be possible to make the 
above improvement within the 2.8 timeline. 

On Jun 6, 2016, at 10:13 AM, Chris Douglas  wrote:

> Reading through HDFS-9924, a request for a design doc- and a -1 on
> committing to trunk- was raised in mid-May, but commits to trunk
> continued. Why is that? Shouldn't this have paused while the details
> were discussed? Branching is neutral to the pace of feature
> development, but consensus on the result is required. Working through
> possibilities in a branch- or in multiple branches- seems like a
> reasonable way to determine which approach has support and code to
> back it.
> 
> Reverting code is not "illegal"; the feature will be in/out of any
> release by appealing to bylaws. Our rules exist to facilitate
> consensus, not declare it a fait accompli.
> 
> An RM only exists by creating an RC. Someone can declare themselves
> Grand Marshal of trunk and stomp around in a fancy hat, but it
> doesn't affect anything. -C
> 
> 
> On Mon, Jun 6, 2016 at 9:36 AM, Junping Du  wrote:
>> Thanks Aaron for pointing it out. I didn't see any consensus on HDFS-9924, so 
>> I think we should bring it here to a broader audience for more discussion.
>> 
>> I saw several very bad practices here:
>> 
>> 1. A committer (no need to say who) reverted all commits from trunk without 
>> reaching consensus with all the related contributors/committers.
>> 
>> 2. Someone's comments on feature branches are very misleading... If I 
>> remember correctly, feature development doesn't have to go through a feature 
>> branch, which is just an optional process. As for this creative process of 
>> feature branches and branch committers - I believe the intention is to 
>> accelerate feature development, not to slow it down.
>> 
>> 3. Someone (again, no need to say who) seems to claim to be the RM for 
>> trunk. I don't think we need any RM for trunk. Even for the RM of 
>> 3.0.0-alpha, I think we need someone else who demonstrates that he/she is 
>> more responsible, works hard and carefully, and communicates openly with the 
>> whole community. Only through this is the success of Hadoop in the 3.0 era 
>> guaranteed.
>> 
>> 
>> Thanks,
>> 
>> 
>> Junping
>> 
>> 
>> 
>> From: Aaron T. Myers 
>> Sent: Monday, June 06, 2016 4:46 PM
>> To: Junping Du
>> Cc: Andrew Wang; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
>> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
>> Subject: Re: Why there are so many revert operations on trunk?
>> 
>> Junping,
>> 
>> All of this is being discussed on HDFS-9924. Suggest you follow the 
>> conversation there.
>> 
>> --
>> Aaron T. Myers
>> Software Engineer, Cloudera
>> 
>> On Mon, Jun 6, 2016 at 7:20 AM, Junping Du 
>> > wrote:
>> Hi Andrew,
>> 
>> I just noticed you reverted 8 commits on trunk last Friday:
>> 
>> HADOOP-13226
>> 
>> HDFS-10430
>> 
>> HDFS-10431
>> 
>> HDFS-10390
>> 
>> HADOOP-13168
>> 
>> HDFS-10390
>> 
>> HADOOP-13168
>> 
>> HDFS-10346
>> 
>> HADOOP-12957
>> 
>> HDFS-10224
>> 
>>   And I didn't see any comments from you on JIRA or in the email discussion 
>> before you did this. I don't think we are legally allowed to do this even as 
>> a committer/PMC member. Can you explain what your intention was in doing this?
>> 
>>   BTW, thanks to Nicolas for reverting all these "illegal" revert operations.
>> 
>> 
>> 
>> Thanks,
>> 
>> 
>> Junping
>> 
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 
> 


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Chris Douglas
Reading through HDFS-9924, a request for a design doc- and a -1 on
committing to trunk- was raised in mid-May, but commits to trunk
continued. Why is that? Shouldn't this have paused while the details
were discussed? Branching is neutral to the pace of feature
development, but consensus on the result is required. Working through
possibilities in a branch- or in multiple branches- seems like a
reasonable way to determine which approach has support and code to
back it.

Reverting code is not "illegal"; the feature will be in/out of any
release by appealing to bylaws. Our rules exist to facilitate
consensus, not declare it a fait accompli.

An RM only exists by creating an RC. Someone can declare themselves
Grand Marshal of trunk and stomp around in a fancy hat, but it
doesn't affect anything. -C


On Mon, Jun 6, 2016 at 9:36 AM, Junping Du  wrote:
> Thanks Aaron for pointing it out. I didn't see any consensus on HDFS-9924, so 
> I think we should bring it here to a broader audience for more discussion.
>
> I saw several very bad practices here:
>
> 1. A committer (no need to say who) reverted all commits from trunk without 
> reaching consensus with all the related contributors/committers.
>
> 2. Someone's comments on feature branches are very misleading... If I 
> remember correctly, feature development doesn't have to go through a feature 
> branch, which is just an optional process. As for this creative process of 
> feature branches and branch committers - I believe the intention is to 
> accelerate feature development, not to slow it down.
>
> 3. Someone (again, no need to say who) seems to claim to be the RM for 
> trunk. I don't think we need any RM for trunk. Even for the RM of 
> 3.0.0-alpha, I think we need someone else who demonstrates that he/she is 
> more responsible, works hard and carefully, and communicates openly with the 
> whole community. Only through this is the success of Hadoop in the 3.0 era 
> guaranteed.
>
>
> Thanks,
>
>
> Junping
>
>
> 
> From: Aaron T. Myers 
> Sent: Monday, June 06, 2016 4:46 PM
> To: Junping Du
> Cc: Andrew Wang; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: Re: Why there are so many revert operations on trunk?
>
> Junping,
>
> All of this is being discussed on HDFS-9924. Suggest you follow the 
> conversation there.
>
> --
> Aaron T. Myers
> Software Engineer, Cloudera
>
> On Mon, Jun 6, 2016 at 7:20 AM, Junping Du 
> > wrote:
> Hi Andrew,
>
>  I just noticed you reverted 8 commits on trunk last Friday:
>
> HADOOP-13226
>
> HDFS-10430
>
> HDFS-10431
>
> HDFS-10390
>
> HADOOP-13168
>
> HDFS-10390
>
> HADOOP-13168
>
> HDFS-10346
>
> HADOOP-12957
>
> HDFS-10224
>
>And I didn't see any comments from you on JIRA or in the email discussion 
> before you did this. I don't think we are legally allowed to do this even as 
> a committer/PMC member. Can you explain what your intention was in doing this?
>
>BTW, thanks to Nicolas for reverting all these "illegal" revert operations.
>
>
>
> Thanks,
>
>
> Junping
>

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Junping Du
Thanks Aaron for pointing it out. I didn't see any consensus on HDFS-9924, so I 
think we should bring it here to a broader audience for more discussion.

I saw several very bad practices here:

1. A committer (no need to say who) reverted all commits from trunk without 
reaching consensus with all the related contributors/committers.

2. Someone's comments on feature branches are very misleading... If I remember 
correctly, feature development doesn't have to go through a feature branch, 
which is just an optional process. As for this creative process of feature 
branches and branch committers - I believe the intention is to accelerate 
feature development, not to slow it down.

3. Someone (again, no need to say who) seems to claim to be the RM for trunk. 
I don't think we need any RM for trunk. Even for the RM of 3.0.0-alpha, I think 
we need someone else who demonstrates that he/she is more responsible, works 
hard and carefully, and communicates openly with the whole community. Only 
through this is the success of Hadoop in the 3.0 era guaranteed.


Thanks,


Junping



From: Aaron T. Myers 
Sent: Monday, June 06, 2016 4:46 PM
To: Junping Du
Cc: Andrew Wang; common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: Why there are so many revert operations on trunk?

Junping,

All of this is being discussed on HDFS-9924. Suggest you follow the 
conversation there.

--
Aaron T. Myers
Software Engineer, Cloudera

On Mon, Jun 6, 2016 at 7:20 AM, Junping Du 
> wrote:
Hi Andrew,

 I just noticed you reverted 8 commits on trunk last Friday:

HADOOP-13226

HDFS-10430

HDFS-10431

HDFS-10390

HADOOP-13168

HDFS-10390

HADOOP-13168

HDFS-10346

HADOOP-12957

HDFS-10224

   And I didn't see any comments from you on JIRA or in the email discussion 
before you did this. I don't think we are legally allowed to do this even as a 
committer/PMC member. Can you explain what your intention was in doing this?

   BTW, thanks to Nicolas for reverting all these "illegal" revert operations.



Thanks,


Junping



Re: Why there are so many revert operations on trunk?

2016-06-06 Thread Aaron T. Myers
Junping,

All of this is being discussed on HDFS-9924. Suggest you follow the
conversation there.

--
Aaron T. Myers
Software Engineer, Cloudera

On Mon, Jun 6, 2016 at 7:20 AM, Junping Du  wrote:

> Hi Andrew,
>
>  I just noticed you reverted 8 commits on trunk last Friday:
>
> HADOOP-13226
>
> HDFS-10430
>
> HDFS-10431
>
> HDFS-10390
>
> HADOOP-13168
>
> HDFS-10390
>
> HADOOP-13168
>
> HDFS-10346
>
> HADOOP-12957
>
> HDFS-10224
>
>And I didn't see any comments from you on JIRA or in the email discussion
> before you did this. I don't think we are legally allowed to do this even
> as a committer/PMC member. Can you explain what your intention was in doing
> this?
>
>BTW, thanks to Nicolas for reverting all these "illegal" revert operations.
>
>
>
> Thanks,
>
>
> Junping
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-06-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/

No changes




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   org.apache.hadoop.minikdc.MiniKdc.stop() calls Thread.sleep() with a 
lock held at MiniKdc.java:[line 345] 

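The warned-about shape and its conventional fix, sketched (this illustrates the FindBugs SWL_SLEEP_WITH_LOCK_HELD pattern in general, not the actual MiniKdc code):

    public class SleepWithLockSketch {
      private final Object lock = new Object();
      private volatile boolean stopped = false;

      // Flagged pattern: the monitor stays held across the sleep, so every
      // other thread that needs the lock stalls for the full sleep interval.
      void stopBad() throws InterruptedException {
        synchronized (lock) {
          while (!stopped) {
            Thread.sleep(100);
          }
        }
      }

      // Conventional fix: a timed wait releases the monitor while parked and
      // can be woken early by lock.notifyAll() from the stopping thread.
      void stopGood() throws InterruptedException {
        synchronized (lock) {
          while (!stopped) {
            lock.wait(100);
          }
        }
      }
    }
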
FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   Redundant nullcheck of execTypeRequest, which is known to be non-null, in 
org.apache.hadoop.yarn.api.records.ResourceRequest.equals(Object); redundant 
null check at ResourceRequest.java:[line 361] 

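The shape FindBugs objects to, sketched with a hypothetical field rather than the actual ResourceRequest code: once a reference has been dereferenced (and so proven non-null) on a path, a later null check on the same path is dead code (pattern RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE):

    public class RedundantNullCheckSketch {
      private String execTypeRequest = "GUARANTEED";  // hypothetical stand-in

      @Override
      public boolean equals(Object obj) {
        if (!(obj instanceof RedundantNullCheckSketch)) {
          return false;
        }
        RedundantNullCheckSketch other = (RedundantNullCheckSketch) obj;
        // Dereferencing proves execTypeRequest is non-null on this path...
        boolean same = execTypeRequest.equals(other.execTypeRequest);
        // ...so this follow-up check can never be true, and FindBugs flags it.
        if (execTypeRequest == null) {
          return false;
        }
        return same;
      }

      @Override
      public int hashCode() {
        return execTypeRequest == null ? 0 : execTypeRequest.hashCode();
      }
    }
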
Failed junit tests :

   hadoop.net.TestDNS 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.yarn.server.resourcemanager.TestClientRMTokens 
   hadoop.yarn.server.resourcemanager.TestAMAuthorization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.client.api.impl.TestDistributedScheduling 
   hadoop.yarn.client.TestGetGroups 
   hadoop.mapred.TestMiniMRChildTask 
   hadoop.mapred.TestMRCJCFileOutputCommitter 

Timed out junit tests :

   org.apache.hadoop.http.TestHttpServerLifecycle 
   org.apache.hadoop.yarn.client.cli.TestYarnCLI 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/diff-javadoc-javadoc-root.txt
  [2.3M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [116K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [908K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [92K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/49/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




Why there are so many revert operations on trunk?

2016-06-06 Thread Junping Du
Hi Andrew,

 I just noticed you reverted 8 commits on trunk last Friday:

HADOOP-13226

HDFS-10430

HDFS-10431

HDFS-10390

HADOOP-13168

HDFS-10346

HADOOP-12957

HDFS-10224

   And I didn't see any comments from you on JIRA or in email discussion 
before you did this. I don't think we are allowed to do this even as a 
committer/PMC member. Can you explain your intention in doing this?

   BTW, thanks to Nicolas for reverting all these "illegal" revert operations.



Thanks,


Junping


[jira] [Created] (HDFS-10491) libhdfs++: Implement GetFsStats

2016-06-06 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-10491:


 Summary: libhdfs++: Implement GetFsStats
 Key: HDFS-10491
 URL: https://issues.apache.org/jira/browse/HDFS-10491
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1308

2016-06-06 Thread Apache Jenkins Server
See 

Changes:

[szetszwo] Revert "Revert "HDFS-10224. Implement asynchronous rename for

[szetszwo] Revert "Revert "HADOOP-12957. Limit the number of outstanding async

[szetszwo] Revert "Revert "HDFS-10346. Implement asynchronous

[szetszwo] Revert "Revert "HADOOP-13168. Support Future.get with timeout in ipc

[szetszwo] Revert "Revert "HDFS-10390. Implement asynchronous 
setAcl/getAclStatus

[szetszwo] Revert "Revert "HDFS-10431 Refactor and speedup TestAsyncDFSRename. 

[szetszwo] Revert "Revert "HDFS-10430. Reuse FileSystem#access in TestAsyncDFS.

[szetszwo] Revert "Revert "HADOOP-13226 Support async call retry and failover.""

--
[...truncated 6421 lines...]
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.364 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.357 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.73 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestDataTransferProtocol
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.767 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.113 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.812 sec - in 
org.apache.hadoop.hdfs.TestDataTransferProtocol
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.895 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.228 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 170.404 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.217 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.63 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.543 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.646 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.172 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.util.TestStripedBlockUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.878 sec - in 
org.apache.hadoop.hdfs.util.TestStripedBlockUtil
Running org.apache.hadoop.hdfs.util.TestCombinedHostsFileReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec - in 
org.apache.hadoop.hdfs.util.TestCombinedHostsFileReader
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.256 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.634 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec - in