Hadoop-Hdfs-trunk - Build # 3059 - Still Failing

2016-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3059/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5352 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:00 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:26 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.066 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:30 h
[INFO] Finished at: 2016-04-23T04:25:01+00:00
[INFO] Final Memory: 57M/701M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty

Error Message:
mv should have succeeded expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: mv should have succeeded expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.TestDFSShell.testMoveWithTargetPortEmpty(TestDFSShell.java:585)
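
For context, the failing check asserts on an FsShell exit code. A minimal
JUnit sketch of the same pattern (hypothetical class and paths, not the
actual TestDFSShell source):

    import static org.junit.Assert.assertEquals;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FsShell;
    import org.apache.hadoop.util.ToolRunner;
    import org.junit.Test;

    public class MvExitCodeSketch {
      @Test
      public void testMvReturnsZero() throws Exception {
        Configuration conf = new Configuration();
        // Placeholder paths; judging by the test name, the real test
        // exercises a target URI whose port component is left empty.
        String[] argv = {"-mv", "hdfs://localhost/src", "/dst"};
        int ret = ToolRunner.run(conf, new FsShell(conf), argv);
        // The build failure above means this came back 1 instead of 0.
        assertEquals("mv should have succeeded", 0, ret);
      }
    }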




Build failed in Jenkins: Hadoop-Hdfs-trunk #3059

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HADOOP-13033. Add missing Javadoc entries to Interns.java. 
Contributed

[jing9] HDFS-9427. HDFS should not default to ephemeral ports. Contributed by

--
[...truncated 5159 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.242 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.973 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.029 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.114 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.161 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.109 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.788 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.833 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.258 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.84 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.9 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.605 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.497 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.705 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.737 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.32 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.315 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 139.019 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.178 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.614 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.709 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.325 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.031 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.302 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.866 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, 

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
Tx for your replies, Andrew.

> For exit criteria, how about we time box it? My plan was to do monthly
> alphas through the summer, leading up to beta in late August / early Sep.
> At that point we freeze and stabilize for GA in Nov/Dec.


Time-boxing is a reasonable exit-criterion.


> In this case, does trunk-incompat essentially become the new trunk? Or are
> we treating trunk-incompat as a feature branch, which periodically merges
> changes from trunk?


It’s the latter. Essentially:
 - trunk-incompat = trunk + only the incompatible changes, periodically kept 
up to date with trunk
 - trunk is always ready to ship
 - and no compatible code gets left behind

The reason I’m proposing this is to address the tension between “there 
is a lot of compatible code in trunk that we are not shipping” and “don’t ship 
trunk, it has incompatibilities”. With this, no compatible code goes unshipped 
to users.

Obviously, we can forget about my proposal completely if everyone puts all 
compatible code into branch-2 / branch-3 or whatever the main releasable 
branch is. But that hasn’t worked in practice - we saw it break down 
prominently during 0.21, and now with 3.x.

There is another related issue - "my feature is nearly ready, so I’ll just 
merge it into trunk since we don’t release that anyway, but not into the 
current releasable branch - I’m too lazy to fix the last few stability-related 
issues”. With this, we will (should) get more disciplined, take feature 
stability on a branch seriously, and merge a feature branch only when it is 
truly ready!

> For 3.x, my strawman was to release off trunk for the alphas, then branch a
> branch-3 for the beta and onwards.


Repeating the above, I’m proposing that we continue making GA 3.x releases off 
of trunk as well! This way only incompatible changes are withheld from users - 
by design! Eventually, trunk-incompat will be the latest 3.x GA plus enough 
incompatible code to warrant a 4.x, 5.x, etc.

+Vinod

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1129

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[raviprak] Revert "HADOOP-12563. Updated utility (dtutil) to create/modify token

[wangda] YARN-4846. Fix random failures for

[kihwal] HADOOP-13052. ChecksumFileSystem mishandles crc file permissions.

[aajisaka] HADOOP-13033. Add missing Javadoc entries to Interns.java. 
Contributed

--
[...truncated 5806 lines...]
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:877)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:726)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:721)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.821 sec - 
in org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.659 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.14 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.053 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.099 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 126.952 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.51 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.7 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.862 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in 
org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.293 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.845 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.424 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Hadoop-Hdfs-trunk-Java8 - Build # 1129 - Still Failing

2016-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1129/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5999 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:11 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:14 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.092 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:18 h
[INFO] Finished at: 2016-04-23T01:47:23+00:00
[INFO] Final Memory: 56M/432M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
The stream is closed

Stack Trace:
java.io.IOException: The stream is closed
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:877)
at 
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:726)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:721)
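
The hflush-under-interrupt pattern this test exercises looks roughly like the
sketch below (simplified and assumed; the real test drives a MiniDFSCluster
and interrupts the writing thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HFlushSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/hflush-demo"));
        out.write("hello".getBytes("UTF-8"));
        // hflush() pushes buffered bytes to the pipeline without closing
        // the stream. If the writer thread is interrupted around here, the
        // DataStreamer tears its socket down, and a subsequent flush/close
        // fails with "The stream is closed", as in the trace above.
        out.hflush();
        out.close();
      }
    }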


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:37413,DS-95ae003a-bb9f-4b5f-bcb3-4cf1eed3b682,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:58520,DS-9f932289-2cd6-465d-9434-d66f473c6ed9,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:37413,DS-95ae003a-bb9f-4b5f-bcb3-4cf1eed3b682,DISK],
 

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Allen Wittenauer

> On Apr 22, 2016, at 6:10 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Nope.
> 
> I’m proposing making a new 3.x release (as has been discussed in this thread) 
> off today’s trunk (instead of creating a fresh branch-3) and creating a new 
> trunk-incompat where incompatible changes that we don’t want in 3.x go.
> 
> This is mainly to avoid repeating the “we are not releasing 3.x off trunk” 
> issue when we start thinking about 4.x or any such major release in the 
> future.

The only difference between “we aren’t releasing 4.x off of trunk” and 
“we aren’t releasing 4.x off of trunk-incompat” is 10 characters.

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Andrew Wang
Great comments Vinod, thanks for replying.

Since trunk is a superset of branch-2.8, I think the two efforts are mostly
aligned. The 2.8 blockers are likely also 3.0 blockers. For example, the
create-release and L JIRAs I mentioned are in this camp. The difference
between the two is the expectation as to the level of quality. Once we get
create-release and L settled, I think it's ready for an alpha. Yes, this
means we ship with some known issues, but right now there's no 3.0 artifact
for downstreams to compile and test against. Considering that we're
shipping incompatible changes, I want to give downstreams as much
opportunity to give feedback as possible.

> While welcoming the push for alphas, I think we should set some exit
> criteria. Otherwise, I can imagine us doing 3/4/5 alpha releases, and then
> getting restless about calling it beta or GA or whatever. Essentially,
> instead of today’s questions as to "why we aren’t doing a 3.x release",
> we’d be fielding a "why is 3.x still considered alpha” question. This
> happened with 2.x alpha releases too and it wasn’t fun.
>
For exit criteria, how about we time box it? My plan was to do monthly
alphas through the summer, leading up to beta in late August / early Sep.
At that point we freeze and stabilize for GA in Nov/Dec.

I think we all have an interest in declaring beta/GA, no one wants eternal
alpha releases.

> On an unrelated note, offline I was pitching to a bunch of contributors
> another idea to deal with rotting trunk post 3.x: *Make 3.x releases off of
> trunk directly*.
>
> What this gains us is that
>  - Trunk is always nearly stable or nearly ready for releases
>  - We no longer have some code lying around in some branch (today’s trunk)
> that is not releasable because it gets mixed with other undesirable and
> incompatible changes.
>  - This needs to be coupled with more discipline on individual features -
> medium to large features are always worked upon in branches and get
> merged into trunk (and a nearing release!) when they are ready
>  - All incompatible changes go into some sort of a trunk-incompat branch
> and stay there till we accumulate enough of those to warrant another major
> release.
>

In this case, does trunk-incompat essentially become the new trunk? Or are
we treating trunk-incompat as a feature branch, which periodically merges
changes from trunk?

Linux has a "next" branch for separate from master for integrating pending
feature branches. I think this is a good model, and would be even better if
we published artifacts to assist with testing. However, that depends on
someone stepping up to be the maintainer of the integration branch.

I really like a more stringent policy around branch merges and new feature
development. That'd be great.

For 3.x, my strawman was to release off trunk for the alphas, then branch a
branch-3 for the beta and onwards.

Best,
Andrew


Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
Nope.

I’m proposing making a new 3.x release (as has been discussed in this thread) 
off today’s trunk (instead of creating a fresh branch-3) and creating a new 
trunk-incompat where incompatible changes that we don’t want in 3.x go.

This is mainly to avoid repeating the “we are not releasing 3.x off trunk” 
issue when we start thinking about 4.x or any such major release in the future.

We’ll do 2.8.x independently and later figure out if 2.9 is needed or not.

+Vinod

> On Apr 22, 2016, at 5:59 PM, Allen Wittenauer  wrote:
> 
> 
>> On Apr 22, 2016, at 5:38 PM, Vinod Kumar Vavilapalli  
>> wrote:
>> 
>> On an unrelated note, offline I was pitching to a bunch of contributors 
>> another idea to deal with rotting trunk post 3.x: *Make 3.x releases off of 
>> trunk directly*.
>> 
>> What this gains us is that
>> - Trunk is always nearly stable or nearly ready for releases
>> - We no longer have some code lying around in some branch (today’s trunk) 
>> that is not releasable because it gets mixed with other undesirable and 
>> incompatible changes.
>> - This needs to be coupled with more discipline on individual features - 
>> medium to large features are always worked upon in branches and get 
>> merged into trunk (and a nearing release!) when they are ready
>> - All incompatible changes go into some sort of a trunk-incompat branch and 
>> stay there till we accumulate enough of those to warrant another major 
>> release.
>> 
>> Thoughts?
> 
>   Unless I’m missing something, all this proposal does is (using today’s 
> branch names) effectively rename trunk to trunk-incompat and branch-2 to 
> trunk.  I’m unclear how moving “rotting trunk” to “rotting trunk-incompat” is 
> really progress.
> 
> 



Re: Looking to a Hadoop 3 release

2016-04-22 Thread Allen Wittenauer

> On Apr 22, 2016, at 5:38 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> On an unrelated note, offline I was pitching to a bunch of contributors 
> another idea to deal with rotting trunk post 3.x: *Make 3.x releases off of 
> trunk directly*.
> 
> What this gains us is that
> - Trunk is always nearly stable or nearly ready for releases
> - We no longer have some code lying around in some branch (today’s trunk) 
> that is not releasable because it gets mixed with other undesirable and 
> incompatible changes.
> - This needs to be coupled with more discipline on individual features - 
> medium to large features are always worked upon in branches and get merged 
> into trunk (and a nearing release!) when they are ready
> - All incompatible changes go into some sort of a trunk-incompat branch and 
> stay there till we accumulate enough of those to warrant another major 
> release.
> 
> Thoughts?

Unless I’m missing something, all this proposal does is (using today’s 
branch names) effectively rename trunk to trunk-incompat and branch-2 to trunk. 
 I’m unclear how moving “rotting trunk” to “rotting trunk-incompat” is really 
progress.



Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
Hi,

While welcoming the push for alphas, I think we should set some exit criteria. 
Otherwise, I can imagine us doing 3/4/5 alpha releases, and then getting 
restless about calling it beta or GA or whatever. Essentially, instead of 
today’s questions as to "why we aren’t doing a 3.x release", we’d be fielding a 
"why is 3.x still considered alpha” question. This happened with 2.x alpha 
releases too and it wasn’t fun.

On an unrelated note, offline I was pitching to a bunch of contributors another 
idea to deal with rotting trunk post 3.x: *Make 3.x releases off of trunk 
directly*.

What this gains us is that
 - Trunk is always nearly stable or nearly ready for releases
 - We no longer have some code lying around in some branch (today’s trunk) that 
is not releasable because it gets mixed with other undesirable and incompatible 
changes.
 - This needs to be coupled with more discipline on individual features - 
medium to large features are always worked upon in branches and get merged 
into trunk (and a nearing release!) when they are ready
 - All incompatible changes go into some sort of a trunk-incompat branch and 
stay there till we accumulate enough of those to warrant another major release.

Thoughts?

+Vinod


> On Apr 21, 2016, at 4:31 PM, Andrew Wang  wrote:
> 
> Hi folks,
> 
> Very optimistically, we're still on track for a 3.0 alpha this month.
> Here's a JIRA query for 3.0 and 2.8:
> 
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20MAPREDUCE%2C%20YARN)%20AND%20%22Target%20Version%2Fs%22%20in%20(3.0.0%2C%202.8.0)%20AND%20statusCategory%20not%20in%20(Complete)%20ORDER%20BY%20priority
> 
> I think two of these are true alpha blockers: HADOOP-12892 and
> HADOOP-12893. I'm trying to help push both of those forward.
> 
> For the rest, I think it's probably okay to delay until the next alpha,
> since we're planning a few alphas leading up to beta. That said, if you are
> the owner of a Blocker targeted at 3.0.0, I'd encourage reviving those
> patches. The earlier the better for incompatible changes.
> 
> In all likelihood, this first release will slip into early May, but I'll be
> disappointed if we don't have an RC out before ApacheCon.
> 
> Best,
> Andrew
> 
> On Mon, Feb 22, 2016 at 3:19 PM, Colin P. McCabe  wrote:
> 
>> I think starting a 3.0 alpha soon would be a great idea.  As some
>> other people commented, this would come with no compatibility
>> guarantees, so that we can iron out any issues.
>> 
>> Colin
>> 
>> On Mon, Feb 22, 2016 at 1:26 PM, Zhe Zhang  wrote:
>>> Thanks Andrew for driving the effort!
>>> 
>>> +1 (non-binding) on starting the 3.0 release process now with 3.0 as an
>>> alpha.
>>> 
>>> I wanted to echo Andrew's point that backporting EC to branch-2 is a lot
>> of
>>> work. Considering that no concrete backporting plan has been proposed, it
>>> seems quite uncertain whether / when it can be released in 2.9. I think
>> we
>>> should rather concentrate our EC dev efforts to harden key features under
>>> the follow-on umbrella HDFS-8031 and make it solid for a 3.0 release.
>>> 
>>> Sincerely,
>>> Zhe
>>> 
>>> On Mon, Feb 22, 2016 at 9:25 AM Colin P. McCabe 
>> wrote:
>>> 
 +1 for a release of 3.0.  There are a lot of significant,
 compatibility-breaking, but necessary changes in this release... we've
 touched on some of them in this thread.
 
 +1 for a parallel release of 2.8 as well.  I think we are pretty close
 to this, barring a dozen or so blockers.
 
 best,
 Colin
 
 On Mon, Feb 22, 2016 at 2:56 AM, Steve Loughran
 wrote:
> 
>> On 20 Feb 2016, at 15:34, Junping Du  wrote:
>> 
>> Shall we consolidate effort for 2.8.0 and 3.0.0? It doesn't sound
>> reasonable to have two alpha releases going in parallel. Is the EC feature
>> the main motivation for releasing Hadoop 3 here? If so, I don't understand
>> why this feature cannot land on 2.8.x or 2.9.x as an alpha feature.
> 
> 
> 
>> If we release 3.0 in a month like the plan proposed below, it means we will
>> have 4 active releases going in parallel - two alpha releases (2.8 and 3.0)
>> and two stable releases (2.6.x and 2.7.x). It brings a lot of challenges in
>> issue tracking and patch committing, not to mention the tremendous effort
>> of release verification and voting.
>> I would like to propose waiting for the 2.8 release to become stable (maybe
>> the 2nd release in the 2.8 branch, since the first release is alpha per the
>> discussion in another email thread); then we can move to 3.0 as the only
>> alpha release. In the meantime, we can bring more significant features
>> (like ATS v2, etc.) to trunk and consolidate stable releases in 2.6.x and
>> 2.7.x. I believe that makes life easier. :)
>> Thoughts?
>> 
> 
> 2.8.0 is 

Re: Looking to a Hadoop 3 release

2016-04-22 Thread Vinod Kumar Vavilapalli
I kind of echo Junping’s comment too.

While 2.8 and 3.0 don’t need to be serialized in theory, in practice I’m 
desperately looking for help on 2.8.0. We haven’t been converging on 2.8.0 what 
with 50+ blocker / critical patches still unfinished. If postponing 3.x alpha 
to after a 2.8.0 alpha means undivided attention from the community, I’d 
strongly root for such a proposal.

Thanks
+Vinod

> On Feb 20, 2016, at 9:07 PM, Andrew Wang  wrote:
> 
> Hi Junping, thanks for the mail, inline:
> 
> On Sat, Feb 20, 2016 at 7:34 AM, Junping Du  wrote:
> 
>> Shall we consolidate effort for 2.8.0 and 3.0.0? It doesn't sound
>> reasonable to have two alpha releases going in parallel. Is the EC feature the
>> main motivation for releasing Hadoop 3 here? If so, I don't understand why
>> this feature cannot land on 2.8.x or 2.9.x as an alpha feature.
>> 
> 
> EC is one motivation, there are others too (JDK8, shell scripts, jar
> bumps). I'm open to EC going into branch-2, but I haven't seen any
> backporting yet and it's a lot of code.
> 
> 
>> If we release 3.0 in a month like the plan proposed below, it means we will
>> have 4 active releases going in parallel - two alpha releases (2.8 and 3.0)
>> and two stable releases (2.6.x and 2.7.x). It brings a lot of challenges in
>> issue tracking and patch committing, not to mention the tremendous
>> effort of release verification and voting.
>> I would like to propose waiting for the 2.8 release to become stable (maybe
>> the 2nd release in the 2.8 branch, since the first release is alpha per the
>> discussion in another email thread); then we can move to 3.0 as the only alpha
>> release. In the meantime, we can bring more significant features (like ATS v2, etc.)
>> to trunk and consolidate stable releases in 2.6.x and 2.7.x. I believe that
>> makes life easier. :)
>> Thoughts?
>> 
> Based on some earlier mails in this chain, I was planning to release off
> trunk. This way we avoid having to commit to yet-another-branch, and it makes
> tracking easier since trunk will always be a superset of the branch-2's.
> This does mean though that trunk needs to be stable, and we need to be more
> judicious with branch merges, and quickly revert broken code.
> 
> Regarding RM/voting/validation efforts, Steve mentioned some scripts that
> he uses to automate Slider releases. This is something I'd like to bring
> over to Hadoop. Ideally, publishing an RC is push-button, and it comes with
> automated validation. I think this will help with the overhead. Also, since
> these will be early alphas, and there will be a lot of them, I'm not
> expecting anyone to do endurance runs on a large cluster before casting a
> +1.
> 
> Best,
> Andrew



Build failed in Jenkins: Hadoop-Hdfs-trunk #3058

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[raviprak] Revert "HADOOP-12563. Updated utility (dtutil) to create/modify token

[wangda] YARN-4846. Fix random failures for

[kihwal] HADOOP-13052. ChecksumFileSystem mishandles crc file permissions.

--
[...truncated 5528 lines...]
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.142 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.318 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.447 sec - in 
org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.315 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.942 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.371 sec - in 
org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.341 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.986 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.609 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.159 sec - in 
org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.776 sec - in 
org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.852 sec - in 
org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.476 sec - in 
org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestCryptoAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.983 sec - in 
org.apache.hadoop.cli.TestCryptoAdminCLI
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.014 sec - in 
org.apache.hadoop.cli.TestXAttrCLI
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.435 sec - in 
org.apache.hadoop.cli.TestErasureCodingCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.675 sec - in 
org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.602 sec - in 
org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.697 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.852 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.576 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.096 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.72 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.277 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.634 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.193 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 

Hadoop-Hdfs-trunk - Build # 3058 - Still Failing

2016-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3058/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5721 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:08 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:25 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.071 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:29 h
[INFO] Finished at: 2016-04-23T00:26:12+00:00
[INFO] Final Memory: 56M/677M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
37 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:653)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.testNameDirError

Error Message:
org/apache/hadoop/util/ShutdownHookManager

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/util/ShutdownHookManager
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at 

Re: [Release thread] 2.8.0 release activities

2016-04-22 Thread Vinod Kumar Vavilapalli
We are not converging - there are still 58 more. I need help from the community 
in addressing / reviewing 2.8.0 blockers. If folks can start by reviewing Patch 
Available tickets, that’ll be great.


Thanks
+Vinod

> On Apr 4, 2016, at 2:16 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Here we go again. The blocker / critical tickets ballooned up a lot, I see 64 
> now! : https://issues.apache.org/jira/issues/?filter=12334985
> 
> Also, the docs target (mvn package -Pdocs -DskipTests) is completely busted 
> on branch-2; I figured I’d have to backport a whole bunch of patches that are 
> only on trunk, and maybe more fixes on top of that *sigh*
> 
> I’ll start pushing for progress for an RC in a week or two.
> 
> Any help towards this, reviewing/committing outstanding patches and 
> contributing to open items is greatly appreciated.
> 
> Thanks
> +Vinod
> 
>> On Feb 9, 2016, at 11:51 AM, Vinod Kumar Vavilapalli  
>> wrote:
>> 
>> Sure. The last time I checked, there were 20 odd blocker/critical tickets 
>> too that’ll need some of my time.
>> 
>> Given that, if you can get them in before a week, we should be good.
>> 
>> +Vinod
>> 
>>> On Feb 5, 2016, at 1:19 PM, Subramaniam V K  wrote:
>>> 
>>> Vinod,
>>> 
>>> Thanks for initiating the 2.8 release thread. We are in late review stages
>>> for YARN-4420 (Add REST API for listing reservations) and YARN-2575 (Adding
>>> ACLs for reservation system), hoping to get them by next week. Any chance
>>> you can put off cutting 2.8 by a week as we are planning to deploy
>>> ReservationSystem and these are critical for that?
>>> 
>>> Cheers,
>>> Subru
>>> 
>>> On Thu, Feb 4, 2016 at 3:17 PM, Chris Nauroth 
>>> wrote:
>>> 
 FYI, I've just needed to raise HDFS-9761 to blocker status for the 2.8.0
 release.
 
 --Chris Nauroth
 
 
 
 
 On 2/3/16, 6:19 PM, "Karthik Kambatla"  wrote:
 
> Thanks Vinod. Not labeling 2.8.0 stable sounds perfectly reasonable to me.
> Let us not call it alpha or beta though, it is quite confusing. :)
> 
> On Wed, Feb 3, 2016 at 8:17 PM, Gangumalla, Uma  
> wrote:
> 
>> Thanks Vinod. +1 for 2.8 release start.
>> 
>> Regards,
>> Uma
>> 
>> On 2/3/16, 3:53 PM, "Vinod Kumar Vavilapalli" 
>> wrote:
>> 
>>> Seems like all the features listed in the Roadmap wiki are in. I’m
>> going
>>> to try cutting an RC this weekend for a first/non-stable release off of
>>> branch-2.8.
>>> 
>>> Let me know if anyone has any objections/concerns.
>>> 
>>> Thanks
>>> +Vinod
>>> 
 On Nov 25, 2015, at 5:59 PM, Vinod Kumar Vavilapalli
  wrote:
 
 Branch-2.8 is created.
 
 As mentioned before, the goal on branch-2.8 is to put improvements /
 fixes to existing features with a goal of converging on an alpha release
 soon.
 
 Thanks
 +Vinod
 
 
> On Nov 25, 2015, at 5:30 PM, Vinod Kumar Vavilapalli
>  wrote:
> 
> Forking threads now in order to track all things related to the
> release.
> 
> Creating the branch now.
> 
> Thanks
> +Vinod
> 
> 
>> On Nov 25, 2015, at 11:37 AM, Vinod Kumar Vavilapalli
>>  wrote:
>> 
>> I think we’ve converged at a high level w.r.t 2.8. And as I just
>> sent
>> out an email, I updated the Roadmap wiki reflecting the same:
>> https://wiki.apache.org/hadoop/Roadmap
>> 
>> 
>> I plan to create a 2.8 branch EOD today.
>> 
>> The goal for all of us should be to restrict improvements & fixes to
>> only (a) the feature-set documented under 2.8 in the RoadMap wiki and
>> (b) other minor features that are already in 2.8.
>> 
>> Thanks
>> +Vinod
>> 
>> 
>>> On Nov 11, 2015, at 12:13 PM, Vinod Kumar Vavilapalli
>>> wrote:
>>> 
>>> - Cut a branch about two weeks from now
>>> - Do an RC mid next month (leaving ~4 weeks since the branch-cut)
>>> - As with the 2.7.x series, the first release will still be called an
>>> early / alpha release in the interest of
>>>   - gaining downstream adoption
>>>   - wider testing,
>>>   - yet reserving our right to fix any inadvertent incompatibilities
>>> introduced.
>> 
> 
 
>>> 
>> 
>> 
 
 
>> 
> 



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1128

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HDFS-9555. LazyPersistFileScrubber should still sleep if there are

--
[...truncated 5787 lines...]
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.471 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 14.523 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.55 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.994 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.183 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.133 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.966 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.845 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.682 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.445 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.224 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.107 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.241 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.369 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.959 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.259 sec - in 
org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring 

Hadoop-Hdfs-trunk-Java8 - Build # 1128 - Failure

2016-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1128/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5980 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:04 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:51 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.089 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:55 h
[INFO] Finished at: 2016-04-22T20:42:03+00:00
[INFO] Final Memory: 70M/549M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:54411,DS-0ac6c5d6-7a84-4916-b93a-b9ee0adf6064,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:49225,DS-618894c8-60a1-4d53-a0be-1a47812dca1f,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:49225,DS-618894c8-60a1-4d53-a0be-1a47812dca1f,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:54411,DS-0ac6c5d6-7a84-4916-b93a-b9ee0adf6064,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:54411,DS-0ac6c5d6-7a84-4916-b93a-b9ee0adf6064,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:49225,DS-618894c8-60a1-4d53-a0be-1a47812dca1f,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:49225,DS-618894c8-60a1-4d53-a0be-1a47812dca1f,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:54411,DS-0ac6c5d6-7a84-4916-b93a-b9ee0adf6064,DISK]]).
 The current failed datanode 
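
The error message itself names the relevant client-side setting. A hedged
sketch of configuring it (the property names and the values NEVER / DEFAULT /
ALWAYS come from the message itself; whether relaxing the policy is
appropriate depends on the cluster - very small test clusters often have no
replacement datanode to offer):

    import org.apache.hadoop.conf.Configuration;

    public class ReplaceDatanodePolicySketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // DEFAULT tries to find a replacement datanode when a pipeline
        // node fails; NEVER keeps writing to the surviving nodes instead.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
            "NEVER");
        // The feature as a whole is toggled by
        // dfs.client.block.write.replace-datanode-on-failure.enable
        // (true by default).
      }
    }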

Hadoop-Hdfs-trunk - Build # 3057 - Still Failing

2016-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3057/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8451 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:12 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:35 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.078 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:39 h
[INFO] Finished at: 2016-04-22T20:32:21+00:00
[INFO] Final Memory: 57M/720M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithKeytabs

Error Message:
test timed out after 30 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:705)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1098)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.access$000(TestBalancer.java:125)
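
For context, this failure shape comes from JUnit 4's per-test timeout: the
runner executes the test body on a separate thread and interrupts it once the
budget elapses, which is why the trace tops out in Thread.sleep. A
self-contained sketch (class name and the 100 ms budget are hypothetical):

    import org.junit.Test;

    public class TimeoutSketch {
      // JUnit interrupts the test thread when the budget elapses and reports
      // "test timed out after 100 milliseconds", with the interrupted sleep
      // at the top of the stack trace.
      @Test(timeout = 100)
      public void sleepsPastTheBudget() throws InterruptedException {
        Thread.sleep(1000);
      }
    }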


FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:683)
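
The "null" message above is simply JUnit's assertTrue(boolean) failing without
a description. A hedged sketch (the condition name is hypothetical) of the
two-argument overload that would make the surefire report self-describing:

    import static org.junit.Assert.assertTrue;

    public class AssertMessageSketch {
      public static void main(String[] args) {
        boolean volumeShutDownCleanly = false;  // hypothetical condition
        // assertTrue(volumeShutDownCleanly) would report "AssertionError: null";
        // naming the condition makes the failure readable in the report:
        assertTrue("volume did not shut down cleanly", volumeShutDownCleanly);
      }
    }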




Build failed in Jenkins: Hadoop-Hdfs-trunk #3057

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[kihwal] HDFS-9555. LazyPersistFileScrubber should still sleep if there are

--
[...truncated 8258 lines...]
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.58 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.603 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.341 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.148 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.462 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.171 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.178 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.987 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 165.481 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.098 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.886 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.929 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.523 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.332 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.646 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.037 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.152 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.556 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.043 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.264 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.994 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.551 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.208 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.089 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.021 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.713 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator

[jira] [Resolved] (HDFS-9030) libwebhdfs lacks headers, documentation; not part of mvn package

2016-04-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9030.

Resolution: Won't Fix

libwebhdfs has been removed. Closing as won't fix.

> libwebhdfs lacks headers, documentation; not part of mvn package
> 
>
> Key: HDFS-9030
> URL: https://issues.apache.org/jira/browse/HDFS-9030
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> This library is useless without header files to include and documentation on 
> how to use it.  Both appear to be missing from the mvn package and site 
> documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #3056

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-12891. S3AFileSystem should configure Multipart Copy threshold

--
[...truncated 6330 lines...]
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.544 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.587 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.603 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.69 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.669 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.291 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.623 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 166.328 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.276 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.441 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.502 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.421 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.315 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.199 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.317 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.549 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.407 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.91 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.097 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.716 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.467 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.979 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.999 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.024 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.199 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.178 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running 

[jira] [Reopened] (HDFS-9328) Formalize coding standards for libhdfs++

2016-04-22 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer reopened HDFS-9328:
---

Reopening this for a couple of reasons:

1) Nobody is using clang-format, mostly because it causes tons of whitespace 
issues when merging. It also seems to apply arbitrary indentation rules to 
lambdas, which renders them unreadable (you can't tell where the capture list 
ends and the arguments begin).

2) The strict "no exceptions ever" rule isn't reasonable; it's even stricter 
than Google's rules about exceptions, which allow catching when a library 
documents that it may throw. Asio and the URI parsing library like to throw 
exceptions that we need to catch. We have a lot of code that isn't exception 
safe (though that's been improving a lot lately: HDFS-9712 and similar 
refactors). Letting an exception bubble out to a C++ caller is going to drop 
things on the floor as the stack unwinds. C++ was designed to safely manage 
resources in systems-level software, and exceptions are an integral part of 
that. I'm not saying we should start throwing exceptions everywhere, but 
pretending they don't exist isn't a solution either. Had we been using an 
async library that didn't throw, it would have been possible to do without 
them.

> Formalize coding standards for libhdfs++
> 
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Fix For: HDFS-8707
>
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch, 
> HDFS-9328.HDFS-8707.003.patch, HDFS-9328.HDFS-8707.004.patch
>
>
> We have 2-3 people working on this project full time, and hopefully more 
> people will start contributing.  In order to scale efficiently we need a 
> single, easy-to-find place where developers can check that they are 
> following the coding standards of this project, to save both their own time 
> and the time of people doing code reviews.
> The most practical place for this seems to be a README file in libhdfspp/. 
> The foundation of the standards is Google's C++ style guide, found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to Google's standards, and any additional restrictions, need 
> to be explicitly enumerated so there is a single point of reference for all 
> libhdfs++ coding standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Slack channel for Hadoop developers

2016-04-22 Thread Tsuyoshi Ozawa
Hi,

I have created a Slack channel for the Hadoop community, unofficially and
experimentally:
https://hadoopdev.slack.com/

I know that there is an IRC channel, and it is good for keeping logs.
However, Slack is also a very good tool for joining easily and
communicating interactively. It will also be useful for attending the
following kind of meetup remotely:
http://www.meetup.com/Hadoop-Contributors/events/230495682/?eventId=230495682

Please let me know if you run into any trouble or problems.

To join the Slack, please register your email address here:
https://hadoopdev-invitation.herokuapp.com/

Thanks,
- Tsuyoshi


Hadoop-Hdfs-trunk - Build # 3055 - Still Failing

2016-04-22 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3055/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8506 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:01 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:30 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.063 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:34 h
[INFO] Finished at: 2016-04-22T08:45:15+00:00
[INFO] Final Memory: 60M/757M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-assembly-plugin:2.4:single (dist) on project 
hadoop-hdfs: Error reading assemblies: Error reading descriptor: hadoop-dist: 
invalid block type -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #3055

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[omalley] HDFS-9894. Add unsetStoragePolicy API to 
FileContext/AbstractFileSystem

[omalley] HADOOP-13011 - Clearly Document the Password Details for 
Keystore-based

--
[...truncated 8313 lines...]
  [javadoc] [loading RegularFileObject[...]]  (repeated; class file paths 
stripped by the archive)
  [javadoc] [loading RegularFileObject[/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/classes/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor$1.class]]
  [javadoc] JDiff: finished (took 0s, not including scanning the source files).
  [javadoc] [loading RegularFileObject[...]]  (truncated)

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1126

2016-04-22 Thread Apache Jenkins Server
See 

Changes:

[omalley] HDFS-9894. Add unsetStoragePolicy API to 
FileContext/AbstractFileSystem

[omalley] HADOOP-13011 - Clearly Document the Password Details for 
Keystore-based

--
[...truncated 4795 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.494 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestStoragePolicySummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.801 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestEditLogFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.432 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestMetaSave
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.569 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestMetaSave
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestXAttrFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.598 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestXAttrFeature
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestCommitBlockSynchronization
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.72 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCommitBlockSynchronization
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.906 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSImageStorageInspector
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.471 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestTransferFsImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.994 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFileJournalManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.379 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.461 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestMalformedURLs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.282 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestMalformedURLs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.349 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeRpcServerMethods
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.917 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeRpcServerMethods
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNestedEncryptionZones
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 

[jira] [Created] (HDFS-10324) Trash directory in an encryption zone should be pre-created with sticky bit

2016-04-22 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-10324:
--

 Summary: Trash directory in an encryption zone should be 
pre-created with sticky bit
 Key: HDFS-10324
 URL: https://issues.apache.org/jira/browse/HDFS-10324
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: encryption
Affects Versions: 2.8.0
 Environment: CDH5.7.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


We encountered a bug in HDFS-8831:
After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
subdirectory within the encryption zone.

However, if this .Trash subdirectory is not created beforehand, it will be 
created and owned by the first user who deletes a file there, with permission 
drwx------. This is a serious bug, because no other non-privileged user will 
be able to delete files within the encryption zone: they lack the permission 
to move directories into the trash directory.

We should fix this bug by pre-creating the .Trash directory with the sticky 
bit set.
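
For illustration, a minimal Java sketch of the proposed pre-creation (the zone
path and running as the HDFS superuser are assumptions; 01777 is rwxrwxrwt,
i.e. world-writable with the sticky bit, the same semantics as /tmp):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class ProvisionZoneTrash {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path trash = new Path("/zones/zone1/.Trash");  // hypothetical zone
        fs.mkdirs(trash);
        // setPermission rather than mkdirs(path, perm) so the client umask
        // cannot strip bits; with the sticky bit set, every user may create
        // entries but only owners may rename or delete them.
        fs.setPermission(trash, new FsPermission((short) 01777));
      }
    }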



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)