[jira] [Resolved] (HDFS-10315) Fix TestRetryCacheWithHA and TestNamenodeRetryCache failures

2016-04-19 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HDFS-10315.
-----------------------------------------
Resolution: Duplicate

> Fix TestRetryCacheWithHA and TestNamenodeRetryCache failures
> -------------------------------------------------------------
>
> Key: HDFS-10315
> URL: https://issues.apache.org/jira/browse/HDFS-10315
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
>
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild
> Error Message:
> expected:<25> but was:<26>
> Stack Trace:
> java.lang.AssertionError: expected:<25> but was:<26>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild(TestNamenodeRetryCache.java:419)
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN
> Error Message:
> expected:<25> but was:<26>
> Stack Trace:
> java.lang.AssertionError: expected:<25> but was:<26>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN(TestRetryCacheWithHA.java:169)
> {noformat}





[jira] [Created] (HDFS-10316) revisit corrupt replicas count

2016-04-19 Thread Walter Su (JIRA)
Walter Su created HDFS-10316:
-----------------------------

 Summary: revisit corrupt replicas count
 Key: HDFS-10316
 URL: https://issues.apache.org/jira/browse/HDFS-10316
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Walter Su


A DN has 4 types of storages:
1. NORMAL
2. READ_ONLY
3. FAILED
4. (missing/pruned)

blocksMap.numNodes(blk) counts 1, 2, 3.
blocksMap.getStorages(blk) counts 1, 2, 3.

countNodes(blk).corruptReplicas() counts 1, 2.
corruptReplicas counts 1, 2, 3, 4, because findAndMarkBlockAsCorrupt(..)
supports adding blk to the map even if the storage is not found.

The inconsistency causes bugs like HDFS-9958.
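
As a toy illustration of the mismatch (not HDFS code; the real logic lives in
BlockManager and CorruptReplicasMap):
{code}
import java.util.EnumSet;
import java.util.Set;

// Toy model of the mismatch described above, not actual HDFS code.
public class CorruptCountSketch {
  enum StorageState { NORMAL, READ_ONLY, FAILED, PRUNED }

  public static void main(String[] args) {
    // countNodes(blk).corruptReplicas() only walks storages of type 1 and 2:
    Set<StorageState> countNodesView =
        EnumSet.of(StorageState.NORMAL, StorageState.READ_ONLY);
    // corruptReplicas keeps entries for all four states, because
    // findAndMarkBlockAsCorrupt(..) adds blk even when the storage is gone:
    Set<StorageState> corruptMapView = EnumSet.allOf(StorageState.class);

    StorageState reported = StorageState.FAILED;
    System.out.println("countNodes sees it:      " + countNodesView.contains(reported));
    System.out.println("corruptReplicas sees it: " + corruptMapView.contains(reported));
    // The answers differ, which is the inconsistency behind bugs like HDFS-9958.
  }
}
{code}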






[jira] [Created] (HDFS-10315) Fix TestRetryCacheWithHA and TestNamenodeRetryCache failures

2016-04-19 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-10315:
----------------------------------------

 Summary: Fix TestRetryCacheWithHA and TestNamenodeRetryCache 
failures
 Key: HDFS-10315
 URL: https://issues.apache.org/jira/browse/HDFS-10315
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Fix For: 2.8.0


{noformat}
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild

Error Message:
expected:<25> but was:<26>

Stack Trace:
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild(TestNamenodeRetryCache.java:419)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN

Error Message:
expected:<25> but was:<26>

Stack Trace:
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN(TestRetryCacheWithHA.java:169)
{noformat}





Hadoop-Hdfs-trunk - Build # 3047 - Still Failing

2016-04-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3047/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5436 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [05:09 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:29 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.106 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:34 h
[INFO] Finished at: 2016-04-19T22:24:17+00:00
[INFO] Final Memory: 56M/537M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
7 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:653)


FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade.testDatanodeRUwithRegularUpgrade

Error Message:
File /testDatanodeRUwithRegularUpgrade.03.dat could only be replicated to 0 
nodes instead of minReplication (=1).  There are 0 datanode(s) running and no 
node(s) are excluded in this operation.
 at 

Build failed in Jenkins: Hadoop-Hdfs-trunk #3047

2016-04-19 Thread Apache Jenkins Server
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3047/>

Changes:

[arp] HDFS-10264. Logging improvements in FSImageFormatProtobuf.Saver.

--
[...truncated 5243 lines...]
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.34 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.672 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.763 sec - in 
org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.029 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.126 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.644 sec - in 
org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.205 sec - 
in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 29.832 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.698 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.182 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.191 sec - in 
org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.825 sec - in 
org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.071 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter
Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.868 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.235 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.699 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.255 sec - in 
org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.429 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.427 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.44 sec - in 
org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.818 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.732 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.61 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.082 sec - in 
org.apache.hadoop.cli.TestDeleteCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.78 sec - in 
org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.905 sec - in 
org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, 

[jira] [Created] (HDFS-10314) Propose a new tool that wraps around distcp to "restore" changes on target cluster

2016-04-19 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10314:
---------------------------------

 Summary: Propose a new tool that wraps around distcp to "restore" 
changes on target cluster
 Key: HDFS-10314
 URL: https://issues.apache.org/jira/browse/HDFS-10314
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang


HDFS-9820 proposed adding an -rdiff switch to distcp, as the reverse of the
-diff switch.

Upon discussion with [~jingzhao], we will introduce a new tool that wraps
around distcp to achieve the same purpose.

I'm thinking about calling the new tool "rsync", similar to the unix/linux
command "rsync". The "r" here means remote.

The syntax that simulates the -rdiff behavior proposed in HDFS-9820 is
{code}
rsync <...>
{code}
This command ensures <...> is newer than <...>.

I think, in the future, we can add another command to provide the
functionality of the -diff switch of distcp:
{code}
sync <...>
{code}
where <...> must be older than <...>.

Thanks [~jingzhao].





[jira] [Created] (HDFS-10313) Distcp does not check the order of snapshot names passed to -diff

2016-04-19 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10313:
---------------------------------

 Summary: Distcp does not check the order of snapshot names passed 
to -diff
 Key: HDFS-10313
 URL: https://issues.apache.org/jira/browse/HDFS-10313
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Yongjun Zhang


This jira proposes adding a check to distcp: when {{-diff s1 s2}} is passed,
we need to ensure that s2 is newer than s1, and otherwise abort with an
informative error message.

This is the result of my offline discussion with [~jingzhao] on HDFS-9820. 
Thanks Jing.
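
A minimal sketch of the shape such a check could take ({{getSnapshotCreationTime}}
is a hypothetical helper standing in for however the patch actually determines
snapshot order):
{code}
import java.util.function.ToLongFunction;

// Illustrative sketch only; getSnapshotCreationTime is a hypothetical helper.
final class SnapshotOrderCheck {
  static void checkOrder(String s1, String s2,
      ToLongFunction<String> getSnapshotCreationTime) {
    long t1 = getSnapshotCreationTime.applyAsLong(s1);
    long t2 = getSnapshotCreationTime.applyAsLong(s2);
    if (t2 <= t1) {
      // Abort with an informative message instead of proceeding.
      throw new IllegalArgumentException("-diff " + s1 + " " + s2
          + " requires " + s2 + " to be newer than " + s1);
    }
  }
}
{code}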







[jira] [Created] (HDFS-10312) Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.

2016-04-19 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-10312:
---------------------------------

 Summary: Large block reports may fail to decode at NameNode due to 
64 MB protobuf maximum length restriction.
 Key: HDFS-10312
 URL: https://issues.apache.org/jira/browse/HDFS-10312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Our RPC server caps the maximum size of incoming messages at 64 MB by
default. For exceptional circumstances, this can be raised using
{{ipc.maximum.data.length}}. However, for block reports, there is still an
internal maximum length restriction of 64 MB enforced by protobuf. (Sample
stack trace to follow in comments.) This issue proposes to apply the same
override to our block list decoding, so that large block reports can proceed.
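
For illustration, a minimal sketch of lifting the protobuf cap on a decode
path (this assumes the protobuf 2.5 Java API that Hadoop ships with; where
exactly the override lands in the block-list decoding is the subject of this
issue):
{code}
import com.google.protobuf.CodedInputStream;

import java.io.InputStream;

// Sketch: build a decoder whose size limit matches ipc.maximum.data.length
// instead of protobuf's default 64 MB cap.
final class LargeDecode {
  static CodedInputStream newDecoder(InputStream in, int ipcMaximumDataLength) {
    CodedInputStream cis = CodedInputStream.newInstance(in);
    cis.setSizeLimit(ipcMaximumDataLength);
    return cis;
  }
}
{code}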





[jira] [Created] (HDFS-10311) libhdfs++: DatanodeConnection::Cancel should not delete the underlying socket

2016-04-19 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10311:
-----------------------------------

 Summary: libhdfs++: DatanodeConnection::Cancel should not delete 
the underlying socket
 Key: HDFS-10311
 URL: https://issues.apache.org/jira/browse/HDFS-10311
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


DataNodeConnectionImpl calls reset on the unique_ptr that references the
underlying asio::tcp::socket. If this happens after the continuation pipeline
checks the cancel state but before asio uses the socket, it will segfault,
because unique_ptr::reset explicitly changes its value to nullptr.

Cancel should only call shutdown() and close() on the socket but keep the
instance of it alive. The socket can probably also be turned into a member of
DataNodeConnectionImpl to get rid of the unique pointer and simplify things a
bit.





Change log level

2016-04-19 Thread Kun Ren
Hi All,

I compiled the source code and used Eclipse to debug it remotely. I want to
see the debug information in the log, so I changed the log level for some
classes. For example, I changed FsShell's log level to DEBUG (via
http://localhost:50070/logLevel) and then added the following test code in
FsShell.java:

LOG.debug("FsShell:main(), log leve=debug");
LOG.info("FsShell:main(), log leve=info");

I recompiled the code and debugged it remotely. I can see the output
"FsShell:main(), log leve=info" but not the LOG.debug line, so it looks like
the log level is still INFO. However, http://localhost:50070/logLevel shows
that the level is DEBUG. Do you know why, or how to enable debug logging?
Thanks so much for your help.

By the way, I also tried changing log4j.properties, but it is still not
working.

Best,
Kun
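
A note on a possible cause: http://localhost:50070/logLevel changes the level
inside the NameNode's JVM (the process serving that page), while FsShell runs
in a separate client JVM whose level comes from the client-side log4j
configuration. A minimal sketch of forcing DEBUG in the client JVM itself,
assuming log4j 1.x as bundled with Hadoop 2.x:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class ClientDebug {
      public static void main(String[] args) {
        // Raise the level for FsShell's logger in this (client) JVM only;
        // the daemon's /logLevel setting does not apply here.
        Logger.getLogger("org.apache.hadoop.fs.FsShell").setLevel(Level.DEBUG);
      }
    }

Equivalently, exporting HADOOP_ROOT_LOGGER=DEBUG,console before invoking the
command raises the client-side level without any code change.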


[jira] [Created] (HDFS-10310) libhdfs++: hdfsConnect needs timeout logic

2016-04-19 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10310:
-----------------------------------

 Summary: libhdfs++: hdfsConnect needs timeout logic
 Key: HDFS-10310
 URL: https://issues.apache.org/jira/browse/HDFS-10310
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


hdfsConnect will hang when it attempts to connect to a non-existent NN; right
now the client has to wait on a TCP timeout to get unstuck. Adding a
reasonable timeout to FileSystem::Connect will fix this.





[jira] [Created] (HDFS-10309) HDFS Balancer doesn't honor dfs.blocksize value defined with suffix k(kilo), m(mega), g(giga)

2016-04-19 Thread Amit Anand (JIRA)
Amit Anand created HDFS-10309:
------------------------------

 Summary: HDFS Balancer doesn't honor dfs.blocksize value defined 
with suffix k(kilo), m(mega), g(giga)
 Key: HDFS-10309
 URL: https://issues.apache.org/jira/browse/HDFS-10309
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Affects Versions: 2.8.0
Reporter: Amit Anand
Assignee: Amit Anand


While running the HDFS Balancer I get the error given below when
{{dfs.blocksize}} is defined with a suffix {{k}} (kilo), {{m}} (mega), or
{{g}} (giga). In my deployment {{dfs.blocksize}} is set to {{128m}}.

{code}
hdfs@bcpc-vm1:/home/ubuntu$ hdfs balancer
16/04/19 08:49:51 INFO balancer.Balancer: namenodes  = [hdfs://Test-Laptop]
16/04/19 08:49:51 INFO balancer.Balancer: parameters = 
Balancer.BalancerParameters [BalancingPolicy.Node, threshold = 10.0, max idle 
iteration = 5, #excluded nodes = 0, #included nodes = 0, #source 
nodes = 0, #blockpools = 0, run during upgrade = false]
16/04/19 08:49:51 INFO balancer.Balancer: included nodes = []
16/04/19 08:49:51 INFO balancer.Balancer: excluded nodes = []
16/04/19 08:49:51 INFO balancer.Balancer: source nodes = []
Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
Bytes Being Moved
16/04/19 08:49:52 INFO balancer.KeyManager: Block token params received from 
NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
16/04/19 08:49:52 INFO block.BlockTokenSecretManager: Setting block keys
16/04/19 08:49:52 INFO balancer.KeyManager: Update block keys every 2hrs, 
30mins, 0sec
16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.movedWinWidth = 540 
(default=540)
16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.moverThreads = 1000 
(default=1000)
16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 
(default=200)
16/04/19 08:49:52 INFO balancer.Balancer: 
dfs.datanode.balance.max.concurrent.moves = 5 (default=5)
16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.getBlocks.size = 
2147483648 (default=2147483648)
16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.getBlocks.min-block-size 
= 10485760 (default=10485760)
16/04/19 08:49:52 INFO block.BlockTokenSecretManager: Setting block keys
16/04/19 08:49:52 INFO balancer.Balancer: dfs.balancer.max-size-to-move = 
10737418240 (default=10737418240)
Apr 19, 2016 8:49:52 AM  Balancing took 1.408 seconds
16/04/19 08:49:52 ERROR balancer.Balancer: Exiting balancer due an exception
java.lang.NumberFormatException: For input string: "128m"
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1311)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.getLong(Balancer.java:221)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.<init>(Balancer.java:281)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:660)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer$Cli.run(Balancer.java:774)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.main(Balancer.java:903)
{code}

However, the workaround for this is to run {{hdfs balancer}} passing a
numeric value for {{dfs.blocksize}}:

{code}
hdfs balancer -Ddfs.blocksize=134217728
{code}
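
A likely direction for a fix (a sketch, not the committed patch): read the
property with {{Configuration.getLongBytes}}, which understands the size
suffixes, rather than {{Configuration.getLong}}:
{code}
import org.apache.hadoop.conf.Configuration;

// Sketch: suffix-aware lookup. getLongBytes accepts "134217728" as well as
// "128m" or "1g", while getLong throws NumberFormatException on "128m".
final class BlockSizeLookup {
  static long getBlockSize(Configuration conf) {
    return conf.getLongBytes("dfs.blocksize", 128 * 1024 * 1024L);
  }
}
{code}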






Hadoop-Hdfs-trunk - Build # 3046 - Still Failing

2016-04-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3046/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5359 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:03 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:28 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.069 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:32 h
[INFO] Finished at: 2016-04-19T16:11:33+00:00
[INFO] Final Memory: 56M/735M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild

Error Message:
expected:<25> but was:<26>

Stack Trace:
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild(TestNamenodeRetryCache.java:419)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN

Error Message:
expected:<25> but was:<26>

Stack Trace:
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN(TestRetryCacheWithHA.java:169)




Build failed in Jenkins: Hadoop-Hdfs-trunk #3046

2016-04-19 Thread Apache Jenkins Server
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3046/>

Changes:

[waltersu4549] HDFS-10291 TestShortCircuitLocalRead failing (stevel)

[waltersu4549] HDFS-10284.

[waltersu4549] HDFS-9744. TestDirectoryScanner#testThrottling occasionally time 
out

--
[...truncated 5166 lines...]
Running org.apache.hadoop.hdfs.server.mover.TestStorageMover
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 184.726 sec - 
in org.apache.hadoop.hdfs.server.mover.TestStorageMover
Running org.apache.hadoop.hdfs.server.mover.TestMover
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.188 sec - 
in org.apache.hadoop.hdfs.server.mover.TestMover
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.689 sec - in 
org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.167 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.746 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.716 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.807 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.892 sec - in 
org.apache.hadoop.hdfs.TestPipelines
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.574 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.362 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.754 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.872 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.988 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.333 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.137 sec - in 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.29 sec - in 
org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.624 sec - in 
org.apache.hadoop.hdfs.TestFSInputChecker
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.326 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.151 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.322 sec - in 
org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.843 sec - in 
org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.241 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.796 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.72 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.479 sec - in 
org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1115

2016-04-19 Thread Apache Jenkins Server
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1115/>

Changes:

[kai.zheng] HADOOP-12924. Configure raw erasure coders for supported codecs.

--
[...truncated 5853 lines...]
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.232 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.328 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.028 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.897 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.193 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.511 sec - 
in org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.507 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.112 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.493 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.303 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 127.518 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.324 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.208 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.975 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.25 sec - in 
org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.423 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM 

Hadoop-Hdfs-trunk-Java8 - Build # 1115 - Still Failing

2016-04-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1115/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6046 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:11 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  04:13 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.078 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:17 h
[INFO] Finished at: 2016-04-19T13:12:39+00:00
[INFO] Final Memory: 56M/482M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
8 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork.testDatanodeReRegistration

Error Message:
Expected invalidate blocks to be the number of DNs expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: Expected invalidate blocks to be the number of DNs 
expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork.testDatanodeReRegistration(TestComputeInvalidateWork.java:161)


FAILED:  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:683)


FAILED:  
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitCache.testDataXceiverCleansUpSlotsOnFailure

Error Message:
expected:<1> but was:<2>

Stack Trace:

Build failed in Jenkins: Hadoop-Hdfs-trunk #3045

2016-04-19 Thread Apache Jenkins Server
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3045/>

Changes:

[kai.zheng] HADOOP-12924. Configure raw erasure coders for supported codecs.

--
[...truncated 5214 lines...]
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.326 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.896 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.701 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.11 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestPipelines
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.723 sec - in 
org.apache.hadoop.hdfs.TestPipelines
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.562 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.246 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.691 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.444 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.789 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.731 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.005 sec - in 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
Running org.apache.hadoop.hdfs.TestPersistBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.749 sec - in 
org.apache.hadoop.hdfs.TestPersistBlocks
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.481 sec - in 
org.apache.hadoop.hdfs.TestFSInputChecker
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.379 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.062 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.126 sec - in 
org.apache.hadoop.fs.TestFcHdfsPermission
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 36, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 5.764 sec - in 
org.apache.hadoop.fs.TestGlobPaths
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.262 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.614 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsDisable
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.412 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.178 sec - in 
org.apache.hadoop.fs.TestUrlStreamHandler
Running org.apache.hadoop.fs.TestUnbuffer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.829 sec - in 
org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.998 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.576 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.197 sec - in 
org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 69, Failures: 0, Errors: 0, 

Hadoop-Hdfs-trunk - Build # 3045 - Still Failing

2016-04-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3045/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5407 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:27 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:36 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.067 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:41 h
[INFO] Finished at: 2016-04-19T12:36:36+00:00
[INFO] Final Memory: 56M/559M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
6 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestHFlush.testHFlushInterrupted

Error Message:
null

Stack Trace:
java.nio.channels.ClosedByInterruptException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:653)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache.testRetryCacheRebuild

Error Message:
expected:<25> but was:<26>

Stack Trace:
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 

[jira] [Created] (HDFS-10308) TestRetryCacheWithHA#testRetryCacheOnStandbyNN failing

2016-04-19 Thread Rakesh R (JIRA)
Rakesh R created HDFS-10308:
----------------------------

 Summary: TestRetryCacheWithHA#testRetryCacheOnStandbyNN failing
 Key: HDFS-10308
 URL: https://issues.apache.org/jira/browse/HDFS-10308
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Rakesh R
Assignee: Rakesh R


It's failing with the following exception:
{code}
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN(TestRetryCacheWithHA.java:169)
{code}





[jira] [Created] (HDFS-10307) Fix a bug in TestShortCircuitLocalRead

2016-04-19 Thread Li Bo (JIRA)
Li Bo created HDFS-10307:
-------------------------

 Summary: Fix a bug in TestShortCircuitLocalRead
 Key: HDFS-10307
 URL: https://issues.apache.org/jira/browse/HDFS-10307
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Li Bo
Assignee: Li Bo


Unit tests testLocalReadFallback, testLocalReadLegacy, and
testSmallFileLocalRead in TestShortCircuitLocalRead throw the following
exception:

java.lang.IndexOutOfBoundsException: Requested more bytes than destination
buffer size
at 
org.apache.hadoop.fs.FSInputStream.validatePositionedReadArgs(FSInputStream.java:107)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:975)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.checkFileContent(TestShortCircuitLocalRead.java:157)
at 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadImpl(TestShortCircuitLocalRead.java:286)
at 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadLegacy(TestShortCircuitLocalRead.java:235)
at 
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testLocalReadFallback(TestShortCircuitLocalRead.java:327)








Hadoop-Hdfs-trunk - Build # 3044 - Still Failing

2016-04-19 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3044/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5064 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:04 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:40 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.067 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:44 h
[INFO] Finished at: 2016-04-19T06:06:30+00:00
[INFO] Final Memory: 76M/681M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter4353784598703294486.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire4929042604624639230tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_3074722669995088267531tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN

Error Message:
expected:<25> but was:<26>

Stack Trace:
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN(TestRetryCacheWithHA.java:169)


FAILED:  
org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testSmallFileLocalRead

Error Message:
Requested more bytes than destination buffer size

Stack Trace:

Build failed in Jenkins: Hadoop-Hdfs-trunk #3044

2016-04-19 Thread Apache Jenkins Server
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/3044/>

Changes:

[jing9] HDFS-10306. SafeModeMonitor should not leave safe mode if name system is

--
[...truncated 4871 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.165 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 83.034 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Running org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.877 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.04 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.721 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.426 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.436 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.925 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 92.264 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
testRetryCacheOnStandbyNN(org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA)
  Time elapsed: 4.832 sec  <<< FAILURE!
java.lang.AssertionError: expected:<25> but was:<26>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testRetryCacheOnStandbyNN(TestRetryCacheWithHA.java:169)

Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.894 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.911 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.189 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.316 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.514 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Running org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.192 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.266 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.394 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Running org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.371 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Running 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.149 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAConfiguration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.927 sec - in