[ https://issues.apache.org/jira/browse/HDFS-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237735#comment-14237735 ]
Tony Reix commented on HDFS-6311:
---------------------------------
I'm still seeing this issue, now with Hadoop 2.4.1.
As shown below, the test gives different results depending on the environment.
Using an IBM JVM versus an Oracle JVM does not seem to be the root cause.
The test succeeds on Ubuntu/x86_64, but produces two different errors on RHEL7/x86_64, RHEL7/PPC64BE, and Ubuntu/PPC64LE.
Are the Jenkins tests run on Ubuntu or on RHEL7? If Ubuntu, that may explain why they succeed there; it would be worth running them in a different environment.
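For context, TestLargeBlock#testLargeBlockSize writes (and then reads back) a single file whose block size just exceeds 2 GB: 2147484160 bytes = 2^31 + 512, which is the number in the failing file name. Here is a minimal sketch of that write scenario, assuming the MiniDFSCluster API from the hadoop-hdfs test jar; this is illustrative code, not the actual test source, and the class name and buffer size are my own:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class LargeBlockWriteSketch {
    // 2 GB + 512 bytes, matching the 2147484160 in the failing file name.
    static final long BLOCK_SIZE = 2147483648L + 512;

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Single DataNode, so minReplication (=1) must be satisfied
        // by that one node.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                .numDataNodes(1).build();
        try {
            cluster.waitActive();
            FileSystem fs = cluster.getFileSystem();
            Path p = new Path("/tmp/TestLargeBlock/" + BLOCK_SIZE + ".dat");
            // create(path, overwrite, bufferSize, replication, blockSize)
            FSDataOutputStream out =
                    fs.create(p, true, 4096, (short) 1, BLOCK_SIZE);
            byte[] buf = new byte[4096];
            for (long written = 0; written < BLOCK_SIZE; written += buf.length) {
                // The RemoteException below surfaces on this write path if the
                // NameNode's chooseTarget finds no DataNode for the >2 GB block.
                out.write(buf);
            }
            out.close();
        } finally {
            cluster.shutdown();
        }
    }
}
{code}

The per-environment results with 2.4.1: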
- RHEL7/x86_64 with Oracle JVM 1.7.0_67-b01:
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.986 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 5.658 sec <<< ERROR!
org.apache.hadoop.ipc.RemoteException: File /tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
- RHEL7/x86_64 with IBM JVM 1.7:
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 7.203 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 6.611 sec <<< ERROR!
org.apache.hadoop.ipc.RemoteException: File /tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
- Ubuntu/x86_64 with Oracle JVM 1.7.0_71-b14:
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.537 sec - in org.apache.hadoop.hdfs.TestLargeBlock
- Ubuntu/x86_64 with IBM JVM 1.7:
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 155.485 sec - in org.apache.hadoop.hdfs.TestLargeBlock
- RHEL7/PPC64 with IBM JVM 1.7:
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 55.318 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 55.024 sec <<< ERROR!
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@e0e03454
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:260)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:173)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:138)
    at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:683)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:739)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:796)
- Ubuntu/PPC64LE with IBM JVM 1.7:
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 71.982 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 71.714 sec <<< ERROR!
java.io.IOException: Premature EOF reading from org.apache.hadoop.net.SocketInputStream@50d72ee2
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:260)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:173)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:138)
    at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:683)
    at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:739)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:796)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
    at java.io.DataInputStream.readFully(DataInputStream.java:207)
    at org.apache.hadoop.hdfs.TestLargeBlock.checkFullFile(TestLargeBlock.java:133)
    at org.apache.hadoop.hdfs.TestLargeBlock.runTest(TestLargeBlock.java:207)
    at org.apache.hadoop.hdfs.TestLargeBlock.testLargeBlockSize(TestLargeBlock.java:166)
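Note that on the PPC64 machines the write itself gets further, and the failure instead surfaces during the read-back verification (checkFullFile in the trace above, which goes through DataInputStream.readFully). For reference, here is a minimal sketch of that kind of verification loop; the class name, method shape, and buffer size are illustrative assumptions, not the actual TestLargeBlock source:

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadBackSketch {
    // Read the whole file back through HDFS and fail on any short read.
    static void checkFullFile(FileSystem fs, Path name, long fileSize)
            throws IOException {
        byte[] buf = new byte[4096];
        DataInputStream in = new DataInputStream(fs.open(name));
        try {
            long read = 0;
            while (read < fileSize) {
                int toRead = (int) Math.min(buf.length, fileSize - read);
                // "Premature EOF reading from SocketInputStream" propagates
                // out of this readFully if the DataNode stops streaming
                // packets before the full >2 GB block has been sent.
                in.readFully(buf, 0, toRead);
                read += toRead;
            }
        } finally {
            in.close();
        }
    }
}
{code}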
> TestLargeBlock#testLargeBlockSize : File /tmp/TestLargeBlock/2147484160.dat
> could only be replicated to 0 nodes instead of minReplication (=1)
> ----------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-6311
> URL: https://issues.apache.org/jira/browse/HDFS-6311
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 2.4.0, 2.4.1
> Environment: Virtual Box - Ubuntu 14.04 - x86_64
> Reporter: Tony Reix
>
> I'm testing HDFS 2.4.0.
> Apache Hadoop HDFS: Tests run: 2650, Failures: 2, Errors: 2, Skipped: 99
> I have the following error each time I launch my tests (3 tries).
> Forking command line: /bin/sh -c cd /home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs && /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -jar /home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter2355654085353142996.jar /home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire983005167523288650tmp /home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4328161716955453811297tmp
> Running org.apache.hadoop.hdfs.TestLargeBlock
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.011 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestLargeBlock
> testLargeBlockSize(org.apache.hadoop.hdfs.TestLargeBlock)  Time elapsed: 15.549 sec <<< ERROR!
> org.apache.hadoop.ipc.RemoteException: File /tmp/TestLargeBlock/2147484160.dat could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2684)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2008)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1410)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1363)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>     at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
>     at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)