Manno15 commented on issue #161:
URL: https://github.com/apache/accumulo-testing/issues/161#issuecomment-941207743


   I also ran into this exception while testing: 
   
   <details>
   <pre>
   2021-10-12T13:11:17,902 [hdfs.DataStreamer] WARN : Exception for BP-526297122-127.0.0.1-1634056707837:blk_1073741832_1008
   java.io.EOFException: Unexpected EOF while trying to read response from server
           at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:521) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1088) ~[hadoop-client-api-3.3.0.jar:?]
   2021-10-12T13:11:17,952 [hdfs.DFSClient] ERROR: Failed to close file: /accumulo/wal/groot+9997/96639099-bf3f-4b08-9b52-6895177584ba with inode: 16408
   java.io.EOFException: End of File Exception between local host is: "groot/127.0.0.1"; destination host is: "groot":8020; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
           at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
           at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:?]
           at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
           at java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[?:?]
           at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:837) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:791) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1566) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.Client.call(Client.java:1508) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.Client.call(Client.java:1405) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:234) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:119) ~[hadoop-client-api-3.3.0.jar:?]
           at com.sun.proxy.$Proxy33.complete(Unknown Source) ~[?:?]
           at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:570) ~[hadoop-client-api-3.3.0.jar:?]
           at jdk.internal.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) ~[?:?]
           at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
           at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) ~[hadoop-client-api-3.3.0.jar:?]
           at com.sun.proxy.$Proxy34.complete(Unknown Source) ~[?:?]
           at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:957) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:914) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:897) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:852) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:633) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:670) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1497) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:3546) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:3563) ~[hadoop-client-api-3.3.0.jar:?]
           at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
           at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
           at java.lang.Thread.run(Thread.java:829) [?:?]
   Caused by: java.io.EOFException
           at java.io.DataInputStream.readInt(DataInputStream.java:397) ~[?:?]
           at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1881) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1191) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1087) ~[hadoop-client-api-3.3.0.jar:?]
   2021-10-12T13:11:17,953 [hdfs.DFSClient] ERROR: Failed to close file: /accumulo/wal/groot+9997/b1288f03-b5e1-47fb-a4eb-5623e5b161b1 with inode: 16409
   java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:9866,DS-8435dc9c-94f0-4d60-b362-817ae5f2c38d,DISK]] are bad. Aborting...
           at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1561) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1495) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256) ~[hadoop-client-api-3.3.0.jar:?]
           at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667) ~[hadoop-client-api-3.3.0.jar:?]
   
   
   </pre>
   </details>
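
   Judging by the `FileSystem$Cache$ClientFinalizer` frames, the close is happening inside the Hadoop client's JVM shutdown hook: the WAL output streams are still open when the process exits, so the final `complete()` RPC and the pipeline recovery race against the cluster tearing itself down (the EOF against `"groot":8020` suggests the namenode was already gone as well, since 8020 is the default namenode RPC port). Below is a minimal sketch of the "All datanodes ... are bad" variant of this failure, assuming the `hadoop-hdfs` test artifact (for `MiniDFSCluster`) is on the classpath; the class name and file path are made up for illustration:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataOutputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.hdfs.MiniDFSCluster;

   public class WalCloseRepro {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Single-datanode mini cluster, similar to what the testing setup runs locally.
       MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
       try {
         FileSystem fs = cluster.getFileSystem();
         FSDataOutputStream out = fs.create(new Path("/wal-like-file"));
         out.write(new byte[] {1, 2, 3});
         out.hflush();            // block allocated, pipeline open, file not yet closed
         cluster.stopDataNode(0); // take away the only datanode in the pipeline
         out.close();             // expected to fail: "All datanodes [...] are bad. Aborting..."
       } finally {
         cluster.shutdown();
       }
     }
   }
   ```

   The same pattern should apply to any stream left open at shutdown: once the only datanode in the pipeline disappears before `close()` runs, recovery has no replacement node to fall back on and the client aborts the write.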
   

