[ https://issues.apache.org/jira/browse/HBASE-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12931917#action_12931917 ]

Jonathan Gray commented on HBASE-3234:
--------------------------------------

This is a snippet from the attached log that seems to show the first failure...

{noformat}
2010-11-14 05:00:08,938 DEBUG [DataStreamer for file 
/user/stack/.logs/pynchon-432.lan,63324,1289739567228/192.168.1.69%3A63324.1289739568182
 block blk_-4366233055961732763_1007] 
hdfs.DFSClient$DFSOutputStream$DataStreamer(2429): DataStreamer block 
blk_-4366233055961732763_1007 wrote packet seqno:1 size:795 offsetInBlock:0 
lastPacketInBlock:false
2010-11-14 05:00:08,938 DEBUG 
[org.apache.hadoop.hdfs.server.datanode.dataxcei...@54cee271] 
datanode.BlockReceiver(393): Receiving one packet for block 
blk_-4366233055961732763_1007 of length 774 seqno 1 offsetInBlock 0 
lastPacketInBlock false
2010-11-14 05:00:08,938 DEBUG 
[org.apache.hadoop.hdfs.server.datanode.dataxcei...@54cee271] 
datanode.BlockReceiver$PacketResponder(737): PacketResponder 0 adding seqno 1 
to ack queue.
2010-11-14 05:00:08,938 DEBUG [PacketResponder 0 for Block 
blk_-4366233055961732763_1007] datanode.BlockReceiver$PacketResponder(891): 
PacketResponder 0 for block blk_-4366233055961732763_1007 responded an ack: 
Replies for seqno 1 are SUCCESS
2010-11-14 05:00:08,938 DEBUG [PacketResponder 0 for Block 
blk_-4366233055961732763_1007] datanode.BlockReceiver$PacketResponder(789): 
PacketResponder 0 seqno = -2 for block blk_-4366233055961732763_1007 waiting 
for local datanode to finish write.
2010-11-14 05:00:08,938 DEBUG [ResponseProcessor for block 
blk_-4366233055961732763_1007] 
hdfs.DFSClient$DFSOutputStream$ResponseProcessor(2534): DFSClient Replies for 
seqno 0 are FAILED
2010-11-14 05:00:08,939 WARN  [ResponseProcessor for block 
blk_-4366233055961732763_1007] 
hdfs.DFSClient$DFSOutputStream$ResponseProcessor(2580): DFSOutputStream 
ResponseProcessor exception  for block 
blk_-4366233055961732763_1007java.io.IOException: Bad response 1 for block 
blk_-4366233055961732763_1007 from datanode 127.0.0.1:63316
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2542)

2010-11-14 05:00:08,939 WARN  [DataStreamer for file 
/user/stack/.logs/pynchon-432.lan,63324,1289739567228/192.168.1.69%3A63324.1289739568182
 block blk_-4366233055961732763_1007] hdfs.DFSClient$DFSOutputStream(2616): 
Error Recovery for block blk_-4366233055961732763_1007 bad datanode[0] 
127.0.0.1:63316
2010-11-14 05:00:08,941 INFO  
[org.apache.hadoop.hdfs.server.datanode.dataxcei...@54cee271] 
datanode.BlockReceiver(565): Exception in receiveBlock for block 
blk_-4366233055961732763_1007 java.io.EOFException: while trying to read 795 
bytes
2010-11-14 05:00:08,941 INFO  [PacketResponder 0 for Block 
blk_-4366233055961732763_1007] datanode.BlockReceiver$PacketResponder(844): 
PacketResponder blk_-4366233055961732763_1007 0 : Thread is interrupted.
2010-11-14 05:00:08,941 INFO  [PacketResponder 0 for Block 
blk_-4366233055961732763_1007] datanode.BlockReceiver$PacketResponder(907): 
PacketResponder 0 for block blk_-4366233055961732763_1007 terminating
2010-11-14 05:00:08,941 WARN  
[RegionServer:0;pynchon-432.lan,63324,1289739567228.logSyncer] 
hdfs.DFSClient$DFSOutputStream(3293): Error while syncing
java.io.IOException: All datanodes 127.0.0.1:63316 are bad. Aborting...
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2666)
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:2157)
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2356)
2010-11-14 05:00:08,942 INFO  
[org.apache.hadoop.hdfs.server.datanode.dataxcei...@54cee271] 
datanode.DataXceiver(377): writeBlock blk_-4366233055961732763_1007 received 
exception java.io.EOFException: while trying to read 795 bytes
2010-11-14 05:00:08,943 FATAL 
[RegionServer:0;pynchon-432.lan,63324,1289739567228.logSyncer] wal.HLog(1083): 
Could not append. Requesting close of hlog
java.io.IOException: Reflection
        at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:147)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.hflush(HLog.java:1059)
        at 
org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:983)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:145)
        ... 2 more
Caused by: java.io.IOException: All datanodes 127.0.0.1:63316 are bad. 
Aborting...
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2666)
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:2157)
        at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2356)
{noformat}
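Reading the trace: the PacketResponder acks seqno 1 as SUCCESS, but the client-side ResponseProcessor reports "Replies for seqno 0 are FAILED" and a "Bad response 1" from 127.0.0.1:63316, so the streamer marks the only datanode bad and the logSyncer's hflush dies with "All datanodes ... are bad". The "IOException: Reflection" at the top of the HBase trace is just the WAL writer's reflective sync call surfacing that DFS failure. For anyone puzzling over that part of the trace, here is a rough sketch of the reflective sync path; the method name syncFs and the "Reflection" wrapper message are inferred from the stack trace above, not copied from SequenceFileLogWriter:

{code:java}
// Rough sketch, not the actual SequenceFileLogWriter code.  syncFs() only
// exists on append-capable (0.20-append) Hadoop jars, which is why the WAL
// syncs through reflection in the first place.
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

import org.apache.hadoop.io.SequenceFile;

class ReflectiveWalSync {
  private final SequenceFile.Writer writer;
  private final Method syncFs;   // null when the jar has no syncFs()

  ReflectiveWalSync(SequenceFile.Writer writer) {
    this.writer = writer;
    Method m = null;
    try {
      m = writer.getClass().getMethod("syncFs");
    } catch (NoSuchMethodException e) {
      // plain 0.20 jar without append support; nothing to call
    }
    this.syncFs = m;
  }

  void sync() throws IOException {
    if (syncFs == null) {
      return;
    }
    try {
      syncFs.invoke(writer);
    } catch (InvocationTargetException e) {
      // Any DFS-side failure ("All datanodes ... are bad") pops out here,
      // which is why the HLog trace starts with "IOException: Reflection".
      IOException ioe = new IOException("Reflection");
      ioe.initCause(e.getCause());
      throw ioe;
    } catch (Exception e) {
      IOException ioe = new IOException("Reflection");
      ioe.initCause(e);
      throw ioe;
    }
  }
}
{code}

So the WAL-side exception is only a symptom; the interesting part is why the ack pipeline seems to get out of step once hdfs-724 is in.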

> hdfs-724 "breaks" TestHBaseTestingUtility multiClusters
> -------------------------------------------------------
>
>                 Key: HBASE-3234
>                 URL: https://issues.apache.org/jira/browse/HBASE-3234
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>            Priority: Critical
>             Fix For: 0.90.0
>
>         Attachments: 
> org.apache.hadoop.hbase.TestHBaseTestingUtility-output.txt, 
> org.apache.hadoop.hbase.TestHBaseTestingUtility.txt
>
>
> We upgraded our hadoop jar in TRUNK to the latest on the 0.20-append branch.  
> TestHBaseTestingUtility started failing reliably.  If I back out hdfs-724, 
> the test passes again.  This issue is about figuring out what's up here.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
