Ted Yu created HBASE-16349:
------------------------------
Summary: TestClusterId may hang during cluster shutdown
Key: HBASE-16349
URL: https://issues.apache.org/jira/browse/HBASE-16349
Project: HBase
Issue Type: Bug
Reporter: Ted Yu
Priority: Minor
While running TestClusterId on branch-1, I observed the test hang during tearDown().
{code}
2016-08-03 11:36:39,600 DEBUG [RS_CLOSE_META-cn012:49371-0] regionserver.HRegion(1415): Closing hbase:meta,,1.1588230740: disabling compactions & flushes
2016-08-03 11:36:39,600 DEBUG [RS_CLOSE_META-cn012:49371-0] regionserver.HRegion(1442): Updates disabled for region hbase:meta,,1.1588230740
2016-08-03 11:36:39,600 INFO  [RS_CLOSE_META-cn012:49371-0] regionserver.HRegion(2253): Flushing 1/1 column families, memstore=232 B
2016-08-03 11:36:39,601 WARN  [RS_OPEN_META-cn012:49371-0.append-pool17-t1] wal.FSHLog$RingBufferEventHandler(1900): Append sequenceId=8, requesting roll of WAL
java.io.IOException: All datanodes DatanodeInfoWithStorage[127.0.0.1:37765,DS-9870993e-fb98-45fc-b151-708f72aa02d2,DISK] are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1113)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
2016-08-03 11:36:39,602 FATAL [RS_CLOSE_META-cn012:49371-0] regionserver.HRegionServer(2085): ABORTING region server cn012.l42scl.hortonworks.com,49371,1470249187586: Unrecoverable exception while closing region hbase:meta,,1.1588230740, still finishing close
org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append sequenceId=8, requesting roll of WAL
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1902)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1754)
	at org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1676)
	at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: All datanodes DatanodeInfoWithStorage[127.0.0.1:37765,DS-9870993e-fb98-45fc-b151-708f72aa02d2,DISK] are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1113)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
2016-08-03 11:36:39,603 FATAL [RS_CLOSE_META-cn012:49371-0] regionserver.HRegionServer(2093): RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
{code}
Since the abort left the meta region close unfinished, the region server thread stayed stuck in waitOnAllRegionsToClose(), so rst.join() hung:
{code}
"RS:0;cn012:49371" #648 prio=5 os_prio=0 tid=0x00007fdab24b5000 nid=0x621a waiting on condition [0x00007fd785fe0000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1326)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.waitOnAllRegionsToClose(HRegionServer.java:1312)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1082)
{code}
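The hang pattern can be illustrated with a minimal, self-contained Java sketch (this is NOT HBase code: WaitOnCloseSketch, onlineRegions and aborted are hypothetical stand-ins for the region server's state). If a close-wait loop only watches the online-region count, an abort that kills the close handler leaves the count stuck above zero and the loop sleeps forever; also checking the abort flag lets the thread exit:
{code}
import java.util.concurrent.atomic.AtomicInteger;

public class WaitOnCloseSketch {
    // Stand-in for the number of regions still open on the server.
    static final AtomicInteger onlineRegions = new AtomicInteger(1);
    // Stand-in for the server's abort flag.
    static volatile boolean aborted = false;

    // Simplified shape of a waitOnAllRegionsToClose()-style loop.
    // Without the abort check, this never returns once a close
    // handler has died and the count can no longer reach zero.
    static void waitOnAllRegionsToClose() throws InterruptedException {
        while (onlineRegions.get() > 0) {
            if (aborted) {
                return; // bail out: regions will never finish closing
            }
            Thread.sleep(10);
        }
    }

    public static void main(String[] args) throws Exception {
        aborted = true;            // simulate the FATAL abort in the log
        waitOnAllRegionsToClose(); // returns instead of hanging
        System.out.println("exited wait loop");
    }
}
{code}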
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)