[jira] [Updated] (HBASE-16349) TestClusterId may hang during cluster shutdown

2016-09-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16349:
---
    Resolution: Fixed
      Assignee: Ted Yu
  Hadoop Flags: Reviewed
 Fix Version/s: 1.4.0
                2.0.0
        Status: Resolved  (was: Patch Available)

The test passed in recent builds.

Will reopen if it fails in shutdown.

Thanks for the review, Heng.

> TestClusterId may hang during cluster shutdown
> --
>
> Key: HBASE-16349
> URL: https://issues.apache.org/jira/browse/HBASE-16349
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16349.branch-1.v1.txt, 16349.v1.txt
>
>
> I was running TestClusterId on branch-1 and observed the test hang during 
> tearDown().
> {code}
> 2016-08-03 11:36:39,600 DEBUG [RS_CLOSE_META-cn012:49371-0] 
> regionserver.HRegion(1415): Closing hbase:meta,,1.1588230740: disabling 
> compactions & flushes
> 2016-08-03 11:36:39,600 DEBUG [RS_CLOSE_META-cn012:49371-0] 
> regionserver.HRegion(1442): Updates disabled for region 
> hbase:meta,,1.1588230740
> 2016-08-03 11:36:39,600 INFO  [RS_CLOSE_META-cn012:49371-0] 
> regionserver.HRegion(2253): Flushing 1/1 column families, memstore=232 B
> 2016-08-03 11:36:39,601 WARN  [RS_OPEN_META-cn012:49371-0.append-pool17-t1] 
> wal.FSHLog$RingBufferEventHandler(1900): Append sequenceId=8, requesting roll 
> of WAL
> java.io.IOException: All datanodes 
> DatanodeInfoWithStorage[127.0.0.1:37765,DS-9870993e-fb98-45fc-b151-708f72aa02d2,DISK]
>  are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1113)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
> 2016-08-03 11:36:39,602 FATAL [RS_CLOSE_META-cn012:49371-0] 
> regionserver.HRegionServer(2085): ABORTING region server 
> cn012.l42scl.hortonworks.com,49371,1470249187586: Unrecoverable   
> exception while closing region hbase:meta,,1.1588230740, still finishing close
> org.apache.hadoop.hbase.regionserver.wal.DamagedWALException: Append 
> sequenceId=8, requesting roll of WAL
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.append(FSHLog.java:1902)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1754)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog$RingBufferEventHandler.onEvent(FSHLog.java:1676)
>   at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: All datanodes 
> DatanodeInfoWithStorage[127.0.0.1:37765,DS-9870993e-fb98-45fc-b151-708f72aa02d2,DISK]
>  are bad. Aborting...
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1113)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:876)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:402)
> 2016-08-03 11:36:39,603 FATAL [RS_CLOSE_META-cn012:49371-0] 
> regionserver.HRegionServer(2093): RegionServer abort: loaded coprocessors 
> are: [org.apache.hadoop.hbase.coprocessor.   MultiRowMutationEndpoint]
> {code}
> This led to rst.join() hanging:
> {code}
> "RS:0;cn012:49371" #648 prio=5 os_prio=0 tid=0x7fdab24b5000 nid=0x621a 
> waiting on condition [0x7fd785fe]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.sleep(HRegionServer.java:1326)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.waitOnAllRegionsToClose(HRegionServer.java:1312)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1082)
> {code}
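
For context, below is a simplified, self-contained sketch of the hang pattern described above and of a bounded-join guard in tearDown(). It is only an illustration under assumed names (StuckRegionServerThread, forceStop()); it is not the actual HBASE-16349 patch, which is in the attached files.

{code}
// Simplified illustration of the hang: a "region server" thread stuck in a
// waitOnAllRegionsToClose()-style sleep loop keeps an unbounded rst.join()
// from ever returning. A bounded join plus an explicit stop avoids the hang.
// NOT the actual patch; class and method names here are hypothetical.
public class BoundedShutdownSketch {

  /** Stand-in for a region server thread whose region close never completes. */
  static class StuckRegionServerThread extends Thread {
    private volatile boolean stopped = false;

    @Override
    public void run() {
      // Mimics the sleep-poll loop in HRegionServer.waitOnAllRegionsToClose().
      while (!stopped) {
        try {
          Thread.sleep(200);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }

    void forceStop() {
      stopped = true;
      interrupt();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    StuckRegionServerThread rst = new StuckRegionServerThread();
    rst.start();

    // tearDown-style shutdown: join with a timeout instead of a bare rst.join(),
    // then force the thread down if it is still alive.
    rst.join(5000L);
    if (rst.isAlive()) {
      System.out.println("Region server thread did not exit in time; forcing stop");
      rst.forceStop();
      rst.join();
    }
    System.out.println("Shutdown completed without hanging");
  }
}
{code}

The idea is simply that any join on a possibly-aborted region server thread should be time-bounded, so a stuck close of hbase:meta cannot wedge the whole test shutdown.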





[jira] [Updated] (HBASE-16349) TestClusterId may hang during cluster shutdown

2016-09-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16349:
---
Issue Type: Test  (was: Bug)

> TestClusterId may hang during cluster shutdown
> --
>
> Key: HBASE-16349
> URL: https://issues.apache.org/jira/browse/HBASE-16349
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16349.branch-1.v1.txt, 16349.v1.txt
>
>





[jira] [Updated] (HBASE-16349) TestClusterId may hang during cluster shutdown

2016-09-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16349:
---
Attachment: 16349.v1.txt

> TestClusterId may hang during cluster shutdown
> --
>
> Key: HBASE-16349
> URL: https://issues.apache.org/jira/browse/HBASE-16349
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 16349.branch-1.v1.txt, 16349.v1.txt
>
>





[jira] [Updated] (HBASE-16349) TestClusterId may hang during cluster shutdown

2016-08-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16349:
---
Status: Patch Available  (was: Open)

> TestClusterId may hang during cluster shutdown
> --
>
> Key: HBASE-16349
> URL: https://issues.apache.org/jira/browse/HBASE-16349
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 16349.branch-1.v1.txt
>
>





[jira] [Updated] (HBASE-16349) TestClusterId may hang during cluster shutdown

2016-08-03 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16349:
---
Attachment: 16349.branch-1.v1.txt

> TestClusterId may hang during cluster shutdown
> --
>
> Key: HBASE-16349
> URL: https://issues.apache.org/jira/browse/HBASE-16349
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: 16349.branch-1.v1.txt
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)