[ https://issues.apache.org/jira/browse/HDDS-8797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17730754#comment-17730754 ]

Wei-Chiu Chuang commented on HDDS-8797:
---------------------------------------

This one looks like HDDS-7931.

> [hsync] HBase RegionServer input stream not shut down properly
> --------------------------------------------------------------
>
>                 Key: HDDS-8797
>                 URL: https://issues.apache.org/jira/browse/HDDS-8797
>             Project: Apache Ozone
>          Issue Type: Sub-task
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>
> Getting this error when the HBase RegionServer shuts down (it shut down for 
> unrelated reasons). It appears the shutdown is not done properly.
> {noformat}
> 2023-06-08 11:00:40,942 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping HBase metrics 
> system...
> 2023-06-08 11:00:40,943 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: HBase metrics system 
> stopped.
> 2023-06-08 11:00:41,444 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: 
> Loaded properties from hadoop-metrics2.properties
> 2023-06-08 11:00:41,444 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 2023-06-08 11:00:41,444 INFO 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: HBase metrics system 
> started
> 2023-06-08 11:00:43,764 ERROR 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper: 
> *~*~*~ Channel ManagedChannelImpl{logId=5419, target=172.27.12.147:9859} was 
> not shutdown properly!!! ~*~*~*
>     Make sure to call shutdown()/shutdownNow() and wait until 
> awaitTermination() returns true.
> java.lang.RuntimeException: ManagedChannel allocation site
>         at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:93)
>         at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
>         at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
>         at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelImplBuilder.build(ManagedChannelImplBuilder.java:630)
>         at 
> org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:297)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.connectToDatanode(XceiverClientGrpc.java:188)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientGrpc.connect(XceiverClientGrpc.java:158)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientManager$2.call(XceiverClientManager.java:243)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientManager$2.call(XceiverClientManager.java:224)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
>         at 
> org.apache.hadoop.ozone.shaded.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.getClient(XceiverClientManager.java:224)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:168)
>         at 
> org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClientForReadData(XceiverClientManager.java:157)
>         at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.acquireClient(BlockInputStream.java:284)
>         at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.getChunkInfos(BlockInputStream.java:237)
>         at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.initialize(BlockInputStream.java:145)
>         at 
> org.apache.hadoop.hdds.scm.storage.BlockInputStream.readWithStrategy(BlockInputStream.java:307)
>         at 
> org.apache.hadoop.hdds.scm.storage.ExtendedInputStream.read(ExtendedInputStream.java:56)
>         at 
> org.apache.hadoop.hdds.scm.storage.ByteArrayReader.readFromBlock(ByteArrayReader.java:57)
>         at 
> org.apache.hadoop.hdds.scm.storage.MultipartInputStream.readWithStrategy(MultipartInputStream.java:96)
>         at 
> org.apache.hadoop.hdds.scm.storage.ExtendedInputStream.read(ExtendedInputStream.java:56)
>         at 
> org.apache.hadoop.fs.ozone.OzoneFSInputStream.read(OzoneFSInputStream.java:64)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at 
> org.apache.hadoop.hbase.io.FileLink$FileLinkInputStream.read(FileLink.java:134)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
>         at 
> org.apache.hadoop.hbase.io.util.BlockIOUtils.readFully(BlockIOUtils.java:61)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1443)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1661)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1490)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl$1.nextBlock(HFileBlock.java:1385)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl$1.nextBlockWithBlockType(HFileBlock.java:1398)
>         at 
> org.apache.hadoop.hbase.io.hfile.HFileInfo.initMetaAndIndex(HFileInfo.java:368)
>         at 
> org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:367)
>         at 
> org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:484)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreEngine.createStoreFileAndReader(StoreEngine.java:232)
>         at 
> org.apache.hadoop.hbase.regionserver.StoreEngine.lambda$openStoreFiles$0(StoreEngine.java:270)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
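The gRPC warning quoted above asks callers to invoke shutdown()/shutdownNow() and wait until awaitTermination() returns true before dropping the channel. As a rough sketch of that contract: ManagedChannel deliberately mirrors the shutdown API of java.util.concurrent.ExecutorService, so the same idiom can be illustrated with a plain ExecutorService (the class and method names below beyond the JDK's are illustrative, not from the Ozone code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderlyShutdown {

    // Orderly-shutdown idiom the gRPC orphan-channel check expects:
    // shutdown() first, then wait; escalate to shutdownNow() only if
    // the graceful wait times out, and wait again for termination.
    static boolean shutdownGracefully(ExecutorService svc, long timeoutMs)
            throws InterruptedException {
        svc.shutdown();                                   // stop accepting new work
        if (!svc.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS)) {
            svc.shutdownNow();                            // cancel in-flight work
            return svc.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS);
        }
        return true;                                      // terminated cleanly
    }

    public static void main(String[] args) throws Exception {
        ExecutorService svc = Executors.newFixedThreadPool(2);
        svc.submit(() -> { });                            // some short-lived task
        System.out.println("terminated=" + shutdownGracefully(svc, 1000));
    }
}
```

In the stack trace above the channel is created inside XceiverClientGrpc and cached by XceiverClientManager, so the fix presumably belongs in whatever close path releases those cached clients, not in HBase itself.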



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
