[ https://issues.apache.org/jira/browse/HDFS-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16411159#comment-16411159 ]

liaoyuxiangqin commented on HDFS-13339:
---------------------------------------

The analysis process is as follows:
 First, I captured all thread stacks of the unit-test process with jstack -l <pid>, and found that the block cannot reach 2 replicas because one receiving DataNode is blocked in the BlockReceiver constructor. Its stack is as follows:
{quote}"DataXceiver for client at /127.0.0.1:44657 [Receiving block 
BP-622300327-1521599250826:blk_1073741828_1004]" #274 daemon prio=5 os_prio=0 
tid=0x00007f2ee412e800 nid=0x57af waiting on condition [0x00007f2ed47df000]
 java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for <0x00000000e0e16bc8> (a 
java.util.concurrent.locks.ReentrantLock$FairSync)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
 at 
java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:224)
 at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
 at org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:100)
 at org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1579)
 at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:213)
 at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1286)
 at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:754)
 at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
 at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
 at java.lang.Thread.run(Thread.java:748)

Locked ownable synchronizers:
 - None{quote}
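For context, FsDatasetImpl serializes block-creating operations such as createTemporary behind a single dataset lock (an InstrumentedLock wrapped in an AutoCloseableLock), which is exactly where this thread parks. A minimal sketch of that locking pattern, with illustrative names rather than the actual Hadoop code:
{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of the AutoCloseableLock pattern guarding FsDatasetImpl
// operations; names are illustrative, not the actual Hadoop implementation.
class AutoCloseableLockSketch implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock(true); // fair, like InstrumentedLock

  AutoCloseableLockSketch acquire() {
    lock.lock();          // the blocked DataXceiver (#274) is parked in a call like this
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }
}

class DatasetSketch {
  private final AutoCloseableLockSketch datasetLock = new AutoCloseableLockSketch();

  void createTemporary() {
    // Every replica-creating operation serializes on the single dataset lock.
    try (AutoCloseableLockSketch l = datasetLock.acquire()) {
      // ... allocate the temporary replica while holding the lock ...
    }
  }
}
{code}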
From this stack we can see the DataXceiver is parked waiting to acquire the dataset lock <0x00000000e0e16bc8>, and another thread is holding that lock. Its stack is as follows:
{quote}"DataXceiver for client DFSClient_NONMAPREDUCE_-1762976941_11 at 
/127.0.0.1:44649 [Receiving block 
BP-622300327-1521599250826:blk_1073741825_1001]" #257 daemon prio=5 os_prio=0 
tid=0x00007f2ee4adc800 nid=0x56c4 waiting on condition [0x00007f2ed3cd6000]
 java.lang.Thread.State: TIMED_WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for <0x00000000e0e13a58> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.waitVolumeRemoved(FsVolumeList.java:273)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.handleVolumeFailures(FsVolumeList.java:257)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.handleVolumeFailures(FsDatasetImpl.java:2225)
 at 
org.apache.hadoop.hdfs.server.datanode.DataNode.handleVolumeFailures(DataNode.java:3360)
 at 
org.apache.hadoop.hdfs.server.datanode.DataNode.lambda$checkDiskErrorAsync$0(DataNode.java:2145)
 at 
org.apache.hadoop.hdfs.server.datanode.DataNode$$Lambda$35/1680319657.call(Unknown
 Source)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.invokeCallback(DatasetVolumeChecker.java:393)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.cleanup(DatasetVolumeChecker.java:386)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onFailure(DatasetVolumeChecker.java:369)
 at com.google.common.util.concurrent.Futures$6.run(Futures.java:977)
 at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.AbstractFuture.executeListener(AbstractFuture.java:992)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.AbstractFuture.addListener(AbstractFuture.java:695)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.AbstractFuture$TrustedFuture.addListener(AbstractFuture.java:108)
 at com.google.common.util.concurrent.Futures.addCallback(Futures.java:985)
 at com.google.common.util.concurrent.Futures.addCallback(Futures.java:915)
 at 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker.checkVolume(DatasetVolumeChecker.java:293)
 at 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskErrorAsync(DataNode.java:2136)
 at 
org.apache.hadoop.hdfs.server.datanode.FileIoProvider.onFailure(FileIoProvider.java:1064)
 at 
org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:306)
 at 
org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:308)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:936)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbw(FsVolumeImpl.java:1216)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1395)
 at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:218)
 at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1286)
 at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:754)
 at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
 at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
 at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
 at java.lang.Thread.run(Thread.java:748)

Locked ownable synchronizers:
 - <0x00000000e0e16bc8> (a 
java.util.concurrent.locks.ReentrantLock$FairSync){quote}
From the above stack we can see that this thread is blocked in waitVolumeRemoved, and from reading the related code, the reason is that one volume's reference count cannot drop to 0, so that volume cannot be removed.
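In other words, handleVolumeFailures removes the failed volume and then waits for its reference count to drop to 0 before it returns. A simplified sketch of that wait-for-references pattern (illustrative names only; the real logic lives in FsVolumeList.waitVolumeRemoved and the volume reference counting):
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified model of "wait until the failed volume's reference count reaches 0".
// Names and structure are illustrative, not the actual FsVolumeList code.
class VolumeRemovalSketch {
  private final AtomicInteger volumeReferences = new AtomicInteger();
  private final ReentrantLock removalLock = new ReentrantLock();
  private final Condition volumeRemoved = removalLock.newCondition();

  void reference() {
    volumeReferences.incrementAndGet();      // e.g. taken by createRbw on the write path
  }

  void unreference() {
    if (volumeReferences.decrementAndGet() == 0) {
      removalLock.lock();
      try {
        volumeRemoved.signalAll();           // last holder lets the removal proceed
      } finally {
        removalLock.unlock();
      }
    }
  }

  // Called from the volume-failure handler; blocks until nobody references the volume.
  void waitVolumeRemoved(long sleepMillis) throws InterruptedException {
    removalLock.lock();
    try {
      while (volumeReferences.get() > 0) {
        // Thread #257 above is parked in an await() like this one.
        volumeRemoved.await(sleepMillis, TimeUnit.MILLISECONDS);
      }
    } finally {
      removalLock.unlock();
    }
  }
}
{code}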

Next I enabled debug logging in the test and the related code, and found that the difference between a passing run and a failing run is the thread on which the volume check result handler executes. Details are as follows:
{quote}success:[DataNode DiskChecker thread 0] WARN 
checker.DatasetVolumeChecker (DatasetVolumeChecker.java:onFailure(366)) - 
Exception running disk checks against volume target/test/data/dfs/data/data1
{quote}
{quote}fails:[DataXceiver for client DFSClient_NONMAPREDUCE_-1928894852_12 at 
/127.0.0.1:57244 [Receiving block 
BP-2059472895-1521660395535:blk_1073741826_1002]] WARN 
checker.DatasetVolumeChecker (DatasetVolumeChecker.java:onFailure(366)) - 
Exception running disk checks against volume /target/test/data/dfs/data/data1
{quote}
Why does the thread that handles the volume check result differ between runs? HDFS uses MoreExecutors.sameThreadExecutor() for addCallback, and because the speed of the DataNode DiskChecker thread varies from run to run, the volume check result is handled sometimes by the DataXceiver thread and sometimes by the DataNode DiskChecker thread.
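With a same-thread executor the callback runs on whichever thread triggers it: if the future is already done when addCallback is called, the callback executes on the calling thread (here the DataXceiver); otherwise it executes on the thread that completes the future (the DataNode DiskChecker thread). A hedged sketch of that behavior using Guava (only the Guava types are real; the rest is illustrative, not the DatasetVolumeChecker code):
{code:java}
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

// Illustrative only: shows why the callback thread is nondeterministic when a
// same-thread executor is used. This is not the DatasetVolumeChecker code.
public class SameThreadCallbackSketch {
  public static void main(String[] args) {
    ListeningExecutorService diskChecker =
        MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());

    Callable<Boolean> check = () -> {
      Thread.sleep(10);   // simulated disk check; its speed decides the race
      return false;       // volume reported as failed
    };
    ListenableFuture<Boolean> checkResult = diskChecker.submit(check);

    // The two-argument addCallback runs the callback on a same-thread executor:
    // on this (caller) thread if checkResult is already done, otherwise on the
    // diskChecker thread that completes it.
    Futures.addCallback(checkResult, new FutureCallback<Boolean>() {
      @Override public void onSuccess(Boolean healthy) {
        System.out.println("handled on: " + Thread.currentThread().getName());
      }
      @Override public void onFailure(Throwable t) {
        System.out.println("handled on: " + Thread.currentThread().getName());
      }
    });

    diskChecker.shutdown();
  }
}
{code}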

Finally, I traced the volume's reference count increments and decrements, and found that the DataXceiver itself has incremented the count on this volume by 1, yet while handling the volume check result it must wait for every reference on the volume to be released. That is why the reference count can never drop to 0: whenever the DataXceiver thread is the one handling the volume check result, it deadlocks against itself.
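In other words, the count the DataXceiver is waiting on includes its own +1, so it can never reach 0. A minimal, hypothetical demonstration of that self-deadlock, reusing the illustrative VolumeRemovalSketch above (not the Hadoop implementation):
{code:java}
// Hypothetical demonstration of the self-deadlock: the thread that still holds a
// reference on the volume is the one waiting for the reference count to hit 0.
public class SelfDeadlockSketch {
  public static void main(String[] args) throws InterruptedException {
    VolumeRemovalSketch volume = new VolumeRemovalSketch();

    // DataXceiver: takes a reference on the volume before creating the rbw file.
    volume.reference();

    // The disk check fails and, because of the same-thread executor, the failure
    // handler runs on this very thread. It waits for the count to reach 0, but
    // the matching unreference() below can only run after this call returns,
    // so the wait never finishes.
    volume.waitVolumeRemoved(100);

    volume.unreference();   // unreachable in practice: the write never completes
  }
}
{code}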

So I think the solution to this issue is to use a dedicated thread pool for addCallback.
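For illustration, Guava's three-argument Futures.addCallback overload accepts an explicit executor, so the result handler never runs on a DataXceiver that still holds a volume reference. A hedged sketch of that direction (the pool name and wrapper are made up; the actual patch may wire this differently):
{code:java}
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the proposed direction: hand the volume-check callback to its own
// thread pool. Names here are illustrative, not necessarily what the patch uses.
class DedicatedCallbackExecutorSketch {
  private final ExecutorService callbackPool = Executors.newSingleThreadExecutor();

  <T> void addCallbackOnPool(ListenableFuture<T> future, FutureCallback<T> handler) {
    // The third argument keeps the callback off the caller's thread.
    Futures.addCallback(future, handler, callbackPool);
  }
}
{code}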
Thanks!

> TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart 
> blocks on waitReplication
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-13339
>                 URL: https://issues.apache.org/jira/browse/HDFS-13339
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>         Environment: os: Linux 2.6.32-358.el6.x86_64
> hadoop version: hadoop-3.2.0-SNAPSHOT
> unit: mvn test -Pnative 
> -Dtest=TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart
>            Reporter: liaoyuxiangqin
>            Assignee: liaoyuxiangqin
>            Priority: Critical
>              Labels: DataNode, volumes
>         Attachments: HDFS-13339.001.patch
>
>
> When I execute the unit test
>  TestDataNodeVolumeFailureReporting#testVolFailureStatsPreservedOnNNRestart, 
> the process blocks on waitReplication. Details are as follows:
> [INFO] -------------------------------------------------------
>  [INFO] T E S T S
>  [INFO] -------------------------------------------------------
>  [INFO] Running 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 307.492 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
>  [ERROR] 
> testVolFailureStatsPreservedOnNNRestart(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting)
>  Time elapsed: 307.206 s <<< ERROR!
>  java.util.concurrent.TimeoutException: Timed out waiting for /test1 to reach 
> 2 replicas
>  at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:800)
>  at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testVolFailureStatsPreservedOnNNRestart(TestDataNodeVolumeFailureReporting.java:283)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


