[
https://issues.apache.org/jira/browse/HDFS-15963?focusedWorklogId=580819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580819
]
ASF GitHub Bot logged work on HDFS-15963:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 12/Apr/21 07:59
Start Date: 12/Apr/21 07:59
Worklog Time Spent: 10m
Work Description: zhangshuyan0 commented on a change in pull request
#2889:
URL: https://github.com/apache/hadoop/pull/2889#discussion_r611407579
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
##########
@@ -562,4 +565,57 @@ void writeBlock(ExtendedBlock block,
BlockConstructionStage stage,
checksum, CachingStrategy.newDefaultStrategy(), false, false,
null, null, new String[0]);
}
+
+ @Test
+ public void testReleaseVolumeRefIfExceptionThrown()
+ throws IOException, InterruptedException {
+ Path file = new Path("dataprotocol.dat");
+ int numDataNodes = 1;
+
+ Configuration conf = new HdfsConfiguration();
+ conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, numDataNodes);
+ MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(
+ numDataNodes).build();
+ try {
+ cluster.waitActive();
+ datanode = cluster.getFileSystem().getDataNodeStats(
+ DatanodeReportType.LIVE)[0];
+ dnAddr = NetUtils.createSocketAddr(datanode.getXferAddr());
+ FileSystem fileSys = cluster.getFileSystem();
+
+ int fileLen = Math.min(
+ conf.getInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 4096), 4096);
+
+ DFSTestUtil.createFile(fileSys, file, fileLen, fileLen,
+ fileSys.getDefaultBlockSize(file),
+ fileSys.getDefaultReplication(file), 0L);
+
+ // get the first blockid for the file
+ final ExtendedBlock firstBlock = DFSTestUtil.getFirstBlock(fileSys,
+ file);
+
+ String bpid = cluster.getNamesystem().getBlockPoolId();
+ ExtendedBlock blk = new ExtendedBlock(bpid, firstBlock.getLocalBlock());
+ sendBuf.reset();
+ recvBuf.reset();
+
+ // delete the meta file to create an exception in BlockSender constructor
+ DataNode dn = cluster.getDataNodes().get(0);
+ cluster.getMaterializedReplica(0, blk).deleteMeta();
+
+ FsVolumeImpl volume = (FsVolumeImpl) DataNodeTestUtils.getFSDataset(
+ dn).getVolume(blk);
+ int beforeCnt = volume.getReferenceCount();
+
+ sender.copyBlock(blk, BlockTokenSecretManager.DUMMY_TOKEN);
+ sendRecvData("Copy a block.", false);
+ Thread.sleep(1000);
+
+ int afterCnt = volume.getReferenceCount();
+ assertEquals(beforeCnt, afterCnt);
Review comment:
I confirmed that this case has been handled. The reference will be
closed when we close the corresponding BlockSender.
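For readers following along, a minimal sketch of that release path (class and field names below are simplified stand-ins for illustration, not the actual BlockSender code): the sender keeps the volume reference it obtained and releases it when the sender itself is closed, which is why the test sees the count return to beforeCnt.
{code:java}
import java.io.Closeable;
import java.io.IOException;

// Sketch only: volumeRef stands in for the FsVolumeReference obtained
// when the sender was constructed.
class SenderSketch implements Closeable {
  private Closeable volumeRef;

  SenderSketch(Closeable volumeRef) {
    this.volumeRef = volumeRef;
  }

  @Override
  public void close() throws IOException {
    if (volumeRef != null) {
      volumeRef.close();   // drops the volume's reference count back down
      volumeRef = null;
    }
    // ... close block and checksum streams ...
  }
}
{code}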
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 580819)
Time Spent: 2h (was: 1h 50m)
> Unreleased volume references cause an infinite loop
> ---------------------------------------------------
>
> Key: HDFS-15963
> URL: https://issues.apache.org/jira/browse/HDFS-15963
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
> Attachments: HDFS-15963.001.patch, HDFS-15963.002.patch,
> HDFS-15963.003.patch
>
> Time Spent: 2h
> Remaining Estimate: 0h
>
> When BlockSender throws an exception because the meta-data cannot be found,
> the volume reference obtained by the thread is not released, which causes the
> thread trying to remove the volume to wait and fall into an infinite loop.
> {code:java}
> boolean checkVolumesRemoved() {
>   Iterator<FsVolumeImpl> it = volumesBeingRemoved.iterator();
>   while (it.hasNext()) {
>     FsVolumeImpl volume = it.next();
>     if (!volume.checkClosed()) {
>       return false;
>     }
>     it.remove();
>   }
>   return true;
> }
>
> boolean checkClosed() {
>   // With the leaked reference, this condition is always true, so
>   // checkClosed() never returns true and the caller loops forever.
>   if (this.reference.getReferenceCount() > 0) {
>     FsDatasetImpl.LOG.debug("The reference count for {} is {}, wait to be 0.",
>         this, reference.getReferenceCount());
>     return false;
>   }
>   return true;
> }
> {code}
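> A minimal, self-contained sketch of the failure shape and of the fix (Volume and Sender below are simplified stand-ins for FsVolumeImpl/FsVolumeReference/BlockSender, not the actual patch): the constructor must release the reference it obtained on its exception path, otherwise the count never returns to zero and checkVolumesRemoved() can never succeed.
> {code:java}
> import java.io.Closeable;
> import java.io.IOException;
> import java.util.concurrent.atomic.AtomicInteger;
>
> // Simplified stand-in for FsVolumeImpl: a counted reference that must
> // drop back to zero before the volume can be removed.
> class Volume {
>   final AtomicInteger refCount = new AtomicInteger();
>
>   Closeable obtainReference() {
>     refCount.incrementAndGet();
>     return refCount::decrementAndGet;   // close() decrements, i.e. releases
>   }
>
>   boolean checkClosed() {
>     return refCount.get() == 0;          // removal waits for this
>   }
> }
>
> // Simplified stand-in for BlockSender.
> class Sender {
>   private final Closeable volumeRef;
>
>   Sender(Volume volume) throws IOException {
>     Closeable ref = volume.obtainReference();
>     try {
>       openMetaFile();                    // may throw, as in the reported bug
>     } catch (IOException e) {
>       ref.close();                       // the previously missing release
>       throw e;
>     }
>     this.volumeRef = ref;
>   }
>
>   private void openMetaFile() throws IOException {
>     throw new IOException("meta file not found");
>   }
> }
> {code}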
> At the same time, because the thread has been holding checkDirsLock when
> removing the volume, other threads trying to acquire the same lock will be
> permanently blocked.
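> A small model of that blocking (checkDirsLock below is only an illustrative stand-in for the lock named above, not the real FsDatasetImpl field): the removal thread holds the lock while it waits for the reference count to drain, so any other thread that needs the same lock waits indefinitely.
> {code:java}
> import java.util.concurrent.locks.ReentrantLock;
>
> // Illustrative model only: the removal thread spins under checkDirsLock
> // while the leaked reference keeps the count above zero, so other threads
> // that need the lock never acquire it.
> class RemovalBlockingModel {
>   private final ReentrantLock checkDirsLock = new ReentrantLock();
>   private volatile boolean volumesClosed = false;  // never true while a reference leaks
>
>   void removeVolumes() throws InterruptedException {
>     checkDirsLock.lock();
>     try {
>       while (!volumesClosed) {        // mirrors the checkVolumesRemoved() wait loop
>         Thread.sleep(1000);
>       }
>     } finally {
>       checkDirsLock.unlock();
>     }
>   }
>
>   void otherVolumeWork() {
>     checkDirsLock.lock();             // blocks for as long as removeVolumes() spins
>     try {
>       // ... work that also needs checkDirsLock ...
>     } finally {
>       checkDirsLock.unlock();
>     }
>   }
> }
> {code}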
> Similar problems also occur in RamDiskAsyncLazyPersistService and
> FsDatasetAsyncDiskService.
> This patch releases the three previously unreleased volume references.
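> The same generic pattern applies to the async-disk services (sketch only, under the assumption that a task owning a volume reference should release it whether it runs, fails, or is rejected; this is not the actual service code):
> {code:java}
> import java.io.Closeable;
> import java.io.IOException;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.RejectedExecutionException;
>
> // Sketch of the release-on-every-path pattern for an async disk service.
> class AsyncServiceSketch {
>   private final ExecutorService executor = Executors.newSingleThreadExecutor();
>
>   void execute(Closeable volumeRef, Runnable task) throws IOException {
>     try {
>       executor.execute(() -> {
>         try {
>           task.run();
>         } finally {
>           closeQuietly(volumeRef);   // release once the task has run
>         }
>       });
>     } catch (RejectedExecutionException e) {
>       closeQuietly(volumeRef);       // release if the task never runs
>       throw new IOException("Cannot submit task", e);
>     }
>   }
>
>   private static void closeQuietly(Closeable c) {
>     try {
>       c.close();
>     } catch (IOException ignored) {
>     }
>   }
> }
> {code}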
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]