[
https://issues.apache.org/jira/browse/HDFS-15963?focusedWorklogId=580817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580817
]
ASF GitHub Bot logged work on HDFS-15963:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 12/Apr/21 07:49
Start Date: 12/Apr/21 07:49
Worklog Time Spent: 10m
Work Description: zhangshuyan0 commented on a change in pull request
#2889:
URL: https://github.com/apache/hadoop/pull/2889#discussion_r611401338
##########
File path:
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetAsyncDiskService.java
##########
@@ -167,18 +167,26 @@ synchronized long countPendingDeletions() {
    * Execute the task sometime in the future, using ThreadPools.
    */
   synchronized void execute(FsVolumeImpl volume, Runnable task) {
-    if (executors == null) {
-      throw new RuntimeException("AsyncDiskService is already shutdown");
-    }
-    if (volume == null) {
-      throw new RuntimeException("A null volume does not have a executor");
-    }
-    ThreadPoolExecutor executor = executors.get(volume.getStorageID());
-    if (executor == null) {
-      throw new RuntimeException("Cannot find volume " + volume
-          + " for execution of task " + task);
-    } else {
-      executor.execute(task);
+    try {
Review comment:
The cleanup code is in the finally block, so it will be executed even if an
exception occurs. Thanks for your suggestion; I will fix it to keep the style
consistent.
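
For context, the diff above is truncated at the new "try {" line. The sketch
below is only an illustration of the general pattern being discussed, not the
HDFS-15963 patch itself: if the task can never be handed to its per-volume
executor, the volume reference that the task would normally close in its own
finally block has to be released by the caller instead. The class name, the
storageId parameter and the Closeable volumeRef parameter are invented for this
example.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ExecutorService;

// Illustrative sketch only, not the actual FsDatasetAsyncDiskService code.
class AsyncDiskServiceSketch {
  private final Map<String, ExecutorService> executors;

  AsyncDiskServiceSketch(Map<String, ExecutorService> executors) {
    this.executors = executors;
  }

  void execute(String storageId, Runnable task, Closeable volumeRef) {
    try {
      if (executors == null) {
        throw new RuntimeException("AsyncDiskService is already shutdown");
      }
      ExecutorService executor = executors.get(storageId);
      if (executor == null) {
        throw new RuntimeException("Cannot find volume " + storageId
            + " for execution of task " + task);
      }
      executor.execute(task);   // the task releases volumeRef when it finishes
    } catch (RuntimeException e) {
      // The task never ran, so it can never release the reference; do it here.
      try {
        volumeRef.close();
      } catch (IOException ignored) {
        // best-effort cleanup on the failure path
      }
      throw e;
    }
  }
}
{code}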
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 580817)
Time Spent: 1h 50m (was: 1h 40m)
> Unreleased volume references cause an infinite loop
> ---------------------------------------------------
>
> Key: HDFS-15963
> URL: https://issues.apache.org/jira/browse/HDFS-15963
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
> Attachments: HDFS-15963.001.patch, HDFS-15963.002.patch,
> HDFS-15963.003.patch
>
> Time Spent: 1h 50m
> Remaining Estimate: 0h
>
> When BlockSender throws an exception because the meta-data cannot be found,
> the volume reference obtained by the thread is never released, so the thread
> trying to remove the volume keeps waiting for the reference count to reach
> zero and falls into an infinite loop.
> {code:java}
> boolean checkVolumesRemoved() {
>   Iterator<FsVolumeImpl> it = volumesBeingRemoved.iterator();
>   while (it.hasNext()) {
>     FsVolumeImpl volume = it.next();
>     if (!volume.checkClosed()) {
>       return false;
>     }
>     it.remove();
>   }
>   return true;
> }
>
> boolean checkClosed() {
>   // Always true while the leaked reference keeps the count above zero.
>   if (this.reference.getReferenceCount() > 0) {
>     FsDatasetImpl.LOG.debug("The reference count for {} is {}, wait to be 0.",
>         this, reference.getReferenceCount());
>     return false;
>   }
>   return true;
> }
> {code}
> At the same time, because the removing thread holds checkDirsLock while it
> waits, other threads trying to acquire the same lock are blocked permanently.
> Similar problems also occur in RamDiskAsyncLazyPersistService and
> FsDatasetAsyncDiskService.
> This patch releases the three previously unreleased volume references.
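
A minimal, self-contained model of the mechanism described above (illustrative
only; the VolumeModel class and its methods are not the HDFS ones): as long as
every obtained reference is also released on the exception path, the reference
count falls back to zero and the removal thread's wait loop can finish.

{code:java}
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a volume's reference counting (not HDFS code).
class VolumeModel {
  private final AtomicInteger referenceCount = new AtomicInteger();

  Closeable obtainReference() {
    referenceCount.incrementAndGet();
    return referenceCount::decrementAndGet;  // close() releases the reference
  }

  boolean checkClosed() {
    // Mirrors the quoted checkClosed(): removal must wait until the count is 0.
    return referenceCount.get() == 0;
  }

  public static void main(String[] args) throws Exception {
    VolumeModel volume = new VolumeModel();
    try (Closeable ref = volume.obtainReference()) {
      // Simulate BlockSender failing because the meta-data cannot be found.
      throw new IllegalStateException("meta file not found");
    } catch (IllegalStateException expected) {
      // try-with-resources released the reference before we got here
    }
    // Prints true: a wait loop like checkVolumesRemoved() would now terminate.
    System.out.println(volume.checkClosed());
  }
}
{code}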
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]