[
https://issues.apache.org/jira/browse/HDFS-16598?focusedWorklogId=780515&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780515
]
ASF GitHub Bot logged work on HDFS-16598:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 11/Jun/22 14:48
Start Date: 11/Jun/22 14:48
Worklog Time Spent: 10m
Work Description: Hexiaoqiao commented on code in PR #4366:
URL: https://github.com/apache/hadoop/pull/4366#discussion_r895034052
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -906,14 +906,19 @@ ReplicaInfo getReplicaInfo(String bpid, long blkid)
return info;
}
+ ReplicaInfo getReplicaInfoForLock(ExtendedBlock b)
Review Comment:
Great progress here. How about integrating `getReplicaInfoForLock` and
`getStorageUuid` into a single `getStorageUuidForLock`?
```java
String getStorageUuidForLock(ExtendedBlock b)
    throws ReplicaNotFoundException {
  return getReplicaInfo(b.getBlockPoolId(),
      b.getBlockId()).getStorageUuid();
}
```
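For reference, a minimal, self-contained sketch of how the merged helper could look and be exercised. The surrounding types here are simplified stand-ins for the real `ExtendedBlock`, `ReplicaInfo`, and dataset classes, not the actual `FsDatasetImpl` code:
```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for the real HDFS types; only the pieces the
// suggested helper touches are modeled here.
class ReplicaNotFoundException extends Exception {
  ReplicaNotFoundException(String msg) { super(msg); }
}

class ExtendedBlock {
  private final String bpid;
  private final long blkid;
  ExtendedBlock(String bpid, long blkid) { this.bpid = bpid; this.blkid = blkid; }
  String getBlockPoolId() { return bpid; }
  long getBlockId() { return blkid; }
}

class ReplicaInfo {
  private final String storageUuid;
  ReplicaInfo(String storageUuid) { this.storageUuid = storageUuid; }
  String getStorageUuid() { return storageUuid; }
}

class DatasetSketch {
  private final Map<String, ReplicaInfo> replicas = new HashMap<>();

  void addReplica(ExtendedBlock b, ReplicaInfo info) {
    replicas.put(b.getBlockPoolId() + "/" + b.getBlockId(), info);
  }

  ReplicaInfo getReplicaInfo(String bpid, long blkid)
      throws ReplicaNotFoundException {
    ReplicaInfo info = replicas.get(bpid + "/" + blkid);
    if (info == null) {
      throw new ReplicaNotFoundException(bpid + ":" + blkid);
    }
    return info;
  }

  // The merged helper: resolve the replica and return only its storage
  // UUID, which the caller uses to pick the right lock.
  String getStorageUuidForLock(ExtendedBlock b)
      throws ReplicaNotFoundException {
    return getReplicaInfo(b.getBlockPoolId(), b.getBlockId()).getStorageUuid();
  }

  public static void main(String[] args) throws Exception {
    DatasetSketch ds = new DatasetSketch();
    ExtendedBlock b = new ExtendedBlock("BP-1", 1001L);
    ds.addReplica(b, new ReplicaInfo("DS-1b5f7e33"));
    System.out.println(ds.getStorageUuidForLock(b)); // prints DS-1b5f7e33
  }
}
```
Merging the two keeps the lookup and the UUID extraction in one place, so callers that only need the storage UUID for lock selection never have to hold a `ReplicaInfo` reference themselves.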
Issue Time Tracking
-------------------
Worklog Id: (was: 780515)
Time Spent: 2h 50m (was: 2h 40m)
> All datanodes
> [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]]
> are bad. Aborting...
> --------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-16598
> URL: https://issues.apache.org/jira/browse/HDFS-16598
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> org.apache.hadoop.hdfs.testPipelineRecoveryOnRestartFailure failed with a
> stack trace like:
> {code:java}
> java.io.IOException: All datanodes
> [DatanodeInfoWithStorage[127.0.0.1:57448,DS-1b5f7e33-a2bf-4edc-9122-a74c995a99f5,DISK]]
> are bad. Aborting...
> at
> org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1667)
> at
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1601)
> at
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1587)
> at
> org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1371)
> at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:674)
> {code}
> After tracing the root cause, we found this bug was introduced by
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]: after a failed
> pipeline recovery, the client's block generation stamp (GS) may be smaller
> than the DataNode's.
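To make the failure mode concrete, here is a hedged sketch of the generation-stamp comparison described above; the class and method names are illustrative only, not the actual DataNode code:
{code:java}
import java.io.IOException;

// Illustrative only: models the GS staleness check, not the real DN logic.
class GenStampMismatchSketch {
  // A replica's GS is bumped on each pipeline recovery attempt, so a
  // recovery that fails partway can leave the DN-side replica with a
  // newer GS than the one the client still holds.
  static void checkGenerationStamp(long clientGs, long replicaGs)
      throws IOException {
    if (clientGs < replicaGs) {
      // The DN rejects the stale client GS; the client marks the DN bad,
      // and once every DN in the pipeline is rejected it aborts with
      // "All datanodes ... are bad. Aborting...".
      throw new IOException("client GS " + clientGs
          + " is older than replica GS " + replicaGs);
    }
  }

  public static void main(String[] args) {
    try {
      checkGenerationStamp(1002L, 1003L); // stale client after failed recovery
    } catch (IOException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
{code}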