Hexiaoqiao commented on code in PR #4141:
URL: https://github.com/apache/hadoop/pull/4141#discussion_r846794147
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -2888,9 +2920,6 @@ static ReplicaRecoveryInfo initReplicaRecoveryImpl(String bpid, ReplicaMap map,
Block block, long recoveryId)
throws IOException, MustStopExistingWriter {
final ReplicaInfo replica = map.get(bpid, block.getBlockId());
- LOG.info("initReplicaRecovery: " + block + ", recoveryId=" + recoveryId
Review Comment:
Not sure why this log line is deleted here; if there is no particular reason, I would suggest keeping it.
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -2875,7 +2900,14 @@ static ReplicaRecoveryInfo initReplicaRecovery(String bpid, ReplicaMap map,
lockManager) throws IOException {
while (true) {
try {
- try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
+ ReplicaInfo replica = map.get(bpid, block.getBlockId());
+ LOG.info("initReplicaRecovery: " + block + ", recoveryId=" + recoveryId
+ + ", replica=" + replica);
+ if (replica == null) {
Review Comment:
I think it would be a good choice to move this statement before the LOG.info call.
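Roughly what I have in mind, just as a sketch (the `return null` below is only a placeholder for whatever the current null handling is):

```java
// Sketch only: handle the missing-replica case first, then log the recovery
// attempt, so the log line does not have to report "replica=null".
ReplicaInfo replica = map.get(bpid, block.getBlockId());
if (replica == null) {
  return null; // placeholder for the existing null handling
}
LOG.info("initReplicaRecovery: " + block + ", recoveryId=" + recoveryId
    + ", replica=" + replica);
```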
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -629,6 +634,9 @@ public void removeVolumes(
synchronized (this) {
for (String storageUuid : storageToRemove) {
storageMap.remove(storageUuid);
+ for (String bp : volumeMap.getBlockPoolList()) {
Review Comment:
Do we need to synchronize this segment? IMO, removing the lock is thread safe here, right?
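To make it concrete, something like the sketch below is what I would expect to be sufficient, assuming the new per-volume lock removal uses `lockManager.removeLock` (I am guessing the call name from `addLock`) and that call is itself thread safe:

```java
// Sketch only: keep the storageMap cleanup under synchronized, but do the
// volume-level lock removal outside of it, assuming the lock manager
// handles concurrent removals on its own.
synchronized (this) {
  for (String storageUuid : storageToRemove) {
    storageMap.remove(storageUuid);
  }
}
for (String storageUuid : storageToRemove) {
  for (String bp : volumeMap.getBlockPoolList()) {
    lockManager.removeLock(LockLevel.VOLUME, bp, storageUuid); // assumed API
  }
}
```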
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -524,6 +527,8 @@ public void addVolume(final StorageLocation location,
for (final NamespaceInfo nsInfo : nsInfos) {
String bpid = nsInfo.getBlockPoolID();
+ String vol = fsVolume.getStorageID();
+ lockManager.addLock(LockLevel.VOLUME, bpid, vol);
Review Comment:
Is this a duplicate action here, since `activateVolume` already invokes the same logic to add the volume/block pool level lock?
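Just to spell out my concern (pseudo-code only, based on my assumption about what `activateVolume` does internally):

```java
// Pseudo-code of the call path as I read it after this change:
//
//   addVolume(location, nsInfos)
//     -> lockManager.addLock(LockLevel.VOLUME, bpid, fsVolume.getStorageID())  // the new lines here
//     -> ...
//     -> activateVolume(...)
//          -> lockManager.addLock(LockLevel.VOLUME, bpid, storageId)           // same lock registered again?
```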
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -2888,9 +2920,6 @@ static ReplicaRecoveryInfo initReplicaRecoveryImpl(String bpid, ReplicaMap map,
Block block, long recoveryId)
throws IOException, MustStopExistingWriter {
final ReplicaInfo replica = map.get(bpid, block.getBlockId());
- LOG.info("initReplicaRecovery: " + block + ", recoveryId=" + recoveryId
Review Comment:
Ah, I just noticed that the same log info is added in `initReplicaRecovery`; it makes sense to me if you want to move this log out of the locking logic.
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -2860,7 +2878,14 @@ ReplicaRecoveryInfo initReplicaRecovery(String bpid, ReplicaMap map,
Block block, long recoveryId, long xceiverStopTimeout) throws
IOException {
while (true) {
try {
- try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
+ ReplicaInfo replica = map.get(bpid, block.getBlockId());
+ LOG.info("initReplicaRecovery: " + block + ", recoveryId=" + recoveryId
+ + ", replica=" + replica);
+ if (replica == null) {
Review Comment:
I think it would be a good choice to move this statement before the LOG.info call.
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java:
##########
@@ -1887,12 +1896,12 @@ public ReplicaHandler createTemporary(StorageType storageType,
false);
}
long startHoldLockTimeMs = Time.monotonicNow();
- try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl,
- b.getBlockPoolId())) {
- FsVolumeReference ref = volumes.getNextVolume(storageType, storageId, b
- .getNumBytes());
- FsVolumeImpl v = (FsVolumeImpl) ref.getVolume();
- ReplicaInPipeline newReplicaInfo;
+ FsVolumeReference ref = volumes.getNextVolume(storageType, storageId, b
Review Comment:
Not sure if this is thread safe without any `ReadWriteLock`.
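If volume selection does need protection, one option could be to keep it under a block-pool level read lock rather than dropping the lock entirely. Just a sketch, and it assumes the lock manager exposes a `readLock(LockLevel.BLOCK_POOl, ...)` counterpart to the `writeLock` used before:

```java
// Sketch only: pick the target volume under a read lock, assuming
// lockManager.readLock(...) exists and returns an AutoCloseableLock.
FsVolumeReference ref;
try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
    b.getBlockPoolId())) {
  ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes());
}
FsVolumeImpl v = (FsVolumeImpl) ref.getVolume();
```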
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]