[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17821039#comment-17821039
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

hadoop-yetus commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-1965971951

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 40s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  36m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  
hadoop-hdfs-project/hadoop-hdfs-client generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  37m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 141m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6591 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 74ab123d82e3 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9e7890090155357347b1138310a5263525390432 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/3/testReport/ |
   | Max. process+thread count | 

[jira] [Updated] (HDFS-17398) [FGL] Implement the FGL lock for FSNamesystemLock

2024-02-26 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17398:

Description: 
Implement the FGL lock for the FSNamesystemLock.

 

We will use two global locks to implement this FGL lock in this milestone, 
FSLock and BMLock.

 

The FSLock is used to protect some operations:
 * Directory tree-related operations.
 * FSEditLog related operations.
 * ErasureCodingPolicy related operations.

The BMLock is used to protect some operations:
 * Block related operations
 * DN related operations

 

Both FSLock and BMLock are needed for some operations:
 * Operations that involve both the directory tree and blocks
 * Operations that involve both the directory tree and DNs
 * Operations that involve both FSEdits and blocks

 

The lock order should be:
 * Acquire the FSLock
 * Acquire the BMLock
 * Release the BMLock
 * Release the FSLock

 

This FGL class implements the FSNamesystemLock interface, and it will 
acquire/release the locks according to the LockMode (GLOBAL, FS, BM); see the 
sketch after the lists below.

Locking process:
 * For the GLOBAL lock mode, this FGL will acquire the FS lock first, then 
acquire the BM lock.
 * For the FS lock mode, this FGL will only acquire the FS lock.
 * For the BM lock mode, this FGL will only acquire the BM lock.

Lock releasing process:
 * For the GLOBAL lock mode, this FGL will release the BM lock first, then 
release the FS lock.
 * For the FS lock mode, this FGL will only release the FS lock.
 * For the BM lock mode, this FGL will only release the BM lock.
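
For illustration, a minimal sketch of the acquire/release dispatch described 
above. This is an assumption about the shape of the code, not the committed 
implementation; the class and method names here are placeholders.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: the real FGL class in the patch may differ.
public class FineGrainedLockSketch {
  public enum LockMode { GLOBAL, FS, BM }

  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);
  private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock(true);

  public void writeLock(LockMode mode) {
    if (mode == LockMode.GLOBAL) {      // acquire FS first, then BM
      fsLock.writeLock().lock();
      bmLock.writeLock().lock();
    } else if (mode == LockMode.FS) {
      fsLock.writeLock().lock();
    } else {                            // BM
      bmLock.writeLock().lock();
    }
  }

  public void writeUnlock(LockMode mode) {
    if (mode == LockMode.GLOBAL) {      // release in reverse: BM first, then FS
      bmLock.writeLock().unlock();
      fsLock.writeLock().unlock();
    } else if (mode == LockMode.FS) {
      fsLock.writeLock().unlock();
    } else {                            // BM
      bmLock.writeLock().unlock();
    }
  }
}
{code}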

  was:
Implement the FGL lock for the FSNamesystemLock.

 

We will use two global locks to implement this FGL lock in this milestone, 
FSLock and BMLock.

 

The FSLock is used to protect some operations:
 * Directory tree-related operations.
 * FSEditLog related operations.
 * ErasureCodingPolicy related operations.

The BMLock is used to protect some operations:
 * Block related operations
 * DN related operations

 

Both FSLock and BMLock are needed for some operations:
 * Operations that involve both the directory tree and blocks
 * Operations that involve both the directory tree and DNs
 * Operations that involve both FSEdits and blocks

 

The lock order should be:
 * Acquire the FSLock
 * Acquire the BMLock
 * Release the BMLock
 * Release the FSLock

 

This FGL class implements the 

 


> [FGL] Implement the FGL lock for FSNamesystemLock
> -
>
> Key: HDFS-17398
> URL: https://issues.apache.org/jira/browse/HDFS-17398
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Priority: Major
>
> Implement the FGL lock for the FSNamesystemLock.
>  
> We will use two global locks to implement this FGL lock in this milestone, 
> FSLock and BMLock.
>  
> The FSLock is used to protect some operations:
>  * Directory tree-related operations.
>  * FSEditLog related operations.
>  * ErasureCodingPolicy related operations.
> The BMLock is used to protect some operations:
>  * Block related operations
>  * DN related operations
>  
> Both FSLock and BMLock are needed for some operations:
>  * Operations that involve both the directory tree and blocks
>  * Operations that involve both the directory tree and DNs
>  * Operations that involve both FSEdits and blocks
>  
> The lock order should be:
>  * Acquire the FSLock
>  * Acquire the BMLock
>  * Release the BMLock
>  * Release the FSLock
>  
> This FGL class implements the FSNamesystemLock interface, and it will 
> acquire/release the locks according to the LockMode (GLOBAL, FS, BM).
> Locking process:
>  * For the GLOBAL lock mode, this FGL will acquire the FS lock first, then 
> acquire the BM lock.
>  * For the FS lock mode, this FGL will only acquire the FS lock.
>  * For the BM lock mode, this FGL will only acquire the BM lock.
> Lock releasing process:
>  * For the GLOBAL lock mode, this FGL will release the BM lock first, then 
> release the FS lock.
>  * For the FS lock mode, this FGL will only release the FS lock.
>  * For the BM lock mode, this FGL will only release the BM lock.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17398) [FGL] Implement the FGL lock for FSNamesystemLock

2024-02-26 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17398:

Description: 
Implement the FGL lock for the FSNamesystemLock.

 

We will use two global locks to implement this FGL lock in this milestone, 
FSLock and BMLock.

 

The FSLock is used to protect some operations:
 * Directory tree-related operations.
 * FSEditLog related operations.
 * ErasureCodingPolicy related operations.

The BMLock is used to protect some operations:
 * Block related operations
 * DN related operations

 

Both FSLock and BMLock are needed for some operations:
 * Operations that involve both the directory tree and blocks
 * Operations that involve both the directory tree and DNs
 * Operations that involve both FSEdits and blocks

 

The lock order should be:
 * Acquire the FSLock
 * Acquire the BMLock
 * Release the BMLock
 * Release the FSLock

 

This FGL class implements the 

 

  was:
Implement the FGL lock for the FSNamesystemLock.

 


> [FGL] Implement the FGL lock for FSNamesystemLock
> -
>
> Key: HDFS-17398
> URL: https://issues.apache.org/jira/browse/HDFS-17398
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Priority: Major
>
> Implement the FGL lock for the FSNamesystemLock.
>  
> We will use two global locks to implement this FGL lock in this milestone, 
> FSLock and BMLock.
>  
> The FSLock is used to protect some operations:
>  * Directory tree-related operations.
>  * FSEditLog related operations.
>  * ErasureCodingPolicy related operations.
> The BMLock is used to protect some operations:
>  * Block related operations
>  * DN related operations
>  
> Both FSLock and BMLock are needed for some operations:
>  * Operations that involve both the directory tree and blocks
>  * Operations that involve both the directory tree and DNs
>  * Operations that involve both FSEdits and blocks
>  
> The lock order should be:
>  * Acquire the FSLock
>  * Acquire the BMLock
>  * Release the BMLock
>  * Release the FSLock
>  
> This FGL class implements the 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17398) [FGL] Implement the FGL lock for FSNamesystemLock

2024-02-26 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17398:

Summary: [FGL] Implement the FGL lock for FSNamesystemLock  (was: [FGL] 
Implement the FGL lock)

> [FGL] Implement the FGL lock for FSNamesystemLock
> -
>
> Key: HDFS-17398
> URL: https://issues.apache.org/jira/browse/HDFS-17398
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17398) [FGL] Implement the FGL lock for FSNamesystemLock

2024-02-26 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17398:

Description: 
Implement the FGL lock for the FSNamesystemLock.

 

> [FGL] Implement the FGL lock for FSNamesystemLock
> -
>
> Key: HDFS-17398
> URL: https://issues.apache.org/jira/browse/HDFS-17398
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Priority: Major
>
> Implement the FGL lock for the FSNamesystemLock.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17398) [FGL] Implement the FGL lock

2024-02-26 Thread ZanderXu (Jira)
ZanderXu created HDFS-17398:
---

 Summary: [FGL] Implement the FGL lock
 Key: HDFS-17398
 URL: https://issues.apache.org/jira/browse/HDFS-17398
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: ZanderXu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17387) [FGL] Abstract the configurable locking mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17821002#comment-17821002
 ] 

ASF GitHub Bot commented on HDFS-17387:
---

ZanderXu commented on code in PR #6572:
URL: https://github.com/apache/hadoop/pull/6572#discussion_r1503727005


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/fgl/FineGrainedFSNamesystemLock.java:
##


Review Comment:
   Of course, I will create a new ticket only for this FGL implementation and 
describe it in detail.





> [FGL] Abstract the configurable locking mode
> 
>
> Key: HDFS-17387
> URL: https://issues.apache.org/jira/browse/HDFS-17387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Abstract a lock mode to cover the current global lock and the new 
> fine-grained lock (global FS lock and global BM lock).
> End-users can select the lock mode through configuration.
> The possible lock modes after this patch are as follows:
>  * GLOBAL Lock
>  * FS Lock
>  * BM Lock



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17387) [FGL] Abstract the configurable locking mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17821000#comment-17821000
 ] 

ASF GitHub Bot commented on HDFS-17387:
---

ZanderXu commented on code in PR #6572:
URL: https://github.com/apache/hadoop/pull/6572#discussion_r1503725124


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -871,7 +876,10 @@ static FSNamesystem loadFromDisk(Configuration conf) 
throws IOException {
     this.contextFieldSeparator =
         conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY,
             HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT);
-    fsLock = new FSNamesystemLock(conf, detailedLockHoldTimeMetrics);
+    Class<? extends FSNamesystemLock> lockKlass = conf.getClass(
+        DFS_NAMENODE_LOCK_MODEL_PROVIDER_KEY,
+        DFS_NAMENODE_LOCK_MODEL_PROVIDER_DEFAULT,
+        FSNamesystemLock.class);
+    fsLock = createLock(lockKlass, conf, detailedLockHoldTimeMetrics);

Review Comment:
   Yes, `GlobalFSNamesystemLock` should be the default class. 
   
   But in order to find deadlocks in the other PRs via UTs, I have changed this 
value to FGL first.
   
   After we complete all sub-tasks of this milestone, I will change the default 
value to GlobalFSNamesystemLock.
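
   For reference, a minimal sketch of what such a reflective lock factory could 
look like. This is an illustration, not the PR's actual `createLock`: it 
assumes every FSNamesystemLock implementation exposes a `(Configuration, 
boolean)` constructor, which may not match the real code.

   ```java
   import java.lang.reflect.Constructor;
   import org.apache.hadoop.conf.Configuration;

   final class LockFactory {
     // Instantiate the configured lock implementation reflectively.
     static <T> T createLock(Class<? extends T> clazz, Configuration conf,
         boolean detailedMetrics) {
       try {
         // Assumed constructor shape: (Configuration, boolean).
         Constructor<? extends T> ctor =
             clazz.getDeclaredConstructor(Configuration.class, boolean.class);
         return ctor.newInstance(conf, detailedMetrics);
       } catch (ReflectiveOperationException e) {
         throw new IllegalStateException("Cannot instantiate " + clazz, e);
       }
     }
   }
   ```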





> [FGL] Abstract the configurable locking mode
> 
>
> Key: HDFS-17387
> URL: https://issues.apache.org/jira/browse/HDFS-17387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Abstract a lock mode to cover the current global lock and the new 
> fine-grained lock (global FS lock and global BM lock).
> End-users can select the lock mode through configuration.
> The possible lock modes after this patch are as follows:
>  * GLOBAL Lock
>  * FS Lock
>  * BM Lock



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17387) [FGL] Abstract the configurable locking mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820992#comment-17820992
 ] 

ASF GitHub Bot commented on HDFS-17387:
---

ferhui commented on code in PR #6572:
URL: https://github.com/apache/hadoop/pull/6572#discussion_r1503709489


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/fgl/FineGrainedFSNamesystemLock.java:
##


Review Comment:
   How about introducing the FGL implementation in another ticket? It will be 
easier to review.



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -871,7 +876,10 @@ static FSNamesystem loadFromDisk(Configuration conf) 
throws IOException {
     this.contextFieldSeparator =
         conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY,
             HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT);
-    fsLock = new FSNamesystemLock(conf, detailedLockHoldTimeMetrics);
+    Class<? extends FSNamesystemLock> lockKlass = conf.getClass(
+        DFS_NAMENODE_LOCK_MODEL_PROVIDER_KEY,
+        DFS_NAMENODE_LOCK_MODEL_PROVIDER_DEFAULT,
+        FSNamesystemLock.class);
+    fsLock = createLock(lockKlass, conf, detailedLockHoldTimeMetrics);

Review Comment:
   This is used for the FGL implementation, right? GlobalFSNamesystemLock is 
the original lock? If I'm right, GlobalFSNamesystemLock should be the default 
class?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##
@@ -3956,6 +3956,15 @@
   
 
 
   +<property>
   +  <name>dfs.namenode.lock.model.provider.class</name>
   +  <value>org.apache.hadoop.hdfs.server.namenode.fgl.FineGrainedFSNamesystemLock</value>
   +  <description>
   +    An implementation class of FSNamesystem lock.
   +    Defaults to GlobalFSNamesystemLock.class

Review Comment:
   And here we should set GlobalFSNamesystemLock by default.





> [FGL] Abstract the configurable locking mode
> 
>
> Key: HDFS-17387
> URL: https://issues.apache.org/jira/browse/HDFS-17387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Abstract a lock mode to cover the current global lock and the new 
> fine-grained lock (global FS lock and global BM lock).
> End-users can select the lock mode through configuration.
> The possible lock modes after this patch are as follows:
>  * GLOBAL Lock
>  * FS Lock
>  * BM Lock



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17387) [FGL] Abstract the configurable locking mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820980#comment-17820980
 ] 

ASF GitHub Bot commented on HDFS-17387:
---

ZanderXu commented on PR #6572:
URL: https://github.com/apache/hadoop/pull/6572#issuecomment-1965866817

   > Since [HDFS-17394](https://issues.apache.org/jira/browse/HDFS-17394) has 
been merged. can rebase this PR.
   
   done




> [FGL] Abstract the configurable locking mode
> 
>
> Key: HDFS-17387
> URL: https://issues.apache.org/jira/browse/HDFS-17387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Abstract a lock mode to cover the current global lock and the new 
> fine-grained lock (global FS lock and global BM lock).
> End-users can select the lock mode through configuration.
> The possible lock modes after this patch are as follows:
>  * GLOBAL Lock
>  * FS Lock
>  * BM Lock



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17387) [FGL] Abstract the configurable locking mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820979#comment-17820979
 ] 

ASF GitHub Bot commented on HDFS-17387:
---

ferhui commented on PR #6572:
URL: https://github.com/apache/hadoop/pull/6572#issuecomment-1965862096

   Since HDFS-17394 has been merged. can rebase this PR.




> [FGL] Abstract the configurable locking mode
> 
>
> Key: HDFS-17387
> URL: https://issues.apache.org/jira/browse/HDFS-17387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Abstract a lock mode to cover the current global lock and the new 
> fine-grained lock (global FS lock and global BM lock).
> End-users can select the lock mode through configuration.
> The possible lock modes after this patch are as follows:
>  * GLOBAL Lock
>  * FS Lock
>  * BM Lock



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17387) [FGL] Abstract the configurable locking mode

2024-02-26 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei updated HDFS-17387:
---
Summary: [FGL] Abstract the configurable locking mode  (was: [FGL] Abstract 
selectable locking mode)

> [FGL] Abstract the configurable locking mode
> 
>
> Key: HDFS-17387
> URL: https://issues.apache.org/jira/browse/HDFS-17387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> Abstract a lock mode to cover the current global lock and the new 
> fine-grained lock (global FS lock and global BM lock).
> End-users can select the lock mode through configuration.
> The possible lock modes after this patch are as follows:
>  * GLOBAL Lock
>  * FS Lock
>  * BM Lock



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17394) [FGL] Remove unused WriteHoldCount of FSNamesystemLock

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820978#comment-17820978
 ] 

ASF GitHub Bot commented on HDFS-17394:
---

ferhui commented on PR #6571:
URL: https://github.com/apache/hadoop/pull/6571#issuecomment-1965856467

   @ZanderXu Thanks for this patch. @xinglin Thanks for reviewing it. Merged.




> [FGL] Remove unused WriteHoldCount of FSNamesystemLock
> --
>
> Key: HDFS-17394
> URL: https://issues.apache.org/jira/browse/HDFS-17394
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> public int getWriteHoldCount() {
>   return this.fsLock.getWriteHoldCount(FSNamesystemLockMode.GLOBAL);
> }
> @Deprecated // dirLock is obsolete, use namesystem.fsLock instead
> public int getWriteHoldCount() {
>   return namesystem.getWriteHoldCount();
> }
> // sanity check.
> if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
>     hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
>     fsn.getReadHoldCount() != 1) {
>   // cannot relinquish
>   return false;
> } {code}
> getWriteHoldCount in FSNamesystem.java and FSDirectory.java is unused.
> dir.getReadHoldCount() is useless as it's the same as fsn.getReadHoldCount().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17394) [FGL] Remove unused WriteHoldCount of FSNamesystemLock

2024-02-26 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei resolved HDFS-17394.

Resolution: Fixed

> [FGL] Remove unused WriteHoldCount of FSNamesystemLock
> --
>
> Key: HDFS-17394
> URL: https://issues.apache.org/jira/browse/HDFS-17394
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> public int getWriteHoldCount() {
>   return this.fsLock.getWriteHoldCount(FSNamesystemLockMode.GLOBAL);
> }
> @Deprecated // dirLock is obsolete, use namesystem.fsLock instead
> public int getWriteHoldCount() {
>   return namesystem.getWriteHoldCount();
> }
> // sanity check.
> if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
>     hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
>     fsn.getReadHoldCount() != 1) {
>   // cannot relinquish
>   return false;
> } {code}
> getWriteHoldCount in FSNamesystem.java and FSDirectory.java is unused.
> dir.getReadHoldCount() is useless as it's the same as fsn.getReadHoldCount().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17394) [FGL] Remove unused WriteHoldCount of FSNamesystemLock

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820977#comment-17820977
 ] 

ASF GitHub Bot commented on HDFS-17394:
---

ferhui merged PR #6571:
URL: https://github.com/apache/hadoop/pull/6571




> [FGL] Remove unused WriteHoldCount of FSNamesystemLock
> --
>
> Key: HDFS-17394
> URL: https://issues.apache.org/jira/browse/HDFS-17394
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> public int getWriteHoldCount() {
>   return this.fsLock.getWriteHoldCount(FSNamesystemLockMode.GLOBAL);
> }
> @Deprecated // dirLock is obsolete, use namesystem.fsLock instead
> public int getWriteHoldCount() {
>   return namesystem.getWriteHoldCount();
> }
> // sanity check.
> if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
>     hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
>     fsn.getReadHoldCount() != 1) {
>   // cannot relinquish
>   return false;
> } {code}
> getWriteHoldCount in FSNamesystem.java and FSDirectory.java is unused.
> dir.getReadHoldCount() is useless as it's the same as fsn.getReadHoldCount().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17391) Adjust the checkpoint io buffer size to the chunk size

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820953#comment-17820953
 ] 

ASF GitHub Bot commented on HDFS-17391:
---

ThinkerLei closed pull request #6565: HDFS-17391:Adjust the checkpoint io 
buffer size to the chunk size
URL: https://github.com/apache/hadoop/pull/6565




> Adjust the checkpoint io buffer size to the chunk size
> --
>
> Key: HDFS-17391
> URL: https://issues.apache.org/jira/browse/HDFS-17391
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Priority: Major
>  Labels: pull-request-available
>
> Adjust the checkpoint io buffer size to the chunk size to reduce checkpoint 
> time.
> Before change:
> 2022-07-11 07:10:50,900 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with 
> txid 374700896827 to namenode at http://:50070 in 1729.465 seconds
> After change:
> 2022-07-12 08:15:55,068 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with 
> txid 375717629244 to namenode at http://:50070  in 858.668 seconds
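
For illustration, a minimal sketch of the underlying idea, as an assumption 
about the approach rather than the actual TransferFsImage patch: copy the image 
with a buffer sized to the transfer chunk size instead of the small default IO 
buffer, so each read/write moves a full chunk. The constant value below is 
illustrative; the real value would come from configuration.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

final class ImageCopySketch {
  // Hypothetical chunk size, e.g. 64 MB instead of a few KB.
  static final int CHUNK_SIZE = 64 * 1024 * 1024;

  static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buf = new byte[CHUNK_SIZE]; // previously a much smaller buffer
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
    }
  }
}
{code}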



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820952#comment-17820952
 ] 

ASF GitHub Bot commented on HDFS-17383:
---

ThinkerLei commented on code in PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#discussion_r1503615979


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
##
@@ -2043,10 +2043,13 @@ public void setBalancerBandwidth(long bandwidth) throws 
IOException {
     }
   }
 
-  public void markAllDatanodesStale() {
+  public void markAllDatanodesStaleAndSetNeedKeyUpdate() {
     LOG.info("Marking all datanodes as stale");
     synchronized (this) {
       for (DatanodeDescriptor dn : datanodeMap.values()) {
+        if (blockManager.isBlockTokenEnabled()) {
+          dn.setNeedKeyUpdate(true);

Review Comment:
   To ensure that the datanode's current block token comes from the active 
NameNode.





> Datanode current block token should come from active NameNode in HA mode
> 
>
> Key: HDFS-17383
> URL: https://issues.apache.org/jira/browse/HDFS-17383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Priority: Major
>  Labels: pull-request-available
> Attachments: reproduce.diff
>
>
> We found that block transfers failed during the NameNode upgrade. The specific 
> error reported was that block token verification failed. The reason is that 
> during the datanode transfer-block process, the source datanode uses a block 
> token it generated itself, whose keyid comes from the ANN or SBN. However, 
> because the newly upgraded NN has just started, the keyid held by the source 
> datanode may not yet be held by the target datanode, so the write fails. The 
> attachment shows how to reproduce this situation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820951#comment-17820951
 ] 

ASF GitHub Bot commented on HDFS-17383:
---

zhangshuyan0 commented on code in PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#discussion_r1503611470


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
##
@@ -2043,10 +2043,13 @@ public void setBalancerBandwidth(long bandwidth) throws 
IOException {
     }
   }
 
-  public void markAllDatanodesStale() {
+  public void markAllDatanodesStaleAndSetNeedKeyUpdate() {
     LOG.info("Marking all datanodes as stale");
     synchronized (this) {
       for (DatanodeDescriptor dn : datanodeMap.values()) {
+        if (blockManager.isBlockTokenEnabled()) {
+          dn.setNeedKeyUpdate(true);

Review Comment:
   Why do you `setNeedKeyUpdate` here?





> Datanode current block token should come from active NameNode in HA mode
> 
>
> Key: HDFS-17383
> URL: https://issues.apache.org/jira/browse/HDFS-17383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Priority: Major
>  Labels: pull-request-available
> Attachments: reproduce.diff
>
>
> We found that block transfers failed during the NameNode upgrade. The specific 
> error reported was that block token verification failed. The reason is that 
> during the datanode transfer-block process, the source datanode uses a block 
> token it generated itself, whose keyid comes from the ANN or SBN. However, 
> because the newly upgraded NN has just started, the keyid held by the source 
> datanode may not yet be held by the target datanode, so the write fails. The 
> attachment shows how to reproduce this situation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820950#comment-17820950
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

hadoop-yetus commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-1965754290

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 36s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  34m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 134m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6591 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2b3d0402a623 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a05d1d4043d42c7f08885245948518477c914310 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/2/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 

[jira] [Commented] (HDFS-17352) Add configuration to control whether DN delete this replica from disk when client requests a missing block

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820944#comment-17820944
 ] 

ASF GitHub Bot commented on HDFS-17352:
---

zhangshuyan0 commented on code in PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#discussion_r1503587650


##
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##
@@ -3982,6 +3982,17 @@
   
 
 
   +<property>
   +  <name>dfs.datanode.delete.corrupt.replica.from.disk.enable</name>
   +  <value>true</value>

Review Comment:
   If the default value is true, there is a risk of missing blocks according to 
[HDFS-16985](https://issues.apache.org/jira/browse/HDFS-16985). I suggest 
setting the default value to false, as missing blocks are a more serious 
problem than delayed deletion of files on disk. What's your opinion?
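
   For illustration, a minimal sketch of how an operator would set this flag 
explicitly, assuming the key name from the diff above; the shipped default is 
exactly what is under discussion here.

   ```java
   import org.apache.hadoop.conf.Configuration;

   public class CorruptReplicaDeletionConfig {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Explicit operator choice, independent of the shipped default.
       conf.setBoolean(
           "dfs.datanode.delete.corrupt.replica.from.disk.enable", false);
       System.out.println(conf.getBoolean(
           "dfs.datanode.delete.corrupt.replica.from.disk.enable", true));
     }
   }
   ```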





> Add configuration to control whether DN delete this replica from disk when 
> client requests a missing block 
> ---
>
> Key: HDFS-17352
> URL: https://issues.apache.org/jira/browse/HDFS-17352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> As discussed at 
> https://github.com/apache/hadoop/pull/6464#issuecomment-1902959898
> we should add a configuration to control whether the DN deletes this replica 
> from disk when a client requests a missing block.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17396) BootstrapStandby should download rollback image during RollingUpgrade

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820940#comment-17820940
 ] 

ASF GitHub Bot commented on HDFS-17396:
---

hadoop-yetus commented on PR #6583:
URL: https://github.com/apache/hadoop/pull/6583#issuecomment-1965705273

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   2m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   0m 42s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   2m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 37s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/3/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 4 new + 104 unchanged - 0 fixed = 
108 total (was 104)  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 32s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 199m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |   0m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch 

[jira] [Commented] (HDFS-17352) Add configuration to control whether DN delete this replica from disk when client requests a missing block

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820937#comment-17820937
 ] 

ASF GitHub Bot commented on HDFS-17352:
---

haiyang1987 commented on PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#issuecomment-1965674193

   Hi @ZanderXu @tomscut @zhangshuyan0 @tasanuma, please help review this PR 
again when you are free, thanks ~




> Add configuration to control whether DN delete this replica from disk when 
> client requests a missing block 
> ---
>
> Key: HDFS-17352
> URL: https://issues.apache.org/jira/browse/HDFS-17352
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> As discussed at 
> https://github.com/apache/hadoop/pull/6464#issuecomment-1902959898
> we should add a configuration to control whether the DN deletes this replica 
> from disk when a client requests a missing block.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17358) EC: infinite lease recovery caused by the length of RWR equals to zero.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820935#comment-17820935
 ] 

ASF GitHub Bot commented on HDFS-17358:
---

zhangshuyan0 commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1965670563

   Committed to trunk. Thanks for your contributions! @hfutatzhanghb 
@haiyang1987 @tomscut 




> EC: infinite lease recovery caused by the length of RWR equals to zero.
> ---
>
> Key: HDFS-17358
> URL: https://issues.apache.org/jira/browse/HDFS-17358
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> Recently, a strange case happened on our EC production cluster.
> The phenomenon is as described below: the NameNode does infinite lease 
> recovery of some EC files (~80K+), and those files can never be closed.
>  
> After digging into the logs and related code, we found that the root cause is 
> the code below in method `BlockRecoveryWorker$RecoveryTaskStriped#recover`:
> {code:java}
>           // we met info.getNumBytes==0 here! 
>   if (info != null &&
>               info.getGenerationStamp() >= block.getGenerationStamp() &&
>               info.getNumBytes() > 0) {
>             final BlockRecord existing = syncBlocks.get(blockId);
>             if (existing == null ||
>                 info.getNumBytes() > existing.rInfo.getNumBytes()) {
>               // if we have >1 replicas for the same internal block, we
>               // simply choose the one with larger length.
>               // TODO: better usage of redundant replicas
>               syncBlocks.put(blockId, new BlockRecord(id, proxyDN, info));
>             }
>           }
>   // throw exception here!
>           checkLocations(syncBlocks.size());
> {code}
> The related logs are as below:
> {code:java}
> java.io.IOException: 
> BP-1157541496-10.104.10.198-1702548776421:blk_-9223372036808032688_2938828 
> has no enough internal blocks, unable to start recovery. Locations=[...] 
> {code}
> {code:java}
> 2024-01-23 12:48:16,171 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> initReplicaRecovery: blk_-9223372036808032686_2938828, recoveryId=27615365, 
> replica=ReplicaUnderRecovery, blk_-9223372036808032686_2938828, RUR 
> getNumBytes() = 0 getBytesOnDisk() = 0 getVisibleLength()= -1 getVolume() = 
> /data25/hadoop/hdfs/datanode getBlockURI() = 
> file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-x.x.x.x-1702548776421/current/rbw/blk_-9223372036808032686
>  recoveryId=27529675 original=ReplicaWaitingToBeRecovered, 
> blk_-9223372036808032686_2938828, RWR getNumBytes() = 0 getBytesOnDisk() = 0 
> getVisibleLength()= -1 getVolume() = /data25/hadoop/hdfs/datanode 
> getBlockURI() = 
> file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-10.104.10.198-1702548776421/current/rbw/blk_-9223372036808032686
> {code}
> Because the length of the RWR is zero, the length of the object returned by 
> the code below is zero, so we can't put it into syncBlocks, and the 
> checkLocations method throws an exception.
> {code:java}
>           ReplicaRecoveryInfo info = callInitReplicaRecovery(proxyDN,
>               new RecoveringBlock(internalBlk, null, recoveryId)); {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17391) Adjust the checkpoint io buffer size to the chunk size

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820934#comment-17820934
 ] 

ASF GitHub Bot commented on HDFS-17391:
---

ThinkerLei opened a new pull request, #6594:
URL: https://github.com/apache/hadoop/pull/6594

   Adjust the checkpoint io buffer size to the chunk size to reduce checkpoint 
time.
   Before change:
   2022-07-11 07:10:50,900 INFO 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with 
txid 374700896827 to namenode at http://:50070/ in 1729.465 seconds
   After change:
   2022-07-12 08:15:55,068 INFO 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with 
txid 375717629244 to namenode at http://:50070/ in 858.668 seconds
   
   




> Adjust the checkpoint io buffer size to the chunk size
> --
>
> Key: HDFS-17391
> URL: https://issues.apache.org/jira/browse/HDFS-17391
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Priority: Major
>  Labels: pull-request-available
>
> Adjust the checkpoint io buffer size to the chunk size to reduce checkpoint 
> time.
> Before change:
> 2022-07-11 07:10:50,900 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with 
> txid 374700896827 to namenode at http://:50070 in 1729.465 seconds
> After change:
> 2022-07-12 08:15:55,068 INFO 
> org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Uploaded image with 
> txid 375717629244 to namenode at http://:50070  in 858.668 seconds



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17358) EC: infinite lease recovery caused by the length of RWR equals to zero.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820933#comment-17820933
 ] 

ASF GitHub Bot commented on HDFS-17358:
---

zhangshuyan0 merged PR #6509:
URL: https://github.com/apache/hadoop/pull/6509




> EC: infinite lease recovery caused by the length of RWR equals to zero.
> ---
>
> Key: HDFS-17358
> URL: https://issues.apache.org/jira/browse/HDFS-17358
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> Recently, a strange case happened on our EC production cluster.
> The phenomenon is as described below: the NameNode does infinite lease 
> recovery of some EC files (~80K+), and those files can never be closed.
>  
> After digging into the logs and related code, we found that the root cause is 
> the code below in method `BlockRecoveryWorker$RecoveryTaskStriped#recover`:
> {code:java}
>           // we met info.getNumBytes==0 here! 
>   if (info != null &&
>               info.getGenerationStamp() >= block.getGenerationStamp() &&
>               info.getNumBytes() > 0) {
>             final BlockRecord existing = syncBlocks.get(blockId);
>             if (existing == null ||
>                 info.getNumBytes() > existing.rInfo.getNumBytes()) {
>               // if we have >1 replicas for the same internal block, we
>               // simply choose the one with larger length.
>               // TODO: better usage of redundant replicas
>               syncBlocks.put(blockId, new BlockRecord(id, proxyDN, info));
>             }
>           }
>   // throw exception here!
>           checkLocations(syncBlocks.size());
> {code}
> The related logs are as below:
> {code:java}
> java.io.IOException: 
> BP-1157541496-10.104.10.198-1702548776421:blk_-9223372036808032688_2938828 
> has no enough internal blocks, unable to start recovery. Locations=[...] 
> {code}
> {code:java}
> 2024-01-23 12:48:16,171 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> initReplicaRecovery: blk_-9223372036808032686_2938828, recoveryId=27615365, 
> replica=ReplicaUnderRecovery, blk_-9223372036808032686_2938828, RUR 
> getNumBytes() = 0 getBytesOnDisk() = 0 getVisibleLength()= -1 getVolume() = 
> /data25/hadoop/hdfs/datanode getBlockURI() = 
> file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-x.x.x.x-1702548776421/current/rbw/blk_-9223372036808032686
>  recoveryId=27529675 original=ReplicaWaitingToBeRecovered, 
> blk_-9223372036808032686_2938828, RWR getNumBytes() = 0 getBytesOnDisk() = 0 
> getVisibleLength()= -1 getVolume() = /data25/hadoop/hdfs/datanode 
> getBlockURI() = 
> file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-10.104.10.198-1702548776421/current/rbw/blk_-9223372036808032686
> {code}
> Because the length of the RWR is zero, the length of the object returned by 
> the code below is zero, so we can't put it into syncBlocks, and the 
> checkLocations method throws an exception.
> {code:java}
>           ReplicaRecoveryInfo info = callInitReplicaRecovery(proxyDN,
>               new RecoveringBlock(internalBlk, null, recoveryId)); {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17358) EC: infinite lease recovery caused by the length of RWR equals to zero.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820928#comment-17820928
 ] 

ASF GitHub Bot commented on HDFS-17358:
---

hfutatzhanghb commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1965657242

   The failed UTs all passed in my local environment.




> EC: infinite lease recovery caused by the length of RWR equals to zero.
> ---
>
> Key: HDFS-17358
> URL: https://issues.apache.org/jira/browse/HDFS-17358
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> Recently, a strange case happened on our EC production cluster.
> The phenomenon: the NameNode performs infinite lease recovery on some EC 
> files (~80K+), and those files can never be closed.
>  
> After digging into the logs and related code, we found the root cause is the 
> code below in method `BlockRecoveryWorker$RecoveryTaskStriped#recover`:
> {code:java}
>           // we met info.getNumBytes==0 here! 
>   if (info != null &&
>               info.getGenerationStamp() >= block.getGenerationStamp() &&
>               info.getNumBytes() > 0) {
>             final BlockRecord existing = syncBlocks.get(blockId);
>             if (existing == null ||
>                 info.getNumBytes() > existing.rInfo.getNumBytes()) {
>               // if we have >1 replicas for the same internal block, we
>               // simply choose the one with larger length.
>               // TODO: better usage of redundant replicas
>               syncBlocks.put(blockId, new BlockRecord(id, proxyDN, info));
>             }
>           }
>   // throw exception here!
>           checkLocations(syncBlocks.size());
> {code}
> The related logs are as below:
> {code:java}
> java.io.IOException: 
> BP-1157541496-10.104.10.198-1702548776421:blk_-9223372036808032688_2938828 
> has no enough internal blocks, unable to start recovery. Locations=[...] 
> {code}
> {code:java}
> 2024-01-23 12:48:16,171 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> initReplicaRecovery: blk_-9223372036808032686_2938828, recoveryId=27615365, 
> replica=ReplicaUnderRecovery, blk_-9223372036808032686_2938828, RUR 
> getNumBytes() = 0 getBytesOnDisk() = 0 getVisibleLength()= -1 getVolume() = 
> /data25/hadoop/hdfs/datanode getBlockURI() = 
> file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-x.x.x.x-1702548776421/current/rbw/blk_-9223372036808032686
>  recoveryId=27529675 original=ReplicaWaitingToBeRecovered, 
> blk_-9223372036808032686_2938828, RWR getNumBytes() = 0 getBytesOnDisk() = 0 
> getVisibleLength()= -1 getVolume() = /data25/hadoop/hdfs/datanode 
> getBlockURI() = 
> file:/data25/hadoop/hdfs/datanode/current/BP-1157541496-10.104.10.198-1702548776421/current/rbw/blk_-9223372036808032686
> {code}
> Because the length of the RWR replica is zero, the length of the object 
> returned by the code below is also zero, so we cannot put it into syncBlocks, 
> and checkLocations then throws the exception above.
> {code:java}
>           ReplicaRecoveryInfo info = callInitReplicaRecovery(proxyDN,
>               new RecoveringBlock(internalBlk, null, recoveryId)); {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17365) EC: Add extra redunency configuration in checkStreamerFailures to prevent data loss.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820922#comment-17820922
 ] 

ASF GitHub Bot commented on HDFS-17365:
---

hfutatzhanghb commented on code in PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#discussion_r1503518197


##
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##
@@ -3908,6 +3908,18 @@
   
 
 
+<property>
+  <name>dfs.client.ec.EXAMPLEECPOLICYNAME.checkstreamer.redunency</name>

Review Comment:
   @tasanuma Sir, thanks a lot for your review and valuable opinion. I agree 
with you; we should make the configuration more intuitive.





> EC: Add extra redunency configuration in checkStreamerFailures to prevent 
> data loss.
> 
>
> Key: HDFS-17365
> URL: https://issues.apache.org/jira/browse/HDFS-17365
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820918#comment-17820918
 ] 

ASF GitHub Bot commented on HDFS-17299:
---

hadoop-yetus commented on PR #6566:
URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1965614194

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   5m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 39s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/9/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  41m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  42m 29s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   6m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   5m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 19s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/9/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 17 new + 244 unchanged - 2 fixed = 
261 total (was 246)  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 256m 54s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 448m 50s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 

[jira] [Commented] (HDFS-17396) BootstrapStandby should download rollback image during RollingUpgrade

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820875#comment-17820875
 ] 

ASF GitHub Bot commented on HDFS-17396:
---

hadoop-yetus commented on PR #6583:
URL: https://github.com/apache/hadoop/pull/6583#issuecomment-1965318424

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   2m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   0m 43s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 29s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 15s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | -1 :x: |  compile  |   0m 42s | 
[/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  cc  |   0m 42s | 
[/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 42s | 
[/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 38s | 
[/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-hdfs-project in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  cc  |   0m 38s | 
[/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/2/artifact/out/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  

[jira] [Commented] (HDFS-17378) Missing operationType for some operations in authorizer

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820871#comment-17820871
 ] 

ASF GitHub Bot commented on HDFS-17378:
---

hadoop-yetus commented on PR #6553:
URL: https://github.com/apache/hadoop/pull/6553#issuecomment-1965290806

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 109 unchanged - 1 
fixed = 109 total (was 110)  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 204m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6553/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 292m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6553/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6553 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d4cc6db0a8fa 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d68e060e85d2f356da6885c2e33508abe8cbf11c |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Commented] (HDFS-17358) EC: infinite lease recovery caused by the length of RWR equals to zero.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820797#comment-17820797
 ] 

ASF GitHub Bot commented on HDFS-17358:
---

hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1964732195

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 253m 32s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/29/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 423m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/29/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c38cac31b717 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b0b1ef7756ba5e37d23eddb91821a20bd1520dad |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/29/testReport/ |
   | Max. process+thread count | 3048 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820792#comment-17820792
 ] 

ASF GitHub Bot commented on HDFS-17299:
---

hadoop-yetus commented on PR #6566:
URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1964720571

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 22s |  |  
https://github.com/apache/hadoop/pull/6566 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6566 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/8/console |
   | versions | git=2.34.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> HDFS is not rack failure tolerant while creating a new file.
> 
>
> Key: HDFS-17299
> URL: https://issues.apache.org/jira/browse/HDFS-17299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.1
>Reporter: Rushabh Shah
>Assignee: Ritesh
>Priority: Critical
>  Labels: pull-request-available
> Attachments: repro.patch
>
>
> Recently we saw an HBase cluster outage when we mistakenly brought down 1 AZ.
> Our configuration:
> 1. We use 3 Availability Zones (AZs) for fault tolerance.
> 2. We use BlockPlacementPolicyRackFaultTolerant as the block placement 
> policy (see the configuration sketch after the reproduction steps below).
> 3. We use the following configuration parameters: 
> dfs.namenode.heartbeat.recheck-interval: 600000 
> dfs.heartbeat.interval: 3 
> So it will take 1230000 ms (20.5 mins) to detect that a datanode is dead, as 
> computed below.
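> (Dead-node detection time in HDFS is 2 * dfs.namenode.heartbeat.recheck-interval 
> + 10 * 1000 * dfs.heartbeat.interval, i.e. 2 * 600000 ms + 10 * 3000 ms = 
> 1230000 ms = 20.5 minutes; the recheck interval is in milliseconds and the 
> heartbeat interval in seconds.)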
>  
> Steps to reproduce:
>  # Bring down 1 AZ.
>  # HBase (HDFS client) tries to create a file (WAL file) and then calls 
> hflush on the newly created file.
>  # DataStreamer is not able to find block locations that satisfy the rack 
> placement policy (one copy in each rack, which essentially means one copy in 
> each AZ).
>  # Since all the datanodes in that AZ are down but still considered alive by 
> the namenode, the client gets different datanodes, but all of them are still 
> in the same AZ. See logs below.
>  # HBase is not able to create a WAL file and it aborts the region server.
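>  
> Regarding the BlockPlacementPolicyRackFaultTolerant setup in item 2 above, a 
> minimal hdfs-site.xml sketch (assuming the standard 
> dfs.block.replicator.classname key):
> {code:xml}
> <property>
>   <name>dfs.block.replicator.classname</name>
>   <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
> </property>
> {code}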
>  
> Relevant logs from hdfs client and namenode
>  
> {noformat}
> 2023-12-16 17:17:43,818 INFO  [on default port 9000] FSNamesystem.audit - 
> allowed=true ugi=hbase/ (auth:KERBEROS) ip=  
> cmd=create  src=/hbase/WALs/  dst=null
> 2023-12-16 17:17:43,978 INFO  [on default port 9000] hdfs.StateChange - 
> BLOCK* allocate blk_1214652565_140946716, replicas=:50010, 
> :50010, :50010 for /hbase/WALs/
> 2023-12-16 17:17:44,061 INFO  [Thread-39087] hdfs.DataStreamer - Exception in 
> createBlockOutputStream
> java.io.IOException: Got error, status=ERROR, status message , ack with 
> firstBadLink as :50010
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:113)
> at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1747)
> at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1651)
> at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:715)
> 2023-12-16 17:17:44,061 WARN  [Thread-39087] hdfs.DataStreamer - Abandoning 
> BP-179318874--1594838129323:blk_1214652565_140946716
> 2023-12-16 17:17:44,179 WARN  [Thread-39087] hdfs.DataStreamer - Excluding 
> datanode 
> DatanodeInfoWithStorage[:50010,DS-a493abdb-3ac3-49b1-9bfb-848baf5c1c2c,DISK]
> 2023-12-16 17:17:44,339 INFO  [on default port 9000] hdfs.StateChange - 
> BLOCK* allocate blk_1214652580_140946764, replicas=:50010, 
> :50010, :50010 for /hbase/WALs/
> 2023-12-16 17:17:44,369 INFO  [Thread-39087] hdfs.DataStreamer - Exception in 
> createBlockOutputStream
> java.io.IOException: Got error, status=ERROR, status message , ack with 
> firstBadLink as :50010
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:113)
> at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1747)
> at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1651)
> at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:715)
> 2023-12-16 17:17:44,369 WARN  [Thread-39087] hdfs.DataStreamer - Abandoning 
> 

[jira] [Commented] (HDFS-17365) EC: Add extra redunency configuration in checkStreamerFailures to prevent data loss.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820752#comment-17820752
 ] 

ASF GitHub Bot commented on HDFS-17365:
---

tasanuma commented on code in PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#discussion_r1502860434


##
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##
@@ -3908,6 +3908,18 @@
   
 
 
+<property>
+  <name>dfs.client.ec.EXAMPLEECPOLICYNAME.checkstreamer.redunency</name>

Review Comment:
   @hfutatzhanghb  Thanks for the PR. I think it's a good feature.
   
   In my honest opinion, 
`dfs.client.ec.EXAMPLEECPOLICYNAME.checkstreamer.redunency` is 
counter-intuitive. I would prefer a setting that interprets values the other 
way around: something like 
`dfs.client.ec.EXAMPLEECPOLICYNAME.failed.write.block.tolerated`, where a 
value of 0 means no failures are tolerated and a value of 3 means up to 3 
failures in block writing can be tolerated. If the setting is empty (which 
would be the default), failures are tolerated up to the number of parity 
blocks. This is just my personal view.
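   
   For illustration, a hypothetical hdfs-default.xml entry for the naming 
suggested above (property name, default, and description are assumptions, not 
a committed change):
   
   ```xml
   <property>
     <name>dfs.client.ec.EXAMPLEECPOLICYNAME.failed.write.block.tolerated</name>
     <value></value>
     <description>
       The number of failed streamers tolerated while writing an EC file with
       the given policy: 0 tolerates no failures, 3 tolerates up to 3 failed
       block writes, and an empty value (the default) tolerates failures up
       to the number of parity blocks.
     </description>
   </property>
   ```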





> EC: Add extra redunency configuration in checkStreamerFailures to prevent 
> data loss.
> 
>
> Key: HDFS-17365
> URL: https://issues.apache.org/jira/browse/HDFS-17365
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17358) EC: infinite lease recovery caused by the length of RWR equals to zero.

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820704#comment-17820704
 ] 

ASF GitHub Bot commented on HDFS-17358:
---

hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1964163458

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 197m 23s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/28/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 299m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/28/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1895a134d100 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5c79b1385226049a124730171225d34960ac505b |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/28/testReport/ |
   | Max. process+thread count | 4003 (vs. ulimit of 5500) |
   | modules | 

[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820692#comment-17820692
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

hadoop-yetus commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-1963984682

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 37s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  34m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 133m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6591 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8fa7151b2118 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3a3d077ce1c2d68983743f0bc99ff5de79a2169c |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/1/testReport/ |
   | Max. process+thread count | 668 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 

[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820647#comment-17820647
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

xleoken opened a new pull request, #6591:
URL: https://github.com/apache/hadoop/pull/6591

   
   
   ### Description of PR
   
   When there is a network issue between the client and a DN, the write 
process hangs. We hope to choose another DN as soon as possible when 
encountering network problems, as sketched below.
   
   
![hadoop](https://github.com/apache/hadoop/assets/95013770/bf78e1e9-8a96-4f82-be59-d50a27b6882a)
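   
   A hypothetical illustration of the idea, not the actual patch (the 
configuration key is the existing dfs.client.socket-timeout; the value and 
its effect on failover speed are assumptions):
   
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
   
   public class FastFailoverSketch {
     public static void main(String[] args) throws IOException {
       // Lower the client socket read timeout so a dead network path fails
       // fast; DataStreamer can then exclude the bad DataNode and rebuild
       // the pipeline instead of hanging for the full default timeout.
       Configuration conf = new Configuration();
       conf.setInt(HdfsClientConfigKeys.DFS_CLIENT_SOCKET_TIMEOUT_KEY, 10000);
       FileSystem fs = FileSystem.get(conf);
       System.out.println("Using filesystem: " + fs.getUri());
     }
   }
   ```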
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Choose another DN as soon as possible, when encountering network issues
> ---
>
> Key: HDFS-17397
> URL: https://issues.apache.org/jira/browse/HDFS-17397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xleoken
>Priority: Minor
> Attachments: hadoop.png
>
>
> Choose another DN as soon as possible, when encountering network issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17397:
--
Labels: pull-request-available  (was: )

> Choose another DN as soon as possible, when encountering network issues
> ---
>
> Key: HDFS-17397
> URL: https://issues.apache.org/jira/browse/HDFS-17397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xleoken
>Priority: Minor
>  Labels: pull-request-available
> Attachments: hadoop.png
>
>
> Choose another DN as soon as possible, when encountering network issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread xleoken (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xleoken updated HDFS-17397:
---
Description: Choose another DN as soon as possible, when encountering 
network issues.

> Choose another DN as soon as possible, when encountering network issues
> ---
>
> Key: HDFS-17397
> URL: https://issues.apache.org/jira/browse/HDFS-17397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xleoken
>Priority: Minor
> Attachments: hadoop.png
>
>
> Choose another DN as soon as possible, when encountering network issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread xleoken (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xleoken updated HDFS-17397:
---
Attachment: hadoop.png

> Choose another DN as soon as possible, when encountering network issues
> ---
>
> Key: HDFS-17397
> URL: https://issues.apache.org/jira/browse/HDFS-17397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: xleoken
>Priority: Minor
> Attachments: hadoop.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-02-26 Thread xleoken (Jira)
xleoken created HDFS-17397:
--

 Summary: Choose another DN as soon as possible, when encountering 
network issues
 Key: HDFS-17397
 URL: https://issues.apache.org/jira/browse/HDFS-17397
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: xleoken






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org