[jira] [Commented] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819961#comment-17819961 ] ASF GitHub Bot commented on HDFS-17393: --- hadoop-yetus commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1960866721 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 7m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | -1 :x: | compile | 0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. 
| | -1 :x: | compile | 0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | -0 :warning: | checkstyle | 0m 21s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs | | -1 :x: | mvnsite | 0m 22s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | -1 :x: | spotbugs | 0m 22s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. 
| | +1 :green_heart: | shadedclient | 2m 33s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 22s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 22s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 0m 22s |
[jira] [Commented] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819958#comment-17819958 ] ASF GitHub Bot commented on HDFS-17393: --- hadoop-yetus commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1960863981 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 3s | | Docker mode activated. | | -1 :x: | patch | 0m 5s | | https://github.com/apache/hadoop/pull/6567 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6567 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6567/3/console | | versions | git=2.25.1 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > Remove unused cond in FSNamesystem > -- > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819955#comment-17819955 ] ASF GitHub Bot commented on HDFS-17393: --- ferhui commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1960858195 @tasanuma @slfan1989 Thanks for reviewing it. Merged. > Remove unused cond in FSNamesystem > -- > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei resolved HDFS-17393. Fix Version/s: 3.5.0 Resolution: Fixed > Remove unused cond in FSNamesystem > -- > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819954#comment-17819954 ] ASF GitHub Bot commented on HDFS-17393: --- ferhui merged PR #6567: URL: https://github.com/apache/hadoop/pull/6567 > Remove unused cond in FSNamesystem > -- > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819953#comment-17819953 ] ASF GitHub Bot commented on HDFS-17393: --- ferhui commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1960854781 @ZanderXu Thanks for pushing it forward. > Remove unused cond in FSNamesystem > -- > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17395) [FGL] Use FSLock to protect ErasureCodingPolicy related operations
[ https://issues.apache.org/jira/browse/HDFS-17395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-17395: -- Labels: pull-request-available (was: ) > [FGL] Use FSLock to protect ErasureCodingPolicy related operations > -- > > Key: HDFS-17395 > URL: https://issues.apache.org/jira/browse/HDFS-17395 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > NameNode supports dynamically changing ErasureCodingPolicies, so these > policies should be protected by one lock; the current > implementation uses the global lock. > > ErasureCodingPolicy operations mainly involve the directory tree and edit logs, such as: > * getErasureCodingPolicy(String src) > * setErasureCodingPolicy(String src, String ecPolicyName) > * addErasureCodingPolicies(ErasureCodingPolicy[] policies) > * disableErasureCodingPolicy(String ecPolicyName) > * enableErasureCodingPolicy(String ecPolicyName) > So we can use the FSLock to make these operations thread-safe. > Another reason to use the FSLock to protect ErasureCodingPolicy-related > operations is that we already use the FSLock to make edit-log write operations thread-safe. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17395) [FGL] Use FSLock to protect ErasureCodingPolicy related operations
[ https://issues.apache.org/jira/browse/HDFS-17395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819951#comment-17819951 ] ASF GitHub Bot commented on HDFS-17395: --- ZanderXu opened a new pull request, #6579: URL: https://github.com/apache/hadoop/pull/6579 NameNode supports dynamically changing ErasureCodingPolicies, so these policies should be protected by one lock; the current implementation uses the global lock. ErasureCodingPolicy operations mainly involve the directory tree and edit logs, such as: - getErasureCodingPolicy(String src) - setErasureCodingPolicy(String src, String ecPolicyName) - addErasureCodingPolicies(ErasureCodingPolicy[] policies) - disableErasureCodingPolicy(String ecPolicyName) - enableErasureCodingPolicy(String ecPolicyName) So we can use the FSLock to make these operations thread-safe. Another reason to use the FSLock to protect ErasureCodingPolicy-related operations is that we already use the FSLock to make edit-log write operations thread-safe. > [FGL] Use FSLock to protect ErasureCodingPolicy related operations > -- > > Key: HDFS-17395 > URL: https://issues.apache.org/jira/browse/HDFS-17395 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > > NameNode supports dynamically changing ErasureCodingPolicies, so these > policies should be protected by one lock; the current > implementation uses the global lock. > > ErasureCodingPolicy operations mainly involve the directory tree and edit logs, such as: > * getErasureCodingPolicy(String src) > * setErasureCodingPolicy(String src, String ecPolicyName) > * addErasureCodingPolicies(ErasureCodingPolicy[] policies) > * disableErasureCodingPolicy(String ecPolicyName) > * enableErasureCodingPolicy(String ecPolicyName) > So we can use the FSLock to make these operations thread-safe. > Another reason to use the FSLock to protect ErasureCodingPolicy-related > operations is that we already use the FSLock to make edit-log write operations thread-safe. 
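The FSLock usage proposed above can be sketched with a plain `ReentrantReadWriteLock`: mutating policy operations take the write lock, lookups take the read lock. This is a minimal illustration under assumptions, not the actual FSNamesystem code; the class and method names (`EcPolicyLockSketch`, `addPolicy`, and so on) are hypothetical, and a `Map<String, Boolean>` stands in for the real ErasureCodingPolicyManager state.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch of guarding EC-policy state with a single read/write "FSLock".
// All names here are illustrative, not real Hadoop APIs.
public class EcPolicyLockSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  // Stands in for ErasureCodingPolicyManager state: policy name -> enabled?
  private final Map<String, Boolean> policies = new HashMap<>();

  // Mutators take the write lock, mirroring how the corresponding RPCs
  // would also append an edit-log record under the same FSLock.
  public void addPolicy(String name) {
    fsLock.writeLock().lock();
    try {
      policies.put(name, false); // new policies start disabled
    } finally {
      fsLock.writeLock().unlock();
    }
  }

  public void enablePolicy(String name) {
    fsLock.writeLock().lock();
    try {
      policies.computeIfPresent(name, (k, v) -> true);
    } finally {
      fsLock.writeLock().unlock();
    }
  }

  // Read-only lookups only need the read lock, so they can run concurrently.
  public boolean isEnabled(String name) {
    fsLock.readLock().lock();
    try {
      return Boolean.TRUE.equals(policies.get(name));
    } finally {
      fsLock.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    EcPolicyLockSketch ec = new EcPolicyLockSketch();
    ec.addPolicy("RS-6-3-1024k");    // analogous to addErasureCodingPolicies
    ec.enablePolicy("RS-6-3-1024k"); // analogous to enableErasureCodingPolicy
    System.out.println(ec.isEnabled("RS-6-3-1024k")); // prints "true"
  }
}
```

A read/write lock fits this workload because policy reads (every file create consults the policy) vastly outnumber policy changes.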
[jira] [Updated] (HDFS-17393) [FGL] Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZanderXu updated HDFS-17393: Parent: (was: HDFS-17384) Issue Type: Improvement (was: Sub-task) > [FGL] Remove unused cond in FSNamesystem > > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17393) Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZanderXu updated HDFS-17393: Summary: Remove unused cond in FSNamesystem (was: [FGL] Remove unused cond in FSNamesystem) > Remove unused cond in FSNamesystem > -- > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17393) [FGL] Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819948#comment-17819948 ] ASF GitHub Bot commented on HDFS-17393: --- ZanderXu commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1960840610 Thanks @tasanuma @slfan1989 for your review; let me modify this ticket. > [FGL] Remove unused cond in FSNamesystem > > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15869) Network issue while FSEditLogAsync is executing RpcEdit.logSyncNotify can cause the namenode to hang
[ https://issues.apache.org/jira/browse/HDFS-15869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819947#comment-17819947 ] ASF GitHub Bot commented on HDFS-15869: --- ZanderXu commented on PR #2737: URL: https://github.com/apache/hadoop/pull/2737#issuecomment-1960838297 If I missed some other concerns, please let me know, we can find solutions together to push this ticket forward. > Network issue while FSEditLogAsync is executing RpcEdit.logSyncNotify can > cause the namenode to hang > > > Key: HDFS-15869 > URL: https://issues.apache.org/jira/browse/HDFS-15869 > Project: Hadoop HDFS > Issue Type: Improvement > Components: fs async, namenode >Affects Versions: 3.2.2 >Reporter: Haoze Wu >Assignee: Haoze Wu >Priority: Major > Labels: pull-request-available > Attachments: 1.png, 2.png > > Time Spent: 6.5h > Remaining Estimate: 0h > > We were doing some testing of the latest Hadoop stable release 3.2.2 and > found some network issue can cause the namenode to hang even with the async > edit logging (FSEditLogAsync). > The workflow of the FSEditLogAsync thread is basically: > # get EditLog from a queue (line 229) > # do the transaction (line 232) > # sync the log if doSync (line 243) > # do logSyncNotify (line 248) > {code:java} > //hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogAsync.java > @Override > public void run() { > try { > while (true) { > boolean doSync; > Edit edit = dequeueEdit(); // > line 229 > if (edit != null) { > // sync if requested by edit log. > doSync = edit.logEdit(); // > line 232 > syncWaitQ.add(edit); > } else { > // sync when editq runs dry, but have edits pending a sync. > doSync = !syncWaitQ.isEmpty(); > } > if (doSync) { > // normally edit log exceptions cause the NN to terminate, but tests > // relying on ExitUtil.terminate need to see the exception. 
> RuntimeException syncEx = null; > try { > logSync(getLastWrittenTxId()); // > line 243 > } catch (RuntimeException ex) { > syncEx = ex; > } > while ((edit = syncWaitQ.poll()) != null) { > edit.logSyncNotify(syncEx);// > line 248 > } > } > } > } catch (InterruptedException ie) { > LOG.info(Thread.currentThread().getName() + " was interrupted, > exiting"); > } catch (Throwable t) { > terminate(t); > } > } > {code} > In terms of the step 4, FSEditLogAsync$RpcEdit.logSyncNotify is > essentially doing some network write (line 365). > {code:java} > //hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogAsync.java > private static class RpcEdit extends Edit { > // ... > @Override > public void logSyncNotify(RuntimeException syncEx) { > try { > if (syncEx == null) { > call.sendResponse(); // line > 365 > } else { > call.abortResponse(syncEx); > } > } catch (Exception e) {} // don't care if not sent. > } > // ... > }{code} > If the sendResponse operation in line 365 gets stuck, then the whole > FSEditLogAsync thread is not able to proceed. In this case, the critical > logSync (line 243) can’t be executed, for the incoming transactions. Then the > namenode hangs. This is undesirable because FSEditLogAsync’s key feature is > asynchronous edit logging that is supposed to tolerate slow I/O. 
> To see why the sendResponse operation in line 365 may get stuck, here is > the stack trace: > {code:java} > '(org.apache.hadoop.ipc.Server,channelWrite,3593)', > '(org.apache.hadoop.ipc.Server,access$1700,139)', > '(org.apache.hadoop.ipc.Server$Responder,processResponse,1657)', > '(org.apache.hadoop.ipc.Server$Responder,doRespond,1727)', > '(org.apache.hadoop.ipc.Server$Connection,sendResponse,2828)', > '(org.apache.hadoop.ipc.Server$Connection,access$300,1799)', > '(org.apache.hadoop.ipc.Server$RpcCall,doResponse,)', > '(org.apache.hadoop.ipc.Server$Call,doResponse,903)', > '(org.apache.hadoop.ipc.Server$Call,sendResponse,889)', > > '(org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync$RpcEdit,logSyncNotify,365)', > '(org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync,run,248)', > '(java.lang.Thread,run,748)' > {code} > The
[jira] [Commented] (HDFS-15869) Network issue while FSEditLogAsync is executing RpcEdit.logSyncNotify can cause the namenode to hang
[ https://issues.apache.org/jira/browse/HDFS-15869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819939#comment-17819939 ] ASF GitHub Bot commented on HDFS-15869: --- ZanderXu commented on PR #2737: URL: https://github.com/apache/hadoop/pull/2737#issuecomment-1960828209 Thanks for your work and the discussions on this problem. It took me a long time to catch up on all the ideas and concerns. I have some thoughts and questions about this ticket. Some questions on [HDFS-15869](https://issues.apache.org/jira/browse/HDFS-15869): > The `channel.write(buffer)` operation in line 3594 may be slow. Although for this specific stack trace, the channel is initialized in the non-blocking mode, there is still a chance of being slow depending on native write implementation in the OS (e.g., a kernel issue). Furthermore, the channelIO invocation in line 3594 may also get stuck, since it waits until the buffer is drained: `ret = (readCh == null) ? writeCh.write(buf) : readCh.read(buf);` will return 0 if the namenode cannot write more data to this connection, right? `RpcEdit.logSyncNotify` will add the response to this connection's queue and let the Responder take over the job, right? So FSEditLogAsync can move on to process the next jobs, right? Some thoughts on [HDFS-15869](https://issues.apache.org/jira/browse/HDFS-15869): Actually, I encountered this problem in our prod environment: the `FSEditLogAsync` thread spent a little more time sending responses to clients, which had a big performance impact on writing edits to the JNs. So I used a new single thread to do these send-response jobs. Of course, we could use multiple threads to send responses to clients. Creating a new thread per task is expensive, so we use a producer-consumer mode to fix this problem. - FSEditLogAsync just puts the task into a bounded blocking queue. - A ResponseSender thread takes tasks from the queue and sends them to clients. 
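The producer-consumer scheme described in the comment above can be sketched as follows. This is an illustrative sketch, not the actual patch: `ResponseSenderSketch`, `notifySync`, and the queue capacity are hypothetical names and values, and a `Runnable` stands in for the real `call.sendResponse()` work.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Sketch of offloading logSyncNotify-style work to a dedicated consumer thread,
// so a slow client response cannot stall the edit-log thread.
public class ResponseSenderSketch {
  // Bounded queue: a slow consumer applies back-pressure instead of growing without limit.
  private final BlockingQueue<Runnable> responseQueue = new ArrayBlockingQueue<>(1024);

  public ResponseSenderSketch() {
    Thread sender = new Thread(() -> {
      try {
        while (true) {
          Runnable task = responseQueue.take(); // consumer: one response at a time
          try {
            task.run();
          } catch (RuntimeException e) {
            // don't care if a single response could not be sent
          }
        }
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt(); // exit on interrupt
      }
    }, "ResponseSender");
    sender.setDaemon(true);
    sender.start();
  }

  // Producer side: what the edit-log thread would call instead of sending inline.
  // Returns quickly, blocking only when the queue is full.
  public void notifySync(Runnable sendResponse) throws InterruptedException {
    responseQueue.put(sendResponse);
  }

  public static void main(String[] args) throws InterruptedException {
    ResponseSenderSketch sketch = new ResponseSenderSketch();
    CountDownLatch done = new CountDownLatch(3);
    for (int i = 0; i < 3; i++) {
      sketch.notifySync(done::countDown); // enqueue three "responses"
    }
    done.await(); // all three ran off the producer thread
    System.out.println("all responses sent");
  }
}
```

The bounded capacity is the key design choice: when clients cannot drain responses fast enough, the producer eventually blocks on `put`, which is visible back-pressure rather than an unbounded memory leak.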
About "Bug" or "Improvement": I think it should be a performance improvement, since all processes work as expected; there is no blocking or hanging, just slowness. Some thoughts on [HDFS-15957](https://issues.apache.org/jira/browse/HDFS-15957): - I think the namenode should directly close the connection if an IOException happens in `RpcEdit.logSyncNotify`, since we cannot let the client hang forever; otherwise it seems that the namenode drops a request. @functioner Looking forward to your ideas and confirmation. @daryn-sharp @Hexiaoqiao @linyiqun @amahussein Looking forward to your ideas. I hope we can push this ticket forward. > Network issue while FSEditLogAsync is executing RpcEdit.logSyncNotify can > cause the namenode to hang > > > Key: HDFS-15869 > URL: https://issues.apache.org/jira/browse/HDFS-15869 > Project: Hadoop HDFS > Issue Type: Improvement > Components: fs async, namenode >Affects Versions: 3.2.2 >Reporter: Haoze Wu >Assignee: Haoze Wu >Priority: Major > Labels: pull-request-available > Attachments: 1.png, 2.png > > Time Spent: 6.5h > Remaining Estimate: 0h > > We were doing some testing of the latest Hadoop stable release 3.2.2 and > found some network issue can cause the namenode to hang even with the async > edit logging (FSEditLogAsync). > The workflow of the FSEditLogAsync thread is basically: > # get EditLog from a queue (line 229) > # do the transaction (line 232) > # sync the log if doSync (line 243) > # do logSyncNotify (line 248) > {code:java} > //hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogAsync.java > @Override > public void run() { > try { > while (true) { > boolean doSync; > Edit edit = dequeueEdit(); // > line 229 > if (edit != null) { > // sync if requested by edit log. > doSync = edit.logEdit(); // > line 232 > syncWaitQ.add(edit); > } else { > // sync when editq runs dry, but have edits pending a sync. 
> doSync = !syncWaitQ.isEmpty(); > } > if (doSync) { > // normally edit log exceptions cause the NN to terminate, but tests > // relying on ExitUtil.terminate need to see the exception. > RuntimeException syncEx = null; > try { > logSync(getLastWrittenTxId()); // > line 243 > } catch (RuntimeException ex) { > syncEx = ex; > } > while ((edit = syncWaitQ.poll()) != null) { > edit.logSyncNotify(syncEx);
[jira] [Commented] (HDFS-17390) [FGL] FSDirectory supports this fine-grained locking
[ https://issues.apache.org/jira/browse/HDFS-17390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819899#comment-17819899 ] ASF GitHub Bot commented on HDFS-17390: --- hadoop-yetus commented on PR #6573: URL: https://github.com/apache/hadoop/pull/6573#issuecomment-1960700492 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 23s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ HDFS-17384 Compile Tests _ | | -1 :x: | mvninstall | 0m 21s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-mvninstall-root.txt) | root in HDFS-17384 failed. | | -1 :x: | compile | 0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | compile | 0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
| | -0 :warning: | checkstyle | 0m 20s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs | | -1 :x: | mvnsite | 0m 22s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in HDFS-17384 failed. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | -1 :x: | spotbugs | 0m 22s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in HDFS-17384 failed. | | +1 :green_heart: | shadedclient | 2m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 22s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 22s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 0m 22s |
[jira] [Commented] (HDFS-17390) [FGL] FSDirectory supports this fine-grained locking
[ https://issues.apache.org/jira/browse/HDFS-17390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819892#comment-17819892 ] ASF GitHub Bot commented on HDFS-17390: --- hadoop-yetus commented on PR #6573: URL: https://github.com/apache/hadoop/pull/6573#issuecomment-1960683405 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ HDFS-17384 Compile Tests _ | | -1 :x: | mvninstall | 0m 22s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-mvninstall-root.txt) | root in HDFS-17384 failed. | | -1 :x: | compile | 0m 23s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | compile | 0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
| | -0 :warning: | checkstyle | 0m 20s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs | | -1 :x: | mvnsite | 0m 22s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in HDFS-17384 failed. | | -1 :x: | javadoc | 0m 23s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in HDFS-17384 failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | -1 :x: | spotbugs | 0m 22s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in HDFS-17384 failed. | | +1 :green_heart: | shadedclient | 2m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 21s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 22s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 0m 22s |
[jira] [Assigned] (HDFS-17384) [FGL] Replace the global lock with global FS Lock and global BM lock
[ https://issues.apache.org/jira/browse/HDFS-17384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei reassigned HDFS-17384: -- Assignee: ZanderXu > [FGL] Replace the global lock with global FS Lock and global BM lock > > > Key: HDFS-17384 > URL: https://issues.apache.org/jira/browse/HDFS-17384 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: FGL > > First, we can replace the current global lock with two locks: the global FS lock > and the global BM lock. > The global FS lock is used to make directory-tree-related operations > thread-safe. > The global BM lock is used to make block-related operations and DN-related > operations thread-safe. > > For operations involving both the directory tree and blocks or DNs, both the global > FS lock and the global BM lock are acquired. > > The lock order should be: > * The global FS lock > * The global BM lock > > There are some special requirements for this ticket: > * End-users can choose between the global lock and fine-grained locking through > configuration. > * The current implementation logic should be modified as little as possible. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
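The two-lock scheme and the FS-before-BM lock order described in the ticket can be sketched as follows. This is a minimal illustration under the ticket's stated design, not the actual FSNamesystem code; the class and method names are invented.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the proposed split: one lock for the directory tree (FS),
// one for block/DN state (BM). Every path that needs both must take
// them in the same order (FS first, then BM) to rule out deadlock.
public class FglSketch {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(); // directory tree
    private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock(); // blocks + DNs

    // Directory-tree-only operation: the FS lock alone is enough.
    public String fsOnlyOp() {
        fsLock.writeLock().lock();
        try {
            return "fs"; // mutate the directory tree here
        } finally {
            fsLock.writeLock().unlock();
        }
    }

    // Operation touching both the tree and block/DN state:
    // acquire FS, then BM; release in reverse order.
    public String fsAndBmOp() {
        fsLock.writeLock().lock();
        try {
            bmLock.writeLock().lock();
            try {
                return "fs+bm"; // update inodes and block maps here
            } finally {
                bmLock.writeLock().unlock();
            }
        } finally {
            fsLock.writeLock().unlock();
        }
    }
}
```

With a configuration switch, both fields could point at the same lock instance to recover today's single-global-lock behavior without changing call sites.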
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819858#comment-17819858 ] ASF GitHub Bot commented on HDFS-17299: --- hadoop-yetus commented on PR #6566: URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1960584739 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/7/console in case of problems. > HDFS is not rack failure tolerant while creating a new file. > > > Key: HDFS-17299 > URL: https://issues.apache.org/jira/browse/HDFS-17299 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.10.1 >Reporter: Rushabh Shah >Assignee: Ritesh >Priority: Critical > Labels: pull-request-available > Attachments: repro.patch > > > Recently we saw an HBase cluster outage when we mistakenly brought down 1 AZ. > Our configuration: > 1. We use 3 Availability Zones (AZs) for fault tolerance. > 2. We use BlockPlacementPolicyRackFaultTolerant as the block placement policy. > 3. We use the following configuration parameters: > dfs.namenode.heartbeat.recheck-interval: 60 > dfs.heartbeat.interval: 3 > So it will take 1230000 ms (20.5 mins) to detect that the datanode is dead. > > Steps to reproduce: > # Bring down 1 AZ. > # HBase (HDFS client) tries to create a file (WAL file) and then calls > hflush on the newly created file. > # DataStreamer is not able to find block locations that satisfy the rack > placement policy (one copy in each rack, which essentially means one copy in > each AZ) > # Since all the datanodes in that AZ are down but still considered alive by the namenode, > the client gets different datanodes but still all of them are in the same AZ. > See logs below. > # HBase is not able to create a WAL file and it aborts the region server. 
> > Relevant logs from hdfs client and namenode > > {noformat} > 2023-12-16 17:17:43,818 INFO [on default port 9000] FSNamesystem.audit - > allowed=trueugi=hbase/ (auth:KERBEROS) ip= > cmd=create src=/hbase/WALs/ dst=null > 2023-12-16 17:17:43,978 INFO [on default port 9000] hdfs.StateChange - > BLOCK* allocate blk_1214652565_140946716, replicas=:50010, > :50010, :50010 for /hbase/WALs/ > 2023-12-16 17:17:44,061 INFO [Thread-39087] hdfs.DataStreamer - Exception in > createBlockOutputStream > java.io.IOException: Got error, status=ERROR, status message , ack with > firstBadLink as :50010 > at > org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:113) > at > org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1747) > at > org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1651) > at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:715) > 2023-12-16 17:17:44,061 WARN [Thread-39087] hdfs.DataStreamer - Abandoning > BP-179318874--1594838129323:blk_1214652565_140946716 > 2023-12-16 17:17:44,179 WARN [Thread-39087] hdfs.DataStreamer - Excluding > datanode > DatanodeInfoWithStorage[:50010,DS-a493abdb-3ac3-49b1-9bfb-848baf5c1c2c,DISK] > 2023-12-16 17:17:44,339 INFO [on default port 9000] hdfs.StateChange - > BLOCK* allocate blk_1214652580_140946764, replicas=:50010, > :50010, :50010 for /hbase/WALs/ > 2023-12-16 17:17:44,369 INFO [Thread-39087] hdfs.DataStreamer - Exception in > createBlockOutputStream > java.io.IOException: Got error, status=ERROR, status message , ack with > firstBadLink as :50010 > at > org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:113) > at > org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1747) > at > org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1651) > at 
org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:715) > 2023-12-16 17:17:44,369 WARN [Thread-39087] hdfs.DataStreamer - Abandoning > BP-179318874-NN-IP-1594838129323:blk_1214652580_140946764 > 2023-12-16 17:17:44,454 WARN [Thread-39087] hdfs.DataStreamer - Excluding > datanode > DatanodeInfoWithStorage[AZ-2-dn-2:50010,DS-46bb45cc-af89-46f3-9f9d-24e4fdc35b6d,DISK] > 2023-12-16 17:17:44,522 INFO [on default port 9000] hdfs.StateChange - > BLOCK* allocate blk_1214652594_140946796, replicas=:50010, > :50010, :50010 for /hbase/WALs/ > 2023-12-16 17:17:44,712 INFO [Thread-39087] hdfs.DataStreamer - Exception in > createBlockOutputStream > java.io.IOException: Got error, status=ERROR, status message , ack with > firstBadLink as :50010 > at >
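The roughly 20.5-minute detection delay mentioned in the issue description follows from the NameNode's heartbeat-expiry rule: a DataNode is declared dead after 2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval without heartbeats. A quick check of the arithmetic, assuming a recheck interval of 600000 ms and a 3 s heartbeat (the exact configured values are truncated in the description above):

```java
// Heartbeat-expiry arithmetic: the NameNode marks a DataNode dead after
// 2 * recheck-interval + 10 * heartbeat-interval with no heartbeats seen.
public class DeadNodeInterval {
    static long deadIntervalMs(long recheckMs, long heartbeatSec) {
        return 2 * recheckMs + 10 * heartbeatSec * 1000;
    }

    public static void main(String[] args) {
        long ms = deadIntervalMs(600_000L, 3L);
        // 1200000 + 30000 = 1230000 ms, i.e. 20.5 minutes
        System.out.println(ms + " ms = " + ms / 60000.0 + " min");
    }
}
```

During that whole window the NameNode keeps handing out pipeline targets in the downed AZ, which is what the excluded-datanode loop in the client logs shows.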
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819825#comment-17819825 ] ASF GitHub Bot commented on HDFS-17299: --- hadoop-yetus commented on PR #6566: URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1960345982 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 11m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 6s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 31m 39s | | trunk passed | | +1 :green_heart: | compile | 17m 21s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 16m 6s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 22s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 23s | | trunk passed | | +1 :green_heart: | javadoc | 3m 1s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 28s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 45s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 2m 45s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/6/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs warnings. | | -1 :x: | shadedclient | 5m 4s | | branch has errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 17s | | the patch passed | | +1 :green_heart: | compile | 17m 1s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 17m 1s | | the patch passed | | +1 :green_heart: | compile | 15m 58s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 15m 58s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/6/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 4m 19s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 17 new + 243 unchanged - 2 fixed = 260 total (was 245) | | +1 :green_heart: | mvnsite | 3m 26s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 2m 53s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 31s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 39s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 42m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 33s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 3m 0s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 17m 33s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | asflicense | 0m 54s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/6/artifact/out/results-asflicense.txt) | The patch generated 44 ASF
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819741#comment-17819741 ] ASF GitHub Bot commented on HDFS-17299: --- hadoop-yetus commented on PR #6566: URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1959924123 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/6/console in case of problems. > HDFS is not rack failure tolerant while creating a new file. > > > Key: HDFS-17299 > URL: https://issues.apache.org/jira/browse/HDFS-17299 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.10.1 >Reporter: Rushabh Shah >Assignee: Ritesh >Priority: Critical > Labels: pull-request-available > Attachments: repro.patch > > > Recently we saw an HBase cluster outage when we mistakenly brought down 1 AZ. > Our configuration: > 1. We use 3 Availability Zones (AZs) for fault tolerance. > 2. We use BlockPlacementPolicyRackFaultTolerant as the block placement policy. > 3. We use the following configuration parameters: > dfs.namenode.heartbeat.recheck-interval: 60 > dfs.heartbeat.interval: 3 > So it will take 1230000 ms (20.5 mins) to detect that the datanode is dead. > > Steps to reproduce: > # Bring down 1 AZ. > # HBase (HDFS client) tries to create a file (WAL file) and then calls > hflush on the newly created file. > # DataStreamer is not able to find block locations that satisfy the rack > placement policy (one copy in each rack, which essentially means one copy in > each AZ) > # Since all the datanodes in that AZ are down but still considered alive by the namenode, > the client gets different datanodes but still all of them are in the same AZ. > See logs below. > # HBase is not able to create a WAL file and it aborts the region server. 
> > Relevant logs from hdfs client and namenode > > {noformat} > 2023-12-16 17:17:43,818 INFO [on default port 9000] FSNamesystem.audit - > allowed=trueugi=hbase/ (auth:KERBEROS) ip= > cmd=create src=/hbase/WALs/ dst=null > 2023-12-16 17:17:43,978 INFO [on default port 9000] hdfs.StateChange - > BLOCK* allocate blk_1214652565_140946716, replicas=:50010, > :50010, :50010 for /hbase/WALs/ > 2023-12-16 17:17:44,061 INFO [Thread-39087] hdfs.DataStreamer - Exception in > createBlockOutputStream > java.io.IOException: Got error, status=ERROR, status message , ack with > firstBadLink as :50010 > at > org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:113) > at > org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1747) > at > org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1651) > at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:715) > 2023-12-16 17:17:44,061 WARN [Thread-39087] hdfs.DataStreamer - Abandoning > BP-179318874--1594838129323:blk_1214652565_140946716 > 2023-12-16 17:17:44,179 WARN [Thread-39087] hdfs.DataStreamer - Excluding > datanode > DatanodeInfoWithStorage[:50010,DS-a493abdb-3ac3-49b1-9bfb-848baf5c1c2c,DISK] > 2023-12-16 17:17:44,339 INFO [on default port 9000] hdfs.StateChange - > BLOCK* allocate blk_1214652580_140946764, replicas=:50010, > :50010, :50010 for /hbase/WALs/ > 2023-12-16 17:17:44,369 INFO [Thread-39087] hdfs.DataStreamer - Exception in > createBlockOutputStream > java.io.IOException: Got error, status=ERROR, status message , ack with > firstBadLink as :50010 > at > org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:113) > at > org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1747) > at > org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1651) > at 
org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:715) > 2023-12-16 17:17:44,369 WARN [Thread-39087] hdfs.DataStreamer - Abandoning > BP-179318874-NN-IP-1594838129323:blk_1214652580_140946764 > 2023-12-16 17:17:44,454 WARN [Thread-39087] hdfs.DataStreamer - Excluding > datanode > DatanodeInfoWithStorage[AZ-2-dn-2:50010,DS-46bb45cc-af89-46f3-9f9d-24e4fdc35b6d,DISK] > 2023-12-16 17:17:44,522 INFO [on default port 9000] hdfs.StateChange - > BLOCK* allocate blk_1214652594_140946796, replicas=:50010, > :50010, :50010 for /hbase/WALs/ > 2023-12-16 17:17:44,712 INFO [Thread-39087] hdfs.DataStreamer - Exception in > createBlockOutputStream > java.io.IOException: Got error, status=ERROR, status message , ack with > firstBadLink as :50010 > at >
[jira] [Commented] (HDFS-17393) [FGL] Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819669#comment-17819669 ] ASF GitHub Bot commented on HDFS-17393: --- slfan1989 commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1959560465 > I'm really looking forward to FGL. However, I suggest we remove 'FGL' from the title and the commit comment as this change is a regular refactoring and not only related to FGL. @tasanuma I agree with your point, and I greatly appreciate @ZanderXu's contribution; it's incredibly valuable. I have a suggestion: could we complete some of the description information? > [FGL] Remove unused cond in FSNamesystem > > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first.
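For context on why an unused `cond` matters: a `java.util.concurrent.locks.Condition` is created by and permanently bound to one specific lock instance, so even a never-used Condition field pins the class to that particular lock and must be deleted before the lock itself can be replaced. A minimal illustration (invented class, not the actual FSNamesystem field):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustration only: a Condition is tied to the lock that created it.
public class CondSketch {
    private final ReentrantLock lock = new ReentrantLock();
    // Even if never awaited or signalled, this field depends on `lock`
    // existing as a ReentrantLock; swapping in a different locking scheme
    // (e.g. fine-grained locks) first requires removing dead fields like it.
    private final Condition cond = lock.newCondition();

    public boolean hasCond() {
        return cond != null;
    }
}
```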
[jira] [Commented] (HDFS-17393) [FGL] Remove unused cond in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-17393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819668#comment-17819668 ] ASF GitHub Bot commented on HDFS-17393: --- tasanuma commented on PR #6567: URL: https://github.com/apache/hadoop/pull/6567#issuecomment-1959552424 I'm really looking forward to FGL. However, I suggest we remove 'FGL' from the title and the commit comment as this change is a regular refactoring and not only related to FGL. > [FGL] Remove unused cond in FSNamesystem > > > Key: HDFS-17393 > URL: https://issues.apache.org/jira/browse/HDFS-17393 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: ZanderXu >Assignee: ZanderXu >Priority: Major > Labels: pull-request-available > > The `cond` in FSNamesystem is unused, but it may block fine-grained locking, > so we need to remove it first.
[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.
[ https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819629#comment-17819629 ] ASF GitHub Bot commented on HDFS-17299: --- hadoop-yetus commented on PR #6566: URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1959333777 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 22s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 5m 14s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 11s | | trunk passed | | +1 :green_heart: | compile | 9m 27s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 8m 42s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 2m 19s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 5s | | trunk passed | | +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 33s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 1m 33s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs warnings. | | -1 :x: | shadedclient | 2m 35s | | branch has errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 9m 1s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 9m 1s | | the patch passed | | +1 :green_heart: | compile | 8m 41s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 8m 41s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 2m 9s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/5/artifact/out/results-checkstyle-root.txt) | root: The patch generated 17 new + 243 unchanged - 2 fixed = 260 total (was 245) | | +1 :green_heart: | mvnsite | 2m 2s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 1m 47s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 24s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 28s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 22m 45s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 17s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 49s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 219m 2s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. | | | | 347m 35s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs | | | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor | | | hadoop.hdfs.tools.TestDFSAdmin | | |
[jira] [Commented] (HDFS-17352) Add configuration to control whether DN delete this replica from disk when client requests a missing block
[ https://issues.apache.org/jira/browse/HDFS-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819597#comment-17819597 ] ASF GitHub Bot commented on HDFS-17352: --- haiyang1987 commented on PR #6559: URL: https://github.com/apache/hadoop/pull/6559#issuecomment-1959255049 Updated the PR to support dynamic configuration. Hi @ZanderXu @tomscut please help review it again, thanks~ > Add configuration to control whether DN delete this replica from disk when > client requests a missing block > --- > > Key: HDFS-17352 > URL: https://issues.apache.org/jira/browse/HDFS-17352 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > > As discussed at > https://github.com/apache/hadoop/pull/6464#issuecomment-1902959898 > we should add a configuration to control whether the DN deletes this replica from > disk when a client requests a missing block.
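The dynamically configured behavior mentioned in the comment could look roughly like this. It is a sketch with invented names (the flag, class, and methods are not the actual Hadoop config key or DataNode code); the real change would plug into the DataNode's reconfiguration framework.

```java
// Sketch: gate "delete the replica from disk when a client reports a
// missing block" behind a runtime-reconfigurable boolean, so operators
// can disable the deletion without restarting the DataNode.
public class MissingBlockPolicy {
    // volatile so a reconfiguration is visible to reader threads immediately
    private volatile boolean deleteReplicaOnClientRequest;

    public MissingBlockPolicy(boolean initial) {
        this.deleteReplicaOnClientRequest = initial;
    }

    // Invoked when the operator reconfigures the flag at runtime.
    public void reconfigure(boolean newValue) {
        this.deleteReplicaOnClientRequest = newValue;
    }

    // The DN removes the replica only if it is really missing on disk
    // and the operator has left deletion enabled.
    public boolean shouldDelete(boolean replicaMissingOnDisk) {
        return replicaMissingOnDisk && deleteReplicaOnClientRequest;
    }
}
```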
[jira] [Commented] (HDFS-17390) [FGL] FSDirectory supports this fine-grained locking
[ https://issues.apache.org/jira/browse/HDFS-17390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819541#comment-17819541 ] ASF GitHub Bot commented on HDFS-17390: --- hadoop-yetus commented on PR #6573: URL: https://github.com/apache/hadoop/pull/6573#issuecomment-195949 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ HDFS-17384 Compile Tests _ | | +1 :green_heart: | mvninstall | 44m 57s | | HDFS-17384 passed | | +1 :green_heart: | compile | 1m 22s | | HDFS-17384 passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 17s | | HDFS-17384 passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 22s | | HDFS-17384 passed | | +1 :green_heart: | mvnsite | 1m 28s | | HDFS-17384 passed | | +1 :green_heart: | javadoc | 1m 10s | | HDFS-17384 passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 44s | | HDFS-17384 passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 26s | | HDFS-17384 passed | | +1 :green_heart: | shadedclient | 38m 45s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 24s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 24s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 0m 24s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | compile | 0m 24s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | -1 :x: | javac | 0m 24s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 4s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 49 new + 353 unchanged - 8 fixed = 402 total (was 361) | | -1 :x: | mvnsite | 0m 25s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6573/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | javadoc | 0m 23s |
[jira] [Commented] (HDFS-17387) [FGL] Abstract selectable locking mode
[ https://issues.apache.org/jira/browse/HDFS-17387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819520#comment-17819520 ] ASF GitHub Bot commented on HDFS-17387: --- hadoop-yetus commented on PR #6572: URL: https://github.com/apache/hadoop/pull/6572#issuecomment-1958930537 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 7m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ HDFS-17384 Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 19s | | HDFS-17384 passed | | +1 :green_heart: | compile | 0m 40s | | HDFS-17384 passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 39s | | HDFS-17384 passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 41s | | HDFS-17384 passed | | +1 :green_heart: | mvnsite | 0m 47s | | HDFS-17384 passed | | +1 :green_heart: | javadoc | 0m 42s | | HDFS-17384 passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 16s | | HDFS-17384 passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 5s | | HDFS-17384 passed | | +1 :green_heart: | shadedclient | 22m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 16s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 0m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. | | -1 :x: | compile | 0m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | -1 :x: | javac | 0m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 36s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 49 new + 353 unchanged - 8 fixed = 402 total (was 361) | | -1 :x: | mvnsite | 0m 17s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6572/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | javadoc | 0m 14s |