[jira] [Commented] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175225#comment-17175225 ]

Xieming Li commented on HDFS-15507:
-----------------------------------

[~aajisaka], Thank you for the review.

> [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
>
>          Key: HDFS-15507
>          URL: https://issues.apache.org/jira/browse/HDFS-15507
>      Project: Hadoop HDFS
>   Issue Type: Bug
>   Components: documentation
>     Reporter: Akira Ajisaka
>     Assignee: Xieming Li
>     Priority: Major
>       Labels: newbie
>      Fix For: 3.4.0
>  Attachments: HDFS-15507.001.patch, HDFS-15507.002.patch
>
> {noformat}
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java:32: error: self-closing element not allowed
> [ERROR] *
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1245: error: unexpected text
> [ERROR] * Same as {@link #create(String, FsPermission, EnumSet, boolean, short, long,
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java:161: error: reference not found
> [ERROR] * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until the
> [ERROR] ^
> {noformat}
> Full error log: https://gist.github.com/aajisaka/7ab1c48a9bd7a0fdb11fa82eb04874d5
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
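The error classes in the log above are typical JDK 11 doclet rejections of HTML and tag usage that JDK 8 tolerated. The following is a hypothetical illustration of the usual kind of fix (the class and method names are invented, not taken from the patch): self-closing elements such as {{<p/>}} become an opening {{<p>}}, and every {{@link}} must reference an element that actually resolves.

```java
/**
 * Hypothetical sketch of JDK 11 javadoc fixes; not the actual HDFS patch.
 *
 * <p>Self-closing elements such as {@code <p/>} are rejected by the JDK 11
 * doclet ("self-closing element not allowed"); an opening {@code <p>} is
 * used instead.
 */
public class JavadocFixSketch {

    /**
     * A {@code {@link}} reference must name an element that exists, e.g.
     * {@link String}; otherwise the doclet reports "reference not found".
     * Long signatures are better described in plain {@code ...} text than
     * split across lines inside a link, which causes "unexpected text".
     *
     * @param value the hypothetical input
     * @return the same value, for illustration only
     */
    public static int identity(int value) {
        return value;
    }

    public static void main(String[] args) {
        System.out.println(identity(7)); // prints "7"
    }
}
```

The code itself is trivial; the point is that the javadoc comment above it passes `javadoc` under JDK 11.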
[jira] [Updated] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-15507:
---------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Committed to trunk. Thank you [~risyomei]!
[jira] [Commented] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175216#comment-17175216 ]

Akira Ajisaka commented on HDFS-15507:
--------------------------------------

+1
[jira] [Comment Edited] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175174#comment-17175174 ]

Xieming Li edited comment on HDFS-15507 at 8/11/20, 3:43 AM:
-------------------------------------------------------------

I have re-uploaded HDFS-15507.002.patch with the same content because HDFS-15507.001.patch did not trigger Jenkins.

was (Author: risyomei):
I have re-uploaded HDFS-15507.002.patch with same content as HDFS-15507.001.patch did not trigger the jenkins.
[jira] [Commented] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175199#comment-17175199 ]

Hadoop QA commented on HDFS-15507:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 25m 38s | Docker mode activated. |

Prechecks:
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |

trunk Compile Tests:
| +1 | mvninstall | 20m 3s | trunk passed |
| +1 | compile | 1m 2s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 0m 47s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 0m 26s | trunk passed |
| +1 | mvnsite | 0m 53s | trunk passed |
| +1 | shadedclient | 14m 40s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 26s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 0m 37s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 2m 23s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 20s | trunk passed |

Patch Compile Tests:
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 0m 49s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javac | 0m 49s | the patch passed |
| +1 | compile | 0m 45s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | javac | 0m 45s | the patch passed |
| +1 | checkstyle | 0m 18s | the patch passed |
| +1 | mvnsite | 0m 46s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 13m 16s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 33s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 0m 32s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | findbugs | 2m 24s | the patch passed |

Other Tests:
| +1 |
[jira] [Updated] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xieming Li updated HDFS-15507:
------------------------------
    Attachment: HDFS-15507.002.patch
        Status: Patch Available  (was: Open)

I have re-uploaded HDFS-15507.002.patch with the same content, as HDFS-15507.001.patch did not trigger Jenkins.
[jira] [Updated] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xieming Li updated HDFS-15507:
------------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HDFS-15496) Add UI for deleted snapshots
[ https://issues.apache.org/jira/browse/HDFS-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vivek Ratnavel Subramanian updated HDFS-15496:
----------------------------------------------
    Status: Patch Available  (was: In Progress)

> Add UI for deleted snapshots
>
>          Key: HDFS-15496
>          URL: https://issues.apache.org/jira/browse/HDFS-15496
>      Project: Hadoop HDFS
>   Issue Type: Sub-task
>     Reporter: Mukul Kumar Singh
>     Assignee: Vivek Ratnavel Subramanian
>     Priority: Major
>
> Add UI for deleted snapshots:
> a) Show the list of snapshots per snapshottable directory.
> b) Add deleted status in the JMX output for the snapshot, along with a snapshot ID.
> c) The NN UI should sort the snapshots by snapshot ID.
[jira] [Work started] (HDFS-15496) Add UI for deleted snapshots
[ https://issues.apache.org/jira/browse/HDFS-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HDFS-15496 started by Vivek Ratnavel Subramanian.
---------------------------------------------------------
[jira] [Commented] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module
[ https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175163#comment-17175163 ]

Xieming Li commented on HDFS-15506:
-----------------------------------

[~aajisaka], thank you for the review.

> [JDK 11] Fix javadoc errors in hadoop-hdfs module
>
>          Key: HDFS-15506
>          URL: https://issues.apache.org/jira/browse/HDFS-15506
>      Project: Hadoop HDFS
>   Issue Type: Bug
>   Components: documentation
>     Reporter: Akira Ajisaka
>     Assignee: Xieming Li
>     Priority: Major
>       Labels: newbie
>      Fix For: 3.4.0
>  Attachments: HDFS-15506.001.patch, HDFS-15506.002.patch
>
> {noformat}
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java:43: error: self-closing element not allowed
> [ERROR] *
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:682: error: malformed HTML
> [ERROR] * a NameNode per second. Values <= 0 disable throttling. This affects
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:1780: error: exception not thrown: java.io.FileNotFoundException
> [ERROR] * @throws FileNotFoundException
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java:176: error: @param name not found
> [ERROR] * @param mtime The snapshot creation time set by Time.now().
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:2187: error: exception not thrown: java.lang.Exception
> [ERROR] * @exception Exception if the filesystem does not exist.
> [ERROR] ^
> {noformat}
> Full error log: https://gist.github.com/aajisaka/a0c16f0408a623e798dd7df29fbddf82
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
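Besides the HTML errors, the log above shows two tag-level checks that JDK 11 enforces: {{@param}} names must match a declared parameter, and {{@throws}}/{{@exception}} may only name exceptions the method can actually throw. A hypothetical sketch of javadoc that passes these checks (names invented, not from the patch):

```java
/** Hypothetical illustration of the HDFS-15506 error classes; not the actual patch. */
public class JavadocTagFixSketch {

    /**
     * Bare angle brackets are malformed HTML under the JDK 11 doclet, so
     * values &lt;= 0 are written with an entity rather than a literal "<=".
     *
     * <p>The {@code @param} tag below names the declared parameter exactly
     * ("mtime"), and no {@code @throws} tag appears because this method
     * throws nothing.
     *
     * @param mtime the snapshot creation time
     * @return true if the time is positive
     */
    public static boolean isValidCreationTime(long mtime) {
        return mtime > 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidCreationTime(42L)); // prints "true"
    }
}
```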
[jira] [Updated] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.
[ https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

AMC-team updated HDFS-15439:
----------------------------
    Attachment: HDFS-15439.002.patch

> Setting dfs.mover.retry.max.attempts to negative value will retry forever.
>
>          Key: HDFS-15439
>          URL: https://issues.apache.org/jira/browse/HDFS-15439
>      Project: Hadoop HDFS
>   Issue Type: Bug
>   Components: balancer mover
>     Reporter: AMC-team
>     Priority: Major
>  Attachments: HDFS-15439.000.patch, HDFS-15439.001.patch, HDFS-15439.002.patch
>
> The configuration parameter "dfs.mover.retry.max.attempts" defines the maximum number of retries before the mover considers the move failed. There is no validation code, so this parameter can accept any int value.
> Theoretically, setting this value to <= 0 should mean no retry at all. However, if you set it to a negative value, the retry-failure condition is never satisfied, because the if statement is "*if (retryCount.get() == retryMaxAttempts)*". The retry count is always incremented by retryCount.incrementAndGet() after a failure but never *equals* *retryMaxAttempts*.
> {code:java}
> private Result processNamespace() throws IOException {
>   ... // wait for pending moves to finish and retry the failed migration
>   if (hasFailed && !hasSuccess) {
>     if (retryCount.get() == retryMaxAttempts) {
>       result.setRetryFailed();
>       LOG.error("Failed to move some block's after "
>           + retryMaxAttempts + " retries.");
>       return result;
>     } else {
>       retryCount.incrementAndGet();
>     }
>   } else {
>     // Reset retry count if no failure.
>     retryCount.set(0);
>   }
>   ...
> }
> {code}
> *How to fix*
> Add validation code so that "dfs.mover.retry.max.attempts" accepts only non-negative values, or change the if statement condition to trigger when the retry count exceeds the maximum attempts.
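The two fixes proposed in the description can be sketched as follows. This is a hypothetical, self-contained simulation of the retry loop, not the actual Mover code: the configured value is clamped to be non-negative, and the equality check is replaced with {{>=}} so that a count already past the limit still terminates.

```java
// Hypothetical sketch of the proposed HDFS-15439 fix; not the actual Mover implementation.
public class RetrySketch {
    static int movesAttempted = 0;

    // Simulate a migration that always fails.
    static boolean tryMove() {
        movesAttempted++;
        return false;
    }

    // Returns true if the move succeeded, false once retries are exhausted.
    static boolean processWithRetries(int configuredMaxAttempts) {
        // Fix 1: validate the config value instead of accepting any int.
        int retryMaxAttempts = Math.max(configuredMaxAttempts, 0);
        int retryCount = 0;
        while (true) {
            if (tryMove()) {
                return true;
            }
            // Fix 2: ">=" instead of "==", so the loop cannot run forever.
            if (retryCount >= retryMaxAttempts) {
                return false;
            }
            retryCount++;
        }
    }

    public static void main(String[] args) {
        // A negative setting now means "no retries" rather than retrying forever.
        boolean ok = processWithRetries(-1);
        System.out.println(ok + " " + movesAttempted); // prints "false 1"
    }
}
```

With the original {{==}} check and a negative {{retryMaxAttempts}}, this loop would never exit; either fix alone is sufficient to terminate it.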
[jira] [Commented] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.
[ https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175158#comment-17175158 ]

AMC-team commented on HDFS-15439:
---------------------------------

Uploaded the new patch.
[jira] [Updated] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.
[ https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

AMC-team updated HDFS-15439:
----------------------------
    Attachment: (was: HDFS-15439.002.patch)
[jira] [Updated] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module
[ https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HDFS-15506:
---------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Committed to trunk. Thank you [~risyomei] for your contribution.
[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175152#comment-17175152 ]

liusheng commented on HDFS-15098:
---------------------------------

Because this issue's patch is large and has run CI many times, for ease of review I have also submitted a PR in the GitHub repo: https://github.com/apache/hadoop/pull/2211

> Add SM4 encryption method for HDFS
>
>              Key: HDFS-15098
>              URL: https://issues.apache.org/jira/browse/HDFS-15098
>          Project: Hadoop HDFS
>       Issue Type: New Feature
> Affects Versions: 3.4.0
>         Reporter: liusheng
>         Assignee: liusheng
>         Priority: Major
>           Labels: sm4
>      Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, HDFS-15098.003.patch, HDFS-15098.004.patch, HDFS-15098.005.patch, HDFS-15098.006.patch, HDFS-15098.007.patch, HDFS-15098.008.patch, HDFS-15098.009.patch
>
> SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure).
> SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far been rejected by ISO. One of the reasons for the rejection has been opposition to the WAPI fast-track proposal by the IEEE. Please see: https://en.wikipedia.org/wiki/SM4_(cipher)
>
> *Use SM4 on HDFS as follows:*
> 1. Configure Hadoop KMS.
> 2. Test HDFS with SM4:
> {noformat}
> hadoop key create key1 -cipher 'SM4/CTR/NoPadding'
> hdfs dfs -mkdir /benchmarks
> hdfs crypto -createZone -keyName key1 -path /benchmarks
> {noformat}
> *Requires:*
> 1. OpenSSL version >= 1.1.1
[jira] [Commented] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module
[ https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175151#comment-17175151 ]

Akira Ajisaka commented on HDFS-15506:
--------------------------------------

+1, javadoc succeeded after the patch: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/48/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
[jira] [Commented] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.
[ https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175150#comment-17175150 ] AMC-team commented on HDFS-15439: - Uploaded the new patch. > Setting dfs.mover.retry.max.attempts to negative value will retry forever. > -- > > Key: HDFS-15439 > URL: https://issues.apache.org/jira/browse/HDFS-15439 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer mover >Reporter: AMC-team >Priority: Major > Attachments: HDFS-15439.000.patch, HDFS-15439.001.patch, > HDFS-15439.002.patch > > > The configuration parameter "dfs.mover.retry.max.attempts" defines the > maximum number of retries before the mover considers the move failed. There is > no validation code, so this parameter accepts any int value. > Theoretically, setting this value to <= 0 should mean no retry at all. > However, if you set the value to a negative number, the retry-failed > condition is never satisfied because the if statement is "*if > (retryCount.get() == retryMaxAttempts)*". The retry count is always incremented by > retryCount.incrementAndGet() after a failure but never equals *retryMaxAttempts*. > {code:java} > private Result processNamespace() throws IOException { > ... //wait for pending move to finish and retry the failed migration > if (hasFailed && !hasSuccess) { > if (retryCount.get() == retryMaxAttempts) { > result.setRetryFailed(); > LOG.error("Failed to move some block's after " > + retryMaxAttempts + " retries."); > return result; > } else { > retryCount.incrementAndGet(); > } > } else { > // Reset retry count if no failure. > retryCount.set(0); > } > ... > } > {code} > *How to fix* > Add validation so that "dfs.mover.retry.max.attempts" accepts only > non-negative values, or change the if statement to trigger when the retry > count meets or exceeds the maximum.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
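The fix proposed above for HDFS-15439 can be sketched as follows. This is a hedged, self-contained simplification with stand-in names (RetryCheck and retryExhausted are not the actual Mover code): comparing with >= instead of == means a zero or negative dfs.mover.retry.max.attempts can no longer cause unbounded retries.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified stand-in for the retry check in the mover's processNamespace.
// The original compares with ==, which a negative retryMaxAttempts never
// satisfies; >= makes any value <= 0 mean "no retries at all".
public class RetryCheck {
    // Returns true when the mover should give up after a failed pass.
    static boolean retryExhausted(AtomicInteger retryCount, int retryMaxAttempts) {
        if (retryCount.get() >= retryMaxAttempts) { // was: ==
            return true; // the real code would call result.setRetryFailed() here
        }
        retryCount.incrementAndGet();
        return false;
    }

    public static void main(String[] args) {
        // With a negative max, the first failed pass already gives up.
        System.out.println(retryExhausted(new AtomicInteger(0), -1)); // true
        // With max = 2, two failed passes are retried; the third gives up.
        AtomicInteger count = new AtomicInteger(0);
        System.out.println(retryExhausted(count, 2)); // false
        System.out.println(retryExhausted(count, 2)); // false
        System.out.println(retryExhausted(count, 2)); // true
    }
}
```

The alternative fix mentioned in the issue, rejecting negative values when the configuration is read, would achieve the same effect one step earlier.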
[jira] [Issue Comment Deleted] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.
[ https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] AMC-team updated HDFS-15439: Comment: was deleted (was: upload the new patch) > Setting dfs.mover.retry.max.attempts to negative value will retry forever. > -- > > Key: HDFS-15439 > URL: https://issues.apache.org/jira/browse/HDFS-15439 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer mover >Reporter: AMC-team >Priority: Major > Attachments: HDFS-15439.000.patch, HDFS-15439.001.patch, > HDFS-15439.002.patch > > > The configuration parameter "dfs.mover.retry.max.attempts" defines the > maximum number of retries before the mover considers the move failed. There is > no validation code, so this parameter accepts any int value. > Theoretically, setting this value to <= 0 should mean no retry at all. > However, if you set the value to a negative number, the retry-failed > condition is never satisfied because the if statement is "*if > (retryCount.get() == retryMaxAttempts)*". The retry count is always incremented by > retryCount.incrementAndGet() after a failure but never equals *retryMaxAttempts*. > {code:java} > private Result processNamespace() throws IOException { > ... //wait for pending move to finish and retry the failed migration > if (hasFailed && !hasSuccess) { > if (retryCount.get() == retryMaxAttempts) { > result.setRetryFailed(); > LOG.error("Failed to move some block's after " > + retryMaxAttempts + " retries."); > return result; > } else { > retryCount.incrementAndGet(); > } > } else { > // Reset retry count if no failure. > retryCount.set(0); > } > ... > } > {code} > *How to fix* > Add validation so that "dfs.mover.retry.max.attempts" accepts only > non-negative values, or change the if statement to trigger when the retry > count meets or exceeds the maximum.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
[ https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-15508: - Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thanks [~ayushtkn] for your review. > [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module > - > > Key: HDFS-15508 > URL: https://issues.apache.org/jira/browse/HDFS-15508 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: newbie > Fix For: 3.4.0 > > Attachments: HDFS-15508.01.patch > > > {noformat} > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java:21: > error: reference not found > [ERROR] * Implementations should extend {@link > AbstractDelegationTokenSecretManager}. > [ERROR] ^ > {noformat} > Full error log: > https://gist.github.com/aajisaka/a7dde76a4ba2942f60bf6230ec9ed6e1 > How to reproduce the failure: > * Remove {{true}} from pom.xml > * Run {{mvn process-sources javadoc:javadoc-no-fork}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15515) mkdirs on fallback should throw IOE out instead of suppressing and returning false
[ https://issues.apache.org/jira/browse/HDFS-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175090#comment-17175090 ] Hadoop QA commented on HDFS-15515: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 1s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 26s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 39s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 14s{color} | {color:green} branch has 
no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 8s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 9s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 49s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 52s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit
[jira] [Created] (HDFS-15522) Use snapshot diff to build file listing when copying to blob storage
John Liu created HDFS-15522: --- Summary: Use snapshot diff to build file listing when copying to blob storage Key: HDFS-15522 URL: https://issues.apache.org/jira/browse/HDFS-15522 Project: Hadoop HDFS Issue Type: Improvement Components: distcp Reporter: John Liu The DistCp sync option should be extensible for copying to blob storage, which is not a distributed filesystem. Clients of DistCp could benefit from using the HDFS snapshot diff report to create the file listing in less time. A valid use case is to copy new files added to HDFS to a remote blob storage. The client ensures all new files are copied over but does not require the destination filesystem to be a distributed filesystem or have the previous snapshot. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
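The idea in HDFS-15522 above can be illustrated with a minimal sketch. All names here are hypothetical stand-ins (this is not the DistCp or SnapshotDiffReport API): the point is that an incremental copy listing can be derived from a snapshot diff report, so only paths created or modified between two snapshots are uploaded, instead of walking the whole source tree.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for snapshot diff entries; in HDFS the real
// report comes from the snapshot diff RPC on the source filesystem.
public class DiffListing {
    enum DiffType { CREATE, MODIFY, DELETE, RENAME }

    static class DiffEntry {
        final DiffType type;
        final String path;
        DiffEntry(DiffType type, String path) { this.type = type; this.path = path; }
    }

    // Keep only entries that translate into uploads to the blob store;
    // deletes and renames would need separate handling on a destination
    // that has no previous snapshot of its own.
    static List<String> buildCopyListing(List<DiffEntry> report) {
        List<String> listing = new ArrayList<>();
        for (DiffEntry e : report) {
            if (e.type == DiffType.CREATE || e.type == DiffType.MODIFY) {
                listing.add(e.path);
            }
        }
        return listing;
    }
}
```

The listing built this way scales with the size of the change set rather than the size of the namespace, which is what makes the diff-based approach attractive for large source trees.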
[jira] [Commented] (HDFS-15515) mkdirs on fallback should throw IOE out instead of suppressing and returning false
[ https://issues.apache.org/jira/browse/HDFS-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174953#comment-17174953 ] Uma Maheswara Rao G commented on HDFS-15515: [~ste...@apache.org] , I noticed your comments above. Thank you for the comments. [~ayushtkn] , [~ste...@apache.org] , [~Hemanth Boyina] , thanks for the review! > mkdirs on fallback should throw IOE out instead of suppressing and returning > false > -- > > Key: HDFS-15515 > URL: https://issues.apache.org/jira/browse/HDFS-15515 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > > Currently, when doing mkdirs on the fallback dir, we catch the IOE and return > false. > I think we should just throw the IOE out, as fs#mkdirs throws IOE out. > I noticed a case where, when we attempt to create .reserved dirs, the NN throws > HadoopIAE, but we catch it and return false. The exception should be thrown out here. > {code:java} > try { > return linkedFallbackFs.mkdirs(dirToCreate, permission); > } catch (IOException e) { > if (LOG.isDebugEnabled()) { > StringBuilder msg = > new StringBuilder("Failed to create ").append(dirToCreate) > .append(" at fallback : ") > .append(linkedFallbackFs.getUri()); > LOG.debug(msg.toString(), e); > } > return false; > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
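The change proposed in HDFS-15515 above can be sketched as below. This is a hedged simplification using a stand-in interface, not the actual ViewFs code: letting the IOException propagate matches the general fs#mkdirs contract, so callers can distinguish a hard failure (e.g. under /.reserved) from a plain "not created".

```java
import java.io.IOException;

// FallbackFs is a stand-in for the linked fallback filesystem;
// the real types live in the ViewFs implementation.
public class FallbackMkdirs {
    interface FallbackFs {
        boolean mkdirs(String path) throws IOException;
    }

    // Proposed behavior: no catch-and-return-false. The IOE from the
    // fallback filesystem surfaces to the caller instead of being
    // logged at debug level and swallowed.
    static boolean mkdirsOnFallback(FallbackFs fallback, String dirToCreate)
            throws IOException {
        return fallback.mkdirs(dirToCreate);
    }

    public static void main(String[] args) {
        FallbackFs failing = path -> {
            throw new IOException("cannot create " + path);
        };
        try {
            mkdirsOnFallback(failing, "/.reserved/raw");
            System.out.println("no exception");
        } catch (IOException e) {
            // With the old code this branch was unreachable: the IOE was
            // suppressed and false was returned instead.
            System.out.println("IOException surfaced: " + e.getMessage());
        }
    }
}
```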
[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.
[ https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174438#comment-17174438 ] Stephen O'Donnell commented on HDFS-15493: -- [~smarthan] Thanks for the update. I think we are mostly good now. Just 2 more things: 1) You have a blank line at line 309 in FSImageFormatPBINode.java: {code} private void addToCacheAndBlockMap(final ArrayList inodeList) { >> This line is blank final ArrayList inodes = new ArrayList<>(inodeList); nameCacheUpdateExecutor.submit( {code} 2) I discussed this change with one of my colleagues, and he suggested we extend the unit test you added to take some snapshots and rename some files, as this will create some inodeReference objects, and hence test that code path too. Then we can dump the filesystem tree before and after saving the namespace and ensure they are identical. I have adjusted your test to do this: {code} @Test public void testUpdateBlocksMapAndNameCacheAsync() throws IOException { Configuration conf = new Configuration(); MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build(); cluster.waitActive(); DistributedFileSystem fs = cluster.getFileSystem(); FSDirectory fsdir = cluster.getNameNode().namesystem.getFSDirectory(); File workingDir = GenericTestUtils.getTestDir(); File preRestartTree = new File(workingDir,"preRestartTree"); File postRestartTree = new File(workingDir,"postRestartTree"); Path baseDir = new Path("/user/foo"); fs.mkdirs(baseDir); fs.allowSnapshot(baseDir); for (int i = 0; i < 5; i++) { Path dir = new Path(baseDir, Integer.toString(i)); fs.mkdirs(dir); for (int j = 0; j < 5; j++) { Path file = new Path(dir, Integer.toString(j)); FSDataOutputStream os = fs.create(file); os.write((byte) j); os.close(); } fs.createSnapshot(baseDir, "snap_"+i); fs.rename(new Path(dir, "0"), new Path(dir, "renamed")); } SnapshotTestHelper.dumpTree2File(fsdir, preRestartTree); // checkpoint fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER); fs.saveNamespace();
fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE); cluster.restartNameNode(); cluster.waitActive(); fs = cluster.getFileSystem(); fsdir = cluster.getNameNode().namesystem.getFSDirectory(); // Ensure all the files created above exist, and blocks is correct. for (int i = 0; i < 5; i++) { Path dir = new Path(baseDir, Integer.toString(i)); assertTrue(fs.getFileStatus(dir).isDirectory()); for (int j = 0; j < 5; j++) { Path file = new Path(dir, Integer.toString(j)); if (j == 0) { file = new Path(dir, "renamed"); } FSDataInputStream in = fs.open(file); int n = in.readByte(); assertEquals(j, n); in.close(); } } SnapshotTestHelper.dumpTree2File(fsdir, postRestartTree); SnapshotTestHelper.compareDumpedTreeInFile( preRestartTree, postRestartTree, true); } {code} If you could fix the blank line and add the above unit test, I am +1 to commit this. Thanks for all your work on this. > Update block map and name cache in parallel while loading fsimage. > -- > > Key: HDFS-15493 > URL: https://issues.apache.org/jira/browse/HDFS-15493 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Chengwei Wang >Priority: Major > Attachments: HDFS-15493.001.patch, HDFS-15493.002.patch, > HDFS-15493.003.patch, HDFS-15493.004.patch, HDFS-15493.005.patch, > HDFS-15493.006.patch, fsimage-loading.log > > > While loading the INodeDirectorySection of the fsimage, the loader updates the name cache and > block map after adding each inode file to its inode directory. Running these steps in > parallel would reduce the fsimage loading time. > In our test case, with patches HDFS-13694 and HDFS-14617, the time cost to load the > fsimage (220M files & 240M blocks) is 470s; with this patch, the time cost is > reduced to 410s. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15518) Wrong operation name in FsNamesystem for listSnapshots
[ https://issues.apache.org/jira/browse/HDFS-15518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174405#comment-17174405 ] Shashikant Banerjee commented on HDFS-15518: [~hemanthboyina], yes, it should be "listSnapshots" > Wrong operation name in FsNamesystem for listSnapshots > -- > > Key: HDFS-15518 > URL: https://issues.apache.org/jira/browse/HDFS-15518 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mukul Kumar Singh >Priority: Major > > The list-snapshots operation uses "listSnapshotDirectory" as the operation name > string in place of "ListSnapshot". > https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L7026 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15521) Remove INode.dumpTreeRecursively()
[ https://issues.apache.org/jira/browse/HDFS-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174390#comment-17174390 ] Tsz-wo Sze commented on HDFS-15521: --- Thanks. I already have a patch. Just waiting for HDFS-15520. > Remove INode.dumpTreeRecursively() > -- > > Key: HDFS-15521 > URL: https://issues.apache.org/jira/browse/HDFS-15521 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, test >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > > In HDFS-15520, the same feature of INode.dumpTreeRecursively() is implemented > by NamespacePrintVisitor. Therefore, the old code can be cleaned up. > Note that INode.dumpTreeRecursively() is only used in tests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15521) Remove INode.dumpTreeRecursively()
[ https://issues.apache.org/jira/browse/HDFS-15521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174269#comment-17174269 ] Yuanbo Liu commented on HDFS-15521: --- Thanks for opening this issue. I'm not sure whether you're working on this; if not, I'd be glad to help with it. > Remove INode.dumpTreeRecursively() > -- > > Key: HDFS-15521 > URL: https://issues.apache.org/jira/browse/HDFS-15521 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode, test >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > > In HDFS-15520, the same feature of INode.dumpTreeRecursively() is implemented > by NamespacePrintVisitor. Therefore, the old code can be cleaned up. > Note that INode.dumpTreeRecursively() is only used in tests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15515) mkdirs on fallback should throw IOE out instead of suppressing and returning false
[ https://issues.apache.org/jira/browse/HDFS-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174175#comment-17174175 ] Hadoop QA commented on HDFS-15515: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 27m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 9s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 22s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 25m 25s{color} | {color:green} 
branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 4m 9s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 54s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 54s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 46s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || |
[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.
[ https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17174108#comment-17174108 ] Hadoop QA commented on HDFS-15493: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 54s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 43s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 45s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} |