[jira] [Commented] (HDFS-15514) Remove useless dfs.webhdfs.enabled
[ https://issues.apache.org/jira/browse/HDFS-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172062#comment-17172062 ] Hadoop QA commented on HDFS-15514: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 3s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 51s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 20s{color} | 
{color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 16s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 27s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 54s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color
[jira] [Created] (HDFS-15515) mkdirs on fallback should throw IOE out instead of suppressing and returning false
Uma Maheswara Rao G created HDFS-15515: -- Summary: mkdirs on fallback should throw IOE out instead of suppressing and returning false Key: HDFS-15515 URL: https://issues.apache.org/jira/browse/HDFS-15515 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Uma Maheswara Rao G Assignee: Uma Maheswara Rao G Currently, when doing mkdirs on the fallback dir, we catch the IOE and return false. I think we should just throw the IOE out, as fs#mkdirs throws IOE out. I noticed a case where, when we attempt to create .reserved dirs, the NN throws HadoopIAE, but we catch it and return false. Here the exception should be thrown out. {code:java} try { return linkedFallbackFs.mkdirs(dirToCreate, permission); } catch (IOException e) { if (LOG.isDebugEnabled()) { StringBuilder msg = new StringBuilder("Failed to create ").append(dirToCreate) .append(" at fallback : ") .append(linkedFallbackFs.getUri()); LOG.debug(msg.toString(), e); } return false; } {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
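The proposed change can be sketched as below, under the assumption that the fix simply deletes the catch block so the exception propagates. The `Fs` interface and names here are simplified stand-ins for the real ViewFs types, not the actual API:

```java
import java.io.IOException;

// Hedged sketch of the change proposed in HDFS-15515: let the fallback
// filesystem's IOException propagate, matching the FileSystem#mkdirs
// contract, instead of logging it at debug level and returning false.
public class FallbackMkdirs {

  /** Simplified stand-in for the linked fallback FileSystem. */
  public interface Fs {
    boolean mkdirs(String dirToCreate) throws IOException;
  }

  private final Fs linkedFallbackFs;

  public FallbackMkdirs(Fs linkedFallbackFs) {
    this.linkedFallbackFs = linkedFallbackFs;
  }

  // Before: catch (IOException e) { LOG.debug(...); return false; }
  // After (proposed): no catch -- e.g. the HadoopIAE thrown by the NN for a
  // .reserved path now reaches the caller instead of becoming "false".
  public boolean mkdirsOnFallback(String dirToCreate) throws IOException {
    return linkedFallbackFs.mkdirs(dirToCreate);
  }
}
```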
[jira] [Comment Edited] (HDFS-15509) Set safemode should not fail if one of the namenode is down.
[ https://issues.apache.org/jira/browse/HDFS-15509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172046#comment-17172046 ] Leon Gao edited comment on HDFS-15509 at 8/6/20, 5:58 AM: -- [~ayushtkn] Yeah, I agree on the point that to make both NNs consistent is ideal. But the confusing part is the effect of the command can be different based on the sequence of nn in the configuration. If the nn0 is active and nn1 is down then setSafemode will work for nn0 regardless of the state of nn1, but not the same case if nn1 is active and nn0 is down.. I think this is already brought up there by [~jiangjianfei] in the ticket [comment |https://issues.apache.org/jira/browse/HDFS-8277?focusedCommentId=16580777&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16580777] as well. was (Author: leong): [~ayushtkn] Yeah, I agree on the point that to make both NNs consistent is ideal. But the confusing part is the effect of the command can be different based on the sequence of nn in the configuration. If the nn0 is active and nn1 is down then setSafemode will work for nn0 regardless of the state of nn1, but not the same case if nn1 is active and nn0 is down.. I think this is already brought up there by [~jiangjianfei] in the ticket comment as well. > Set safemode should not fail if one of the namenode is down. > > > Key: HDFS-15509 > URL: https://issues.apache.org/jira/browse/HDFS-15509 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.3.0 >Reporter: Leon Gao >Assignee: Leon Gao >Priority: Minor > Attachments: HDFS-15509.patch > > > When the first namenode (let's say nn0) is down, set safemode command will > always fail unless users manually update the configuration. This is > distracting when debugging issues. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15509) Set safemode should not fail if one of the namenode is down.
[ https://issues.apache.org/jira/browse/HDFS-15509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172046#comment-17172046 ] Leon Gao edited comment on HDFS-15509 at 8/6/20, 5:57 AM: -- [~ayushtkn] Yeah, I agree on the point that to make both NNs consistent is ideal. But the confusing part is the effect of the command can be different based on the sequence of nn in the configuration. If the nn0 is active and nn1 is down then setSafemode will work for nn0 regardless of the state of nn1, but not the same case if nn1 is active and nn0 is down.. I think this is already brought up there by [~jiangjianfei] in the ticket comment as well. was (Author: leong): [~ayushtkn] Yeah, I agree on the point that to make both NNs consistent is ideal. But the confusing part is the effect of the command can be different based on the sequence of nn in the configuration. If the nn0 is active and nn1 is down then setSafemode will work for nn0 regardless of the state of nn1, but not the same case if nn1 is active and nn0 is down.. I think this is brought up there by [~jiangjianfei] in the ticket [comment|https://issues.apache.org/jira/browse/HDFS-8277?focusedCommentId=16580777&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16580777] as well. > Set safemode should not fail if one of the namenode is down. > > > Key: HDFS-15509 > URL: https://issues.apache.org/jira/browse/HDFS-15509 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.3.0 >Reporter: Leon Gao >Assignee: Leon Gao >Priority: Minor > Attachments: HDFS-15509.patch > > > When the first namenode (let's say nn0) is down, set safemode command will > always fail unless users manually update the configuration. This is > distracting when debugging issues. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15509) Set safemode should not fail if one of the namenode is down.
[ https://issues.apache.org/jira/browse/HDFS-15509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172046#comment-17172046 ] Leon Gao commented on HDFS-15509: - [~ayushtkn] Yeah, I agree that making both NNs consistent is ideal. But the confusing part is that the effect of the command can differ based on the sequence of the NNs in the configuration. If nn0 is active and nn1 is down, then setSafemode will work for nn0 regardless of the state of nn1, but that is not the case if nn1 is active and nn0 is down. I think this was brought up by [~jiangjianfei] in the ticket [comment|https://issues.apache.org/jira/browse/HDFS-8277?focusedCommentId=16580777&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16580777] as well. > Set safemode should not fail if one of the namenode is down. > > > Key: HDFS-15509 > URL: https://issues.apache.org/jira/browse/HDFS-15509 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.3.0 >Reporter: Leon Gao >Assignee: Leon Gao >Priority: Minor > Attachments: HDFS-15509.patch > > > When the first namenode (let's say nn0) is down, set safemode command will > always fail unless users manually update the configuration. This is > distracting when debugging issues. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
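The tolerant behavior discussed in this thread could look roughly like the sketch below. `NnProxy` and `setSafeModeOnAll` are hypothetical names for illustration, not the real DFSAdmin/ClientProtocol API:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: attempt the safemode call on every configured NameNode,
// record the ones that are unreachable, and fail only if none could be
// reached. The real command would operate on RPC proxies, not this toy
// interface.
public class SetSafeModeSketch {

  /** Hypothetical stand-in for a NameNode RPC proxy. */
  public interface NnProxy {
    void setSafeMode(boolean enter) throws IOException;
  }

  /**
   * Returns the indices of NameNodes that could not be reached, so the
   * caller can warn the admin instead of aborting on the first failure.
   */
  public static List<Integer> setSafeModeOnAll(List<NnProxy> nns, boolean enter)
      throws IOException {
    List<Integer> unreachable = new ArrayList<>();
    for (int i = 0; i < nns.size(); i++) {
      try {
        nns.get(i).setSafeMode(enter);
      } catch (IOException e) {
        unreachable.add(i); // e.g. nn0 is down: continue on to nn1
      }
    }
    if (unreachable.size() == nns.size()) {
      throw new IOException("setSafeMode failed: no NameNode reachable");
    }
    return unreachable;
  }
}
```

As the other comment in this thread points out, silently skipping a down NameNode risks it rejoining outside safemode, so returning the skipped nodes (rather than swallowing the failure) lets the admin decide whether partial success is acceptable.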
[jira] [Updated] (HDFS-15514) Remove useless dfs.webhdfs.enabled
[ https://issues.apache.org/jira/browse/HDFS-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fei Hui updated HDFS-15514: --- Status: Patch Available (was: Open) > Remove useless dfs.webhdfs.enabled > -- > > Key: HDFS-15514 > URL: https://issues.apache.org/jira/browse/HDFS-15514 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Attachments: HDFS-15514.001.patch > > > After HDFS-7985 & HDFS-8349, " dfs.webhdfs.enabled" is useless. We should > remove it from code base. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15514) Remove useless dfs.webhdfs.enabled
[ https://issues.apache.org/jira/browse/HDFS-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fei Hui updated HDFS-15514: --- Attachment: HDFS-15514.001.patch > Remove useless dfs.webhdfs.enabled > -- > > Key: HDFS-15514 > URL: https://issues.apache.org/jira/browse/HDFS-15514 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Attachments: HDFS-15514.001.patch > > > After HDFS-7985 & HDFS-8349, " dfs.webhdfs.enabled" is useless. We should > remove it from code base. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15514) Remove useless dfs.webhdfs.enabled
Fei Hui created HDFS-15514: -- Summary: Remove useless dfs.webhdfs.enabled Key: HDFS-15514 URL: https://issues.apache.org/jira/browse/HDFS-15514 Project: Hadoop HDFS Issue Type: Test Components: test Reporter: Fei Hui Assignee: Fei Hui After HDFS-7985 & HDFS-8349, " dfs.webhdfs.enabled" is useless. We should remove it from code base. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
[ https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171853#comment-17171853 ] Akira Ajisaka commented on HDFS-15508: -- There seems to have been a connection timeout between the Jenkins master and the worker. After the timeout, the worker kept running and wrote its output to the JIRA; however, the links became dead because the Jenkins master regarded the job as aborted. Started the job again: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/41/ > [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module > - > > Key: HDFS-15508 > URL: https://issues.apache.org/jira/browse/HDFS-15508 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: newbie > Attachments: HDFS-15508.01.patch > > > {noformat} > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java:21: > error: reference not found > [ERROR] * Implementations should extend {@link > AbstractDelegationTokenSecretManager}. > [ERROR] ^ > {noformat} > Full error log: > https://gist.github.com/aajisaka/a7dde76a4ba2942f60bf6230ec9ed6e1 > How to reproduce the failure: > * Remove {{true}} from pom.xml > * Run {{mvn process-sources javadoc:javadoc-no-fork}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15513) Allow client to query snapshot status on one directory
[ https://issues.apache.org/jira/browse/HDFS-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siyao Meng updated HDFS-15513: -- Description: Alternatively, we can allow the client to query snapshot status on *a list of* given directories by the client. Thoughts? Rationale: At the moment, we could only retrieve the full list of snapshottable directories with [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994]. This leads to the inefficiency In HDFS-15492 that we have to get the *entire* list of snapshottable directory to check if a file being deleted is inside a snapshottable directory. was: Alternatively, we can allow the client to query snapshot status on *a list of* directories, if necessary. Thoughts? Rationale: At the moment, we could only retrieve the full list of snapshottable directories with [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994]. This leads to the inefficiency In HDFS-15492 that we have to get the *entire* list of snapshottable directory to check if a file being deleted is inside a snapshottable directory. > Allow client to query snapshot status on one directory > -- > > Key: HDFS-15513 > URL: https://issues.apache.org/jira/browse/HDFS-15513 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, hdfs-client >Affects Versions: 3.3.0 >Reporter: Siyao Meng >Priority: Major > > Alternatively, we can allow the client to query snapshot status on *a list > of* given directories by the client. Thoughts? 
> Rationale: > At the moment, we could only retrieve the full list of snapshottable > directories with > [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994]. > This leads to the inefficiency in HDFS-15492 that we have to get the > *entire* list of snapshottable directories to check if a file being deleted > is inside a snapshottable directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15513) Allow client to query snapshot status on one directory
Siyao Meng created HDFS-15513: - Summary: Allow client to query snapshot status on one directory Key: HDFS-15513 URL: https://issues.apache.org/jira/browse/HDFS-15513 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs, hdfs-client Affects Versions: 3.3.0 Reporter: Siyao Meng Alternatively, we can allow the client to query snapshot status on *a list of* directories, if necessary. Thoughts? Rationale: At the moment, we can only retrieve the full list of snapshottable directories with [{{getSnapshottableDirListing()}}|https://github.com/apache/hadoop/blob/233619a0a462ae2eb7e7253b6bb8ae48eaa5eb19/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6986-L6994]. This leads to the inefficiency in HDFS-15492 that we have to get the *entire* list of snapshottable directories to check whether a file being deleted is inside a snapshottable directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
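The inefficiency described in the rationale can be sketched as follows; `isUnderSnapshottableDir` is a hypothetical helper using plain path strings rather than the real SnapshottableDirectoryStatus objects:

```java
import java.util.List;

// Hedged sketch of the cost this JIRA wants to avoid: with only
// getSnapshottableDirListing() available, deciding whether a path being
// deleted lies under a snapshottable directory requires scanning the
// entire listing -- O(n) in the number of snapshottable dirs per delete.
public class SnapshottableCheck {

  public static boolean isUnderSnapshottableDir(List<String> allSnapshottableDirs,
                                                String path) {
    for (String dir : allSnapshottableDirs) {
      // Match the directory itself or anything beneath it; normalize the
      // prefix so "/ab" does not match snapshottable dir "/a".
      String prefix = dir.endsWith("/") ? dir : dir + "/";
      if (path.equals(dir) || path.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }
}
```

A per-directory (or per-list) status query on the NameNode, as proposed, would replace this client-side scan with a single targeted lookup.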
[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP
[ https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171585#comment-17171585 ] Mingliang Liu commented on HDFS-15288: -- Looks great. Thanks [~ayushtkn] > Add Available Space Rack Fault Tolerant BPP > --- > > Key: HDFS-15288 > URL: https://issues.apache.org/jira/browse/HDFS-15288 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, > HDFS-15288-03.patch, HDFS-15288-Addendum-01.patch > > > The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block > placement policy, which makes it apt for replicated files, but not very > efficient for EC files, which by default use > {{BlockPlacementPolicyRackFaultTolerant}}. So propose to add a new BPP having > similar optimization as ASBPP while keeping the spread of blocks to the max > racks, i.e. as RackFaultTolerantBPP. > This could extend {{BlockPlacementPolicyRackFaultTolerant}}, rather than > {{BlockPlacementPolicyDefault}} like ASBPP, and keep the other optimization > logic the same as ASBPP -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
[ https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171513#comment-17171513 ] Hadoop QA commented on HDFS-15508: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 16s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | |
[jira] [Commented] (HDFS-15509) Set safemode should not fail if one of the namenode is down.
[ https://issues.apache.org/jira/browse/HDFS-15509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171551#comment-17171551 ] Ayush Saxena commented on HDFS-15509: - Hey [~LeonG], I too have just followed that discussion. The reason to keep both Namenodes in the same state is that when an admin triggers the {{safemode}} command, he expects the cluster to now be in read-only mode and not respond to any write calls. But if one namenode is down and is ignored, and that Namenode comes alive after the safemode command executes and becomes the active namenode due to a failover, then the objective of making the cluster read-only won't hold, and the cluster shall start serving write requests as well. This is one perspective; there are indeed many ways of looking at it. If you have some opinions, feel free to share them at HDFS-8277 > Set safemode should not fail if one of the namenode is down. > > > Key: HDFS-15509 > URL: https://issues.apache.org/jira/browse/HDFS-15509 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.3.0 >Reporter: Leon Gao >Assignee: Leon Gao >Priority: Minor > Attachments: HDFS-15509.patch > > > When the first namenode (let's say nn0) is down, set safemode command will > always fail unless users manually update the configuration. This is > distracting when debugging issues. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module
[ https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171505#comment-17171505 ] Xieming Li commented on HDFS-15506: --- Hi, [~aajisaka] Thank you for reporting. I have uploaded a patch which has been tested in my local env. https://gist.github.com/risyomei/68055591dc7be0dbb55d3e1d3c963cff > [JDK 11] Fix javadoc errors in hadoop-hdfs module > - > > Key: HDFS-15506 > URL: https://issues.apache.org/jira/browse/HDFS-15506 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Xieming Li >Priority: Major > Labels: newbie > Attachments: HDFS-15506.001.patch > > > {noformat} > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java:43: > error: self-closing element not allowed > [ERROR] * > [ERROR]^ > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:682: > error: malformed HTML > [ERROR]* a NameNode per second. Values <= 0 disable throttling. This > affects > [ERROR]^ > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:1780: > error: exception not thrown: java.io.FileNotFoundException > [ERROR]* @throws FileNotFoundException > [ERROR] ^ > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java:176: > error: @param name not found > [ERROR]* @param mtime The snapshot creation time set by Time.now(). > [ERROR] ^ > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:2187: > error: exception not thrown: java.lang.Exception > [ERROR]* @exception Exception if the filesystem does not exist. 
> [ERROR] ^ > {noformat} > Full error log: > https://gist.github.com/aajisaka/a0c16f0408a623e798dd7df29fbddf82 > How to reproduce the failure: > * Remove {{true}} from pom.xml > * Run {{mvn process-sources javadoc:javadoc-no-fork}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
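For illustration, the kinds of fixes these JDK 11 doclint errors call for look roughly like the snippet below. The class and method are made up for the example, not actual Hadoop source:

```java
// Hedged examples of fixes for the javadoc errors quoted above: JDK 11's
// doclint rejects bare '<' / '>' characters in HTML (use {@literal <=} or
// &lt;=), self-closing elements such as <p/>, and @throws tags naming
// exceptions the method cannot actually throw.
public class JavadocFixes {

  // Before (rejected by doclint): "Values <= 0 disable throttling" and "<p/>".
  /**
   * Hypothetical throttling knob.
   * <p>
   * A value {@literal <=} 0 disables throttling.
   */
  public static boolean throttlingEnabled(int limit) {
    return limit > 0;
  }
}
```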
[jira] [Commented] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171504#comment-17171504 ] Xieming Li commented on HDFS-15507: --- Hi, [~aajisaka], thank you for reporting. I have uploaded a patch which has been tested in my local env. https://gist.github.com/risyomei/4e50bc64a4492d5a212a007242ca86c0 > [JDK 11] Fix javadoc errors in hadoop-hdfs-client module > > > Key: HDFS-15507 > URL: https://issues.apache.org/jira/browse/HDFS-15507 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Xieming Li >Priority: Major > Labels: newbie > Attachments: HDFS-15507.001.patch > > > {noformat} > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java:32: > error: self-closing element not allowed > [ERROR] * > [ERROR]^ > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1245: > error: unexpected text > [ERROR]* Same as {@link #create(String, FsPermission, EnumSet, boolean, > short, long, > [ERROR] ^ > [ERROR] > /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java:161: > error: reference not found > [ERROR]* {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until > the > [ERROR] ^ > {noformat} > Full error log: > https://gist.github.com/aajisaka/7ab1c48a9bd7a0fdb11fa82eb04874d5 > How to reproduce the failure: > * Remove {{true}} from pom.xml > * Run {{mvn process-sources javadoc:javadoc-no-fork}} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
[ https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171540#comment-17171540 ]

Ayush Saxena commented on HDFS-15508:
--------------------------------------

Thanks [~aajisaka] for the fix. Changes LGTM, +1.

I doubt the Jenkins report, though: it actually failed in mvn install itself, yet there is still a result. The checkstyle link is not present either:
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/39/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt

> [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
> ------------------------------------------------------
>
>                 Key: HDFS-15508
>                 URL: https://issues.apache.org/jira/browse/HDFS-15508
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>              Labels: newbie
>         Attachments: HDFS-15508.01.patch
>
> {noformat}
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java:21: error: reference not found
> [ERROR] * Implementations should extend {@link AbstractDelegationTokenSecretManager}.
> [ERROR] ^
> {noformat}
> Full error log:
> https://gist.github.com/aajisaka/a7dde76a4ba2942f60bf6230ec9ed6e1
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP
[ https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171534#comment-17171534 ]

Ayush Saxena commented on HDFS-15288:
--------------------------------------

Have added the release notes. The BPP is described in the documentation as well, as part of HDFS-14546. Thanks for pointing it out :)

> Add Available Space Rack Fault Tolerant BPP
> --------------------------------------------
>
>                 Key: HDFS-15288
>                 URL: https://issues.apache.org/jira/browse/HDFS-15288
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Ayush Saxena
>            Assignee: Ayush Saxena
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, HDFS-15288-03.patch, HDFS-15288-Addendum-01.patch
>
>
> The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block placement policy, which makes it apt for replicated files but not very efficient for EC files, which by default use {{BlockPlacementPolicyRackFaultTolerant}}. So I propose to add a new BPP with the same optimization as ASBPP while keeping the spread of blocks across the maximum number of racks, as in RackFaultTolerantBPP.
> This could extend {{BlockPlacementPolicyRackFaultTolerant}} rather than {{BlockPlacementPolicyDefault}} (as ASBPP does) and keep the other optimization logic the same as ASBPP.
[jira] [Updated] (HDFS-15507) [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-15507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xieming Li updated HDFS-15507:
------------------------------
    Attachment: HDFS-15507.001.patch
        Status: Patch Available  (was: Open)

> [JDK 11] Fix javadoc errors in hadoop-hdfs-client module
> --------------------------------------------------------
>
>                 Key: HDFS-15507
>                 URL: https://issues.apache.org/jira/browse/HDFS-15507
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Assignee: Xieming Li
>            Priority: Major
>              Labels: newbie
>         Attachments: HDFS-15507.001.patch
>
> {noformat}
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientGSIContext.java:32: error: self-closing element not allowed
> [ERROR] *
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1245: error: unexpected text
> [ERROR] * Same as {@link #create(String, FsPermission, EnumSet, boolean, short, long,
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java:161: error: reference not found
> [ERROR] * {@link HdfsConstants#LEASE_HARDLIMIT_PERIOD hard limit}. Until the
> [ERROR] ^
> {noformat}
> Full error log:
> https://gist.github.com/aajisaka/7ab1c48a9bd7a0fdb11fa82eb04874d5
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
[jira] [Updated] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP
[ https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena updated HDFS-15288:
--------------------------------
    Release Note: Added a new BlockPlacementPolicy, "AvailableSpaceRackFaultTolerantBlockPlacementPolicy", which uses the same optimization logic as AvailableSpaceBlockPlacementPolicy while spreading the replicas across the maximum number of racks, similar to BlockPlacementPolicyRackFaultTolerant. The BPP can be configured by setting the block placement policy class to org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceRackFaultTolerantBlockPlacementPolicy.
      Issue Type: New Feature  (was: Improvement)

> Add Available Space Rack Fault Tolerant BPP
> --------------------------------------------
>
>                 Key: HDFS-15288
>                 URL: https://issues.apache.org/jira/browse/HDFS-15288
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Ayush Saxena
>            Assignee: Ayush Saxena
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, HDFS-15288-03.patch, HDFS-15288-Addendum-01.patch
>
>
> The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block placement policy, which makes it apt for replicated files but not very efficient for EC files, which by default use {{BlockPlacementPolicyRackFaultTolerant}}. So I propose to add a new BPP with the same optimization as ASBPP while keeping the spread of blocks across the maximum number of racks, as in RackFaultTolerantBPP.
> This could extend {{BlockPlacementPolicyRackFaultTolerant}} rather than {{BlockPlacementPolicyDefault}} (as ASBPP does) and keep the other optimization logic the same as ASBPP.
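The release note above says the BPP is selected by configuring the block placement policy class. Assuming the standard HDFS key for that setting, {{dfs.block.replicator.classname}} (verify it against your Hadoop version's hdfs-default.xml), the hdfs-site.xml entry would look roughly like:

```xml
<!-- Sketch: select the new BPP on the NameNode via hdfs-site.xml.
     The property name is assumed to be the standard block-placement key. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceRackFaultTolerantBlockPlacementPolicy</value>
</property>
```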
[jira] [Commented] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
[ https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171463#comment-17171463 ]

Akira Ajisaka commented on HDFS-15508:
---------------------------------------

Thank you for your review.

bq. Will this get a clean javadoc report?

Started https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/39/

> [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
> ------------------------------------------------------
>
>                 Key: HDFS-15508
>                 URL: https://issues.apache.org/jira/browse/HDFS-15508
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>              Labels: newbie
>         Attachments: HDFS-15508.01.patch
>
> {noformat}
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java:21: error: reference not found
> [ERROR] * Implementations should extend {@link AbstractDelegationTokenSecretManager}.
> [ERROR] ^
> {noformat}
> Full error log:
> https://gist.github.com/aajisaka/a7dde76a4ba2942f60bf6230ec9ed6e1
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
[jira] [Updated] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module
[ https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xieming Li updated HDFS-15506:
------------------------------
    Attachment: HDFS-15506.001.patch
        Status: Patch Available  (was: Open)

> [JDK 11] Fix javadoc errors in hadoop-hdfs module
> --------------------------------------------------
>
>                 Key: HDFS-15506
>                 URL: https://issues.apache.org/jira/browse/HDFS-15506
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Assignee: Xieming Li
>            Priority: Major
>              Labels: newbie
>         Attachments: HDFS-15506.001.patch
>
> {noformat}
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java:43: error: self-closing element not allowed
> [ERROR] *
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:682: error: malformed HTML
> [ERROR] * a NameNode per second. Values <= 0 disable throttling. This affects
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:1780: error: exception not thrown: java.io.FileNotFoundException
> [ERROR] * @throws FileNotFoundException
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java:176: error: @param name not found
> [ERROR] * @param mtime The snapshot creation time set by Time.now().
> [ERROR] ^
> [ERROR] /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:2187: error: exception not thrown: java.lang.Exception
> [ERROR] * @exception Exception if the filesystem does not exist.
> [ERROR] ^
> {noformat}
> Full error log:
> https://gist.github.com/aajisaka/a0c16f0408a623e798dd7df29fbddf82
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
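The hadoop-hdfs errors above add three more doclet categories: malformed HTML from bare comparison operators, {{@throws}}/{{@exception}} tags for exceptions the method cannot throw, and {{@param}} tags naming a nonexistent parameter. A hypothetical sketch of the fixes (again illustrative names, not the actual Hadoop sources):

```java
/** Hypothetical examples of the remaining JDK 11 javadoc fix categories. */
public class MoreJavadocFixes {

    /**
     * Bare {@code <} or {@code >} characters are malformed HTML under the
     * JDK 11 doclet; write them as HTML entities or inside an inline
     * {@code ...} tag. For example: values &lt;= 0 disable throttling.
     *
     * A stale "@throws SomeException" tag is only legal if the method can
     * actually throw it; the usual fix is to drop the tag or declare the
     * exception. Likewise, every "@param" must name a declared parameter.
     *
     * @param limit threshold; values {@code <= 0} disable the check
     * @return true if {@code limit} is positive
     */
    public static boolean isThrottled(int limit) {
        return limit > 0;
    }
}
```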