[jira] [Commented] (HDFS-15236) Upgrade googletest to the latest version
[ https://issues.apache.org/jira/browse/HDFS-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113772#comment-17113772 ] Akira Ajisaka commented on HDFS-15236: -- It seems that my compile error is related to protocol buffers, not to gtest. Sorry.

> Upgrade googletest to the latest version
> ----------------------------------------
> Key: HDFS-15236
> URL: https://issues.apache.org/jira/browse/HDFS-15236
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: native, test
> Reporter: Akira Ajisaka
> Priority: Major
>
> libhdfspp currently uses gmock-1.7.0 with the patch in HDFS-15232. gmock was
> moved into googletest, whose latest version is 1.10.0. Let's upgrade it to
> remove our own patch.

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15236) Upgrade googletest to the latest version
[ https://issues.apache.org/jira/browse/HDFS-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113771#comment-17113771 ] Akira Ajisaka commented on HDFS-15236: -- I tried compiling again and it failed. Full log: https://gist.github.com/aajisaka/6935858b487a199b8254ddaaf889bd90
{noformat}
aajisaka-x1carbon% protoc --version
libprotoc 3.7.1
aajisaka-x1carbon% mvn -version
Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 11.0.7, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.4.0-31-generic", arch: "amd64", family: "unix"
{noformat}
[jira] [Commented] (HDFS-15370) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory
[ https://issues.apache.org/jira/browse/HDFS-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113764#comment-17113764 ] Srinivasu Majeti commented on HDFS-15370: - Thank you [~umamaheswararao] for your quick confirmation. Right now I have two clarifications in this viewfs context:
# Do we support symlink creation only in this use case of viewfs to an actual namespace (which really targets only directories in the target namespace) through core-site.xml, or can we do it for files as well? I don't remember whether there is any other way to do it.
# So we get isSymlink() as true only for symlink_name (as configured in fs.viewfs.mounttable.fsname.link./symlink_name=hdfs://namespace/target_dir) through core-site customisation, and no other hdfs CLI or feature does it, right?

> listStatus and getFileStatus behave inconsistent in the case of ViewFs
> implementation for isDirectory
> -----------------------------------------------------------------------
> Key: HDFS-15370
> URL: https://issues.apache.org/jira/browse/HDFS-15370
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 3.0.0, 3.1.0
> Reporter: Srinivasu Majeti
> Priority: Major
> Labels: viewfs
>
> The listStatus and getFileStatus implementations in ViewFs do not return
> consistent isDirectory values for the same element: listStatus returns
> isDirectory of all softlinks as false, while getFileStatus returns
> isDirectory as true.
> {code:java}
> [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/"
> FileStatus of viewfs://c3121/testme21may isDirectory:false
> FileStatus of viewfs://c3121/tmp isDirectory:false
> FileStatus of viewfs://c3121/foo isDirectory:false
> FileStatus of viewfs://c3121/tmp21may isDirectory:false
> FileStatus of viewfs://c3121/testme isDirectory:false
> FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false
> FileStatus of / isDirectory:true
> [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2
> FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false
> FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true
> FileStatus of /testme2 isDirectory:true <--- returns true
> [hdfs@c3121-node2 ~]$ {code}
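The mount-link configuration discussed in the comment above can be sketched as a core-site.xml fragment. The mount-table name `c3121` and the target URI follow the examples quoted in this thread; treat this as an illustrative shape, not a verbatim copy of the reporter's configuration:

```xml
<configuration>
  <!-- the default filesystem is the view over the mount table -->
  <property>
    <name>fs.defaultFS</name>
    <value>viewfs://c3121</value>
  </property>
  <!-- mount link: /testme2 in the view resolves to a directory
       in the target namespace; ViewFs reports it as a symlink -->
  <property>
    <name>fs.viewfs.mounttable.c3121.link./testme2</name>
    <value>hdfs://namespace/target_dir</value>
  </property>
</configuration>
```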
[jira] [Commented] (HDFS-15370) listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation for isDirectory
[ https://issues.apache.org/jira/browse/HDFS-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113760#comment-17113760 ] Uma Maheswara Rao G commented on HDFS-15370: Thank you [~smajeti] for reporting it. I think you are right: in listStatus we have a check for isLink, but we don't have it in getFileStatus. I think the motivation for showing isDir as false was probably that these entries are links; by not representing links as regular directories in ls, there is less confusion when navigating the tree. If anyone remembers why it was done this way, please feel free to comment. Coming to getFileStatus, I think we can make it similar to listStatus. listStatus checks:
{code:java}
if (inode.isLink()) {
  INodeLink link = (INodeLink) inode;
  result[i++] = new FileStatus(0, false, 0, 0,
      creationTime, creationTime, PERMISSION_555,
      ugi.getShortUserName(), ugi.getPrimaryGroupName(),
      link.getTargetLink(),
      new Path(inode.fullPath).makeQualified(myUri, null));
} else {
  result[i++] = new FileStatus(0, true, 0, 0,
      creationTime, creationTime, PERMISSION_555,
      ugi.getShortUserName(), ugi.getGroupNames()[0],
      new Path(inode.fullPath).makeQualified(myUri, null));
}
{code}
getFileStatus:
{code:java}
return new FileStatus(0, true, 0, 0,
    creationTime, creationTime, PERMISSION_555,
    ugi.getShortUserName(), ugi.getPrimaryGroupName(),
    new Path(theInternalDir.fullPath).makeQualified(myUri, ROOT_PATH));
{code}
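The fix direction suggested above (derive isDirectory the same way in both APIs) can be modeled with a small self-contained sketch. `InternalNode`, `isDirectory`, `getFileStatus`, and `listStatus` here are illustrative stand-ins for the real ViewFs internals, not the actual Hadoop types:

```java
import java.util.List;

public class ViewFsStatusSketch {
    // Illustrative stand-in for ViewFs' internal tree node (not the real INode type).
    static final class InternalNode {
        final String path;
        final boolean isLink; // mount links behave like ViewFs softlinks
        InternalNode(String path, boolean isLink) { this.path = path; this.isLink = isLink; }
    }

    // Shared helper: both listStatus and getFileStatus derive isDirectory
    // the same way, so the two APIs cannot disagree for the same element.
    static boolean isDirectory(InternalNode node) {
        return !node.isLink; // links are reported as non-directories, as listStatus does
    }

    static boolean getFileStatus(InternalNode node) {
        return isDirectory(node); // fixed: no longer hard-coded to true
    }

    static boolean[] listStatus(List<InternalNode> children) {
        boolean[] dirs = new boolean[children.size()];
        for (int i = 0; i < children.size(); i++) {
            dirs[i] = isDirectory(children.get(i));
        }
        return dirs;
    }

    public static void main(String[] args) {
        InternalNode link = new InternalNode("/testme2", true);
        InternalNode dir = new InternalNode("/testme2/newfolder", false);
        // Both APIs now agree for the same element.
        System.out.println(getFileStatus(link) == listStatus(List.of(link))[0]); // true
        System.out.println(getFileStatus(dir));  // true (real directory)
        System.out.println(getFileStatus(link)); // false (mount link)
    }
}
```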
[jira] [Commented] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113748#comment-17113748 ] Xiaoqiao He commented on HDFS-15368: Thanks [~vagarychen] for your information. This case is easy to reproduce; I have attached the failing test log [^TestBalancerWithHANameNodes.testBalancerObserver.log], which may help to dig out the root cause.

> TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
> ------------------------------------------------------------------------
> Key: HDFS-15368
> URL: https://issues.apache.org/jira/browse/HDFS-15368
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Xiaoqiao He
> Assignee: Xiaoqiao He
> Priority: Major
> Labels: balancer, test
> Attachments: HDFS-15368.001.patch, TestBalancerWithHANameNodes.testBalancerObserver.log
>
> While working on HDFS-13183, I found that
> TestBalancerWithHANameNodes#testBalancerWithObserver fails occasionally
> because of the following code segment. Consider 1 ANN + 1 SBN + 2 ONNs:
> when getBlocks is invoked with the Observer Read feature enabled, it can be
> served by either of the two ObserverNNs, based on my observation. So
> verifying only the first ObserverNN and checking its number of #getBlocks
> invocations is not reliable.
> {code:java}
> for (int i = 0; i < cluster.getNumNameNodes(); i++) {
>   // First observer node is at idx 2, or 3 if 2 has been shut down
>   // It should get both getBlocks calls, all other NNs should see 0 calls
>   int expectedObserverIdx = withObserverFailure ? 3 : 2;
>   int expectedCount = (i == expectedObserverIdx) ? 2 : 0;
>   verify(namesystemSpies.get(i), times(expectedCount))
>       .getBlocks(any(), anyLong(), anyLong());
> }
> {code}
> cc [~xkrogen], [~weichiu]. I am not very familiar with the Observer Read
> feature, would you like to give some suggestions?
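Since either observer may serve the getBlocks calls, one robust alternative to pinning a single observer index is to assert on the total call count across all observer NNs, while still requiring zero calls on the ANN/SBN. A self-contained sketch of that idea; the `callCounts` map is an illustrative stand-in for the Mockito spies in the real test:

```java
import java.util.Map;

public class ObserverCallCheckSketch {
    // Returns true when the observers (indices >= firstObserverIdx) together
    // served exactly expectedTotal getBlocks calls and no non-observer served any.
    static boolean verifyObserverCalls(Map<Integer, Integer> callCounts,
                                       int firstObserverIdx, int expectedTotal) {
        int observerCalls = 0;
        for (Map.Entry<Integer, Integer> e : callCounts.entrySet()) {
            if (e.getKey() >= firstObserverIdx) {
                observerCalls += e.getValue(); // any observer may take the call
            } else if (e.getValue() != 0) {
                return false; // ANN/SBN must see zero getBlocks calls
            }
        }
        return observerCalls == expectedTotal;
    }

    public static void main(String[] args) {
        // 1 ANN + 1 SBN + 2 ONNs; the 2 getBlocks calls may land on either observer.
        System.out.println(verifyObserverCalls(Map.of(0, 0, 1, 0, 2, 2, 3, 0), 2, 2)); // true
        System.out.println(verifyObserverCalls(Map.of(0, 0, 1, 0, 2, 1, 3, 1), 2, 2)); // true
        System.out.println(verifyObserverCalls(Map.of(0, 1, 1, 0, 2, 1, 3, 0), 2, 2)); // false
    }
}
```

Either distribution of the two calls across the observers passes, which matches the observed nondeterminism described above.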
[jira] [Updated] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15368: --- Attachment: TestBalancerWithHANameNodes.testBalancerObserver.log
[jira] [Commented] (HDFS-15322) Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same.
[ https://issues.apache.org/jira/browse/HDFS-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113745#comment-17113745 ] Hudson commented on HDFS-15322: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18287 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18287/]) HDFS-15322. Make NflyFS to work when ViewFsOverloadScheme's scheme and (github: rev 4734c77b4b64b7c6432da4cc32881aba85f94ea1)
* (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/FsGetter.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/HCFSMountTableConfigLoader.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemOverloadSchemeWithHdfsScheme.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/NflyFSystem.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java

> Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris
> schemes are same.
> ----------------------------------------------------------------------
> Key: HDFS-15322
> URL: https://issues.apache.org/jira/browse/HDFS-15322
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: fs, nflyFs, viewfs, viewfsOverloadScheme
> Affects Versions: 3.2.1
> Reporter: Uma Maheswara Rao G
> Assignee: Uma Maheswara Rao G
> Priority: Major
> Fix For: 3.4.0
>
> Currently an Nfly mount link will not work when we use ViewFSOverloadScheme,
> because when the configured scheme is hdfs and the target URIs' scheme is
> also hdfs, it faces the looping issue we discussed in the design. We need to
> use FsGetter to handle the looping.
[jira] [Updated] (HDFS-15322) Make NflyFS to work when ViewFsOverloadScheme's scheme and target uris schemes are same.
[ https://issues.apache.org/jira/browse/HDFS-15322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-15322: --- Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) I have just merged this to trunk! Thanks for the review [~weichiu]
[jira] [Commented] (HDFS-15355) Make the default block storage policy ID configurable
[ https://issues.apache.org/jira/browse/HDFS-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113699#comment-17113699 ] Yang Yun commented on HDFS-15355: - Thanks [~elgoiri] for the review. Updated to HDFS-15355.011.patch with doc changes to ArchivalStorage.md.

> Make the default block storage policy ID configurable
> -----------------------------------------------------
> Key: HDFS-15355
> URL: https://issues.apache.org/jira/browse/HDFS-15355
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: block placement, namenode
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Minor
> Attachments: HDFS-15355.001.patch, HDFS-15355.002.patch, HDFS-15355.003.patch, HDFS-15355.004.patch, HDFS-15355.005.patch, HDFS-15355.006.patch, HDFS-15355.007.patch, HDFS-15355.008.patch, HDFS-15355.009.patch, HDFS-15355.010.patch, HDFS-15355.011.patch
>
> Make the default block storage policy ID configurable. Sometimes we want to
> use a different storage policy ID from cluster startup.
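The change being reviewed would presumably be used as an hdfs-site.xml entry along these lines. The property name below is hypothetical, chosen only to illustrate the shape of such a setting; the actual key is defined by the patch:

```xml
<!-- hdfs-site.xml sketch; "dfs.namenode.default.storage.policy.id"
     is a hypothetical key, not necessarily the one the patch introduces -->
<property>
  <name>dfs.namenode.default.storage.policy.id</name>
  <!-- e.g. 2 corresponds to the built-in COLD policy, instead of the HOT default -->
  <value>2</value>
</property>
```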
[jira] [Updated] (HDFS-15355) Make the default block storage policy ID configurable
[ https://issues.apache.org/jira/browse/HDFS-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15355: Attachment: HDFS-15355.011.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15355) Make the default block storage policy ID configurable
[ https://issues.apache.org/jira/browse/HDFS-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15355: Status: Open (was: Patch Available)
[jira] [Commented] (HDFS-15236) Upgrade googletest to the latest version
[ https://issues.apache.org/jira/browse/HDFS-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113645#comment-17113645 ] Siyao Meng commented on HDFS-15236: --- [~aajisaka] I am able to compile with {{mvn install -Pnative -DskipTests -e}} on Ubuntu 20.04 LTS with OpenJDK 11.0.7.
{code:bash}
$ protoc --version
libprotoc 3.7.1
$ mvn -version
Apache Maven 3.6.3
Maven home: /usr/share/maven
Java version: 11.0.7, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.4.0-31-generic", arch: "amd64", family: "unix"
$ JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 mvn install -Pnative -DskipTests -e
...
[INFO]
[INFO] BUILD SUCCESS
[INFO]
[INFO] Total time: 07:53 min
[INFO] Finished at: 2020-05-21T17:09:52-07:00
[INFO]
{code}
[jira] [Commented] (HDFS-15362) FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all distinct blocks
[ https://issues.apache.org/jira/browse/HDFS-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113557#comment-17113557 ] Hadoop QA commented on HDFS-15362: -- | (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 16s | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 21m 35s | trunk passed |
| +1 | compile | 1m 8s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 1m 13s | trunk passed |
| +1 | shadedclient | 17m 9s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 41s | trunk passed |
| 0 | spotbugs | 3m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 59s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 10s | the patch passed |
| +1 | compile | 1m 3s | the patch passed |
| +1 | javac | 1m 3s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 1m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 24s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 38s | the patch passed |
| +1 | findbugs | 3m 3s | the patch passed |
|| Other Tests ||
| -1 | unit | 108m 18s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 178m 49s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
| | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
| | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29349/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15362 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13003685/HDFS-15362.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 663b71c2bc26 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ac4540dd8e2 |
| Default Java | Private
[jira] [Commented] (HDFS-13639) SlotReleaser is not fast enough
[ https://issues.apache.org/jira/browse/HDFS-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113526#comment-17113526 ] Hudson commented on HDFS-13639: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18285 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18285/]) HDFS-13639. SlotReleaser is not fast enough (#1885) (github: rev be374faf429d28561dd9c582f5c55451213d89a4) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/shortcircuit/DfsClientShmManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java > SlotReleaser is not fast enough > --- > > Key: HDFS-13639 > URL: https://issues.apache.org/jira/browse/HDFS-13639 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.4.0, 2.6.0, 3.0.2 > Environment: 1. YCSB: > {color:#00} recordcount=20 > fieldcount=1 > fieldlength=1000 > operationcount=1000 > > workload=com.yahoo.ycsb.workloads.CoreWorkload > > table=ycsb-test > columnfamily=C > readproportion=1 > updateproportion=0 > insertproportion=0 > scanproportion=0 > > maxscanlength=0 > requestdistribution=zipfian > > # default > readallfields=true > writeallfields=true > scanlengthdistribution=constan{color} > {color:#00}2. 
datanode:{color} > -Xmx2048m -Xms2048m -Xmn1024m -XX:MaxDirectMemorySize=1024m > -XX:MaxPermSize=256m -Xloggc:$run_dir/stdout/datanode_gc_${start_time}.log > -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError > -XX:HeapDumpPath=$log_dir -XX:+PrintGCApplicationStoppedTime > -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 > -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled > -XX:+CMSClassUnloadingEnabled -XX:CMSMaxAbortablePrecleanTime=1 > -XX:+CMSScavengeBeforeRemark -XX:+PrintPromotionFailure > -XX:+CMSConcurrentMTEnabled -XX:+ExplicitGCInvokesConcurrent > -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking > -verbose:gc -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps > {color:#00}3. regionserver:{color} > {color:#00}-Xmx10g -Xms10g -XX:MaxDirectMemorySize=10g > -XX:MaxGCPauseMillis=150 -XX:MaxTenuringThreshold=2 > -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=5 > -Xloggc:$run_dir/stdout/regionserver_gc_${start_time}.log -Xss256k > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$log_dir -verbose:gc > -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime > -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy > -XX:+PrintTenuringDistribution -XX:+PrintSafepointStatistics > -XX:PrintSafepointStatisticsCount=1 -XX:PrintFLSStatistics=1 > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m > -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking > -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=65 > -XX:+ParallelRefProcEnabled -XX:ConcGCThreads=4 -XX:ParallelGCThreads=16 > -XX:G1HeapRegionSize=32m -XX:G1MixedGCCountTarget=64 > -XX:G1OldCSetRegionThresholdPercent=5{color} > {color:#00}block cache is disabled:{color}{color:#00} > hbase.bucketcache.size > 0.9 > {color} > >Reporter: Gang Xie >Assignee: Lisheng Sun >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-13639-2.4.diff, HDFS-13639.001.patch, > HDFS-13639.002.patch, 
ShortCircuitCache_new_slotReleaser.diff, > perf_after_improve_SlotReleaser.png, perf_before_improve_SlotReleaser.png > > > When testing the performance of HDFS short-circuit reads with YCSB, we > found that the SlotReleaser of the ShortCircuitCache has a performance issue. > The problem is that the QPS of slot releasing only reaches ~1000 while the > QPS of slot allocating is ~3000. This means the replica info on the datanode > cannot be released in time, which causes a lot of GCs and finally full GCs. > > The flame graph shows that the SlotReleaser spends a lot of time connecting > to the domain socket and throwing/catching exceptions when closing the domain > socket and its streams. It doesn't make sense to do the connecting and closing > each time. Each time we connect to the domain socket, the Datanode allocates > a new thread to free the slot; there is a lot of costly initialization work. > We need to reuse the domain socket. > > After switching to reusing the domain socket (see the attached diff), we get > a great improvement (see the perf): > # without reusing the
[jira] [Resolved] (HDFS-13639) SlotReleaser is not fast enough
[ https://issues.apache.org/jira/browse/HDFS-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-13639. Fix Version/s: 3.4.0 Resolution: Fixed > SlotReleaser is not fast enough > --- > > Key: HDFS-13639 > URL: https://issues.apache.org/jira/browse/HDFS-13639 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Affects Versions: 2.4.0, 2.6.0, 3.0.2 > Environment: 1. YCSB: > {color:#00} recordcount=20 > fieldcount=1 > fieldlength=1000 > operationcount=1000 > > workload=com.yahoo.ycsb.workloads.CoreWorkload > > table=ycsb-test > columnfamily=C > readproportion=1 > updateproportion=0 > insertproportion=0 > scanproportion=0 > > maxscanlength=0 > requestdistribution=zipfian > > # default > readallfields=true > writeallfields=true > scanlengthdistribution=constan{color} > {color:#00}2. datanode:{color} > -Xmx2048m -Xms2048m -Xmn1024m -XX:MaxDirectMemorySize=1024m > -XX:MaxPermSize=256m -Xloggc:$run_dir/stdout/datanode_gc_${start_time}.log > -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError > -XX:HeapDumpPath=$log_dir -XX:+PrintGCApplicationStoppedTime > -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 > -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled > -XX:+CMSClassUnloadingEnabled -XX:CMSMaxAbortablePrecleanTime=1 > -XX:+CMSScavengeBeforeRemark -XX:+PrintPromotionFailure > -XX:+CMSConcurrentMTEnabled -XX:+ExplicitGCInvokesConcurrent > -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking > -verbose:gc -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps > {color:#00}3. 
regionserver:{color} > {color:#00}-Xmx10g -Xms10g -XX:MaxDirectMemorySize=10g > -XX:MaxGCPauseMillis=150 -XX:MaxTenuringThreshold=2 > -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=5 > -Xloggc:$run_dir/stdout/regionserver_gc_${start_time}.log -Xss256k > -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$log_dir -verbose:gc > -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime > -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintAdaptiveSizePolicy > -XX:+PrintTenuringDistribution -XX:+PrintSafepointStatistics > -XX:PrintSafepointStatisticsCount=1 -XX:PrintFLSStatistics=1 > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m > -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking > -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=65 > -XX:+ParallelRefProcEnabled -XX:ConcGCThreads=4 -XX:ParallelGCThreads=16 > -XX:G1HeapRegionSize=32m -XX:G1MixedGCCountTarget=64 > -XX:G1OldCSetRegionThresholdPercent=5{color} > {color:#00}block cache is disabled:{color}{color:#00} > hbase.bucketcache.size > 0.9 > {color} > >Reporter: Gang Xie >Assignee: Lisheng Sun >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-13639-2.4.diff, HDFS-13639.001.patch, > HDFS-13639.002.patch, ShortCircuitCache_new_slotReleaser.diff, > perf_after_improve_SlotReleaser.png, perf_before_improve_SlotReleaser.png > > > When test the performance of the ShortCircuit Read of the HDFS with YCSB, we > find that SlotReleaser of the ShortCircuitCache has some performance issue. > The problem is that, the qps of the slot releasing could only reach to 1000+ > while the qps of the slot allocating is ~3000. This means that the replica > info on datanode could not be released in time, which causes a lot of GCs and > finally full GCs. > > The fireflame graph shows that SlotReleaser spends a lot of time to do domain > socket connecting and throw/catching the exception when close the domain > socket and its streams. 
It doesn't make sense to connect and close each time. Each time we connect to the domain socket, the > Datanode allocates a new thread to free the slot; there is a lot of initialization > work, and it is costly. We need to reuse the domain socket. > > After switching to reusing the domain socket (see the attached diff), we get a great > improvement (see the perf attachments): > # Without reusing the domain socket, the get qps of YCSB gets worse > and worse, and after about 45 minutes, full GC starts. When we reuse the domain > socket, no full GC is found, the stress test finishes smoothly, and the > qps of allocating and releasing match. > # Due to datanode young GC, the YCSB get qps without the improvement is > even smaller than the one with the improvement, ~3700 vs ~4200. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
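The connection-reuse pattern the description calls for can be sketched in plain Java (hypothetical names, not the actual HDFS-13639 patch): the releaser keeps one cached connection and reconnects only when the cached one has gone bad, so a burst of slot releases pays the connect and thread-allocation cost once instead of per release.

```java
// Sketch only: `Connection` stands in for a DomainSocket to the DataNode.
import java.util.concurrent.atomic.AtomicInteger;

public class ReusingSlotReleaser {
    /** Hypothetical stand-in for a domain-socket connection. */
    static class Connection {
        static final AtomicInteger OPENED = new AtomicInteger();
        Connection() { OPENED.incrementAndGet(); }   // count connects to show reuse
        boolean isHealthy() { return true; }
        void sendRelease(long slotId) { /* would write the release request */ }
    }

    private Connection cached;  // reused across releases

    /** Release a slot, connecting only if no healthy cached connection exists. */
    public synchronized void releaseSlot(long slotId) {
        if (cached == null || !cached.isHealthy()) {
            cached = new Connection();  // connect once, then reuse
        }
        cached.sendRelease(slotId);
    }

    public static void main(String[] args) {
        ReusingSlotReleaser r = new ReusingSlotReleaser();
        for (long id = 0; id < 1000; id++) {
            r.releaseSlot(id);
        }
        // 1000 releases, but only one connection was opened.
        System.out.println(Connection.OPENED.get());
    }
}
```

The design point is simply amortization: the per-release cost drops from "connect + server-side thread spawn + close" to a single write on an already-open socket.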
[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113467#comment-17113467 ] Jim Brennan commented on HDFS-13183: I am +1 (non-binding) on the second addendum patch. > Standby NameNode process getBlocks request to reduce Active load > > > Key: HDFS-13183 > URL: https://issues.apache.org/jira/browse/HDFS-13183 > Project: Hadoop HDFS > Issue Type: New Feature > Components: balancer mover, namenode >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.1, 3.4.0 > > Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, > HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch, > HDFS-13183.006.patch, HDFS-13183.007.patch, HDFS-13183.addendum.patch, > HDFS-13183.addendum.patch > > > The performance of Active NameNode could be impact when {{Balancer}} requests > #getBlocks, since query blocks of overly full DNs performance is extremely > inefficient currently. The main reason is {{NameNodeRpcServer#getBlocks}} > hold read lock for long time. In extreme case, all handlers of Active > NameNode RPC server are occupied by one reader > {{NameNodeRpcServer#getBlocks}} and other write operation calls, thus Active > NameNode enter a state of false death for number of seconds even for minutes. > The similar performance concerns of Balancer have reported by HDFS-9412, > HDFS-7967, etc. > If Standby NameNode can shoulder #getBlocks heavy burden, it could speed up > the progress of balancing and reduce performance impact to Active NameNode. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14546) Document block placement policies
[ https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113457#comment-17113457 ] Hadoop QA commented on HDFS-14546: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 0s{color} | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 39m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29348/artifact/out/Dockerfile | | JIRA Issue | HDFS-14546 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12989409/HDFS-14546-08.patch | | Optional Tests | dupname asflicense mvnsite markdownlint | | uname | Linux 6f465ec2d81c 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ac4540dd8e2 | | Max. process+thread count | 311 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29348/console | | versions | git=2.17.1 maven=3.6.0 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. > Document block placement policies > - > > Key: HDFS-14546 > URL: https://issues.apache.org/jira/browse/HDFS-14546 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Amithsha >Priority: Major > Labels: documentation > Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, > HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, > HDFS-14546-06.patch, HDFS-14546-07.patch, HDFS-14546-08.patch, > HdfsDesign.patch > > > Currently, all the documentation refers to the default block placement policy. 
> However, over time there have been new policies: > * BlockPlacementPolicyRackFaultTolerant (HDFS-7891) > * BlockPlacementPolicyWithNodeGroup (HDFS-3601) > * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006) > We should update the documentation to refer to them explaining their > particularities and probably how to setup each one of them. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15362) FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all distinct blocks
[ https://issues.apache.org/jira/browse/HDFS-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113450#comment-17113450 ] hemanthboyina commented on HDFS-15362: -- Thanks [~elgoiri] for the review. I have updated the patch; please review. > FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all > distinct blocks > -- > > Key: HDFS-15362 > URL: https://issues.apache.org/jira/browse/HDFS-15362 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15362.001.patch, HDFS-15362.002.patch > > > FileWithSnapshotFeature#updateQuotaAndCollectBlocks uses list to collect > blocks > {code:java} > List allBlocks = new ArrayList(); > if (file.getBlocks() != null) { > allBlocks.addAll(Arrays.asList(file.getBlocks())); > }{code} > INodeFile#storagespaceConsumedContiguous collects all distinct blocks by set > {code:java} > // Collect all distinct blocks > Set allBlocks = new HashSet<>(Arrays.asList(getBlocks())); > DiffList diffs = sf.getDiffs().asList(); > for(FileDiff diff : diffs) { >BlockInfo[] diffBlocks = diff.getBlocks(); >if (diffBlocks != null) { > allBlocks.addAll(Arrays.asList(diffBlocks)); > } {code} > but on updating the reclaim context we subtract these both , so wrong quota > value can be updated > {code:java} > QuotaCounts current = file.storagespaceConsumed(bsp); > reclaimContext.quotaDelta().add(oldCounts.subtract(current)); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
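The difference between the two collection strategies quoted in the issue description can be shown with a minimal self-contained sketch (plain Java, not HDFS code; a `record` stands in for BlockInfo with equality by block id): blocks shared between the current file and a snapshot diff are counted twice by the List but once by the Set, which is why a List-based count feeds a wrong quota delta.

```java
// Illustration only: List double-counts blocks shared with snapshot diffs,
// a Set collects each distinct block once.
import java.util.*;

public class DistinctBlocksSketch {
    record Block(long id) {}  // hypothetical stand-in for BlockInfo

    // Buggy pattern (List keeps duplicates).
    static int listCount(List<Block> fileBlocks, List<Block> diffBlocks) {
        List<Block> all = new ArrayList<>(fileBlocks);
        all.addAll(diffBlocks);
        return all.size();
    }

    // Distinct-blocks pattern (Set deduplicates by Block equality).
    static int distinctCount(List<Block> fileBlocks, List<Block> diffBlocks) {
        Set<Block> all = new HashSet<>(fileBlocks);
        all.addAll(diffBlocks);
        return all.size();
    }

    public static void main(String[] args) {
        List<Block> fileBlocks = List.of(new Block(1), new Block(2), new Block(3));
        // Blocks 2 and 3 are still referenced by a snapshot diff.
        List<Block> diffBlocks = List.of(new Block(2), new Block(3));
        // The List over-counts (5); the Set yields the 3 distinct blocks.
        System.out.println(listCount(fileBlocks, diffBlocks) + " vs "
                + distinctCount(fileBlocks, diffBlocks));
    }
}
```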
[jira] [Updated] (HDFS-15362) FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all distinct blocks
[ https://issues.apache.org/jira/browse/HDFS-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15362: - Attachment: HDFS-15362.002.patch > FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all > distinct blocks > -- > > Key: HDFS-15362 > URL: https://issues.apache.org/jira/browse/HDFS-15362 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15362.001.patch, HDFS-15362.002.patch > > > FileWithSnapshotFeature#updateQuotaAndCollectBlocks uses list to collect > blocks > {code:java} > List allBlocks = new ArrayList(); > if (file.getBlocks() != null) { > allBlocks.addAll(Arrays.asList(file.getBlocks())); > }{code} > INodeFile#storagespaceConsumedContiguous collects all distinct blocks by set > {code:java} > // Collect all distinct blocks > Set allBlocks = new HashSet<>(Arrays.asList(getBlocks())); > DiffList diffs = sf.getDiffs().asList(); > for(FileDiff diff : diffs) { >BlockInfo[] diffBlocks = diff.getBlocks(); >if (diffBlocks != null) { > allBlocks.addAll(Arrays.asList(diffBlocks)); > } {code} > but on updating the reclaim context we subtract these both , so wrong quota > value can be updated > {code:java} > QuotaCounts current = file.storagespaceConsumed(bsp); > reclaimContext.quotaDelta().add(oldCounts.subtract(current)); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113424#comment-17113424 ] Chen Liang commented on HDFS-15368: --- [~hexiaoqiao] thanks for reporting and looking into this! It is actually expected to always hit the idx=2 observer as long as it is running. The reason is that, without NameNode randomization, the client will always try the first Observer (idx 2 in this case) before the second (idx 3 here), unless the first observer fails to respond. So in the case of withObserverFailure = false, the Observer with idx=2 should be the one responding all the time. I will need to look into this. It would be helpful if you have an error stack trace. > TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally > > > Key: HDFS-15368 > URL: https://issues.apache.org/jira/browse/HDFS-15368 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Labels: balancer, test > Attachments: HDFS-15368.001.patch > > > When I am working on HDFS-13183, I found that > TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally, > because the following code segment. Consider there are 1 ANN + 1 SBN + 2ONN, > when invoke getBlocks with opening Observer Read feature, it could request > any one of two ObserverNN based on my observation. So only verify the first > ObserverNN and check times of invoke #getBlocks is not expected. > {code:java} > for (int i = 0; i < cluster.getNumNameNodes(); i++) { > // First observer node is at idx 2, or 3 if 2 has been shut down > // It should get both getBlocks calls, all other NNs should see 0 > calls > int expectedObserverIdx = withObserverFailure ? 3 : 2; > int expectedCount = (i == expectedObserverIdx) ? 2 : 0; > verify(namesystemSpies.get(i), times(expectedCount)) > .getBlocks(any(), anyLong(), anyLong()); > } > {code} > cc [~xkrogen],[~weichiu]. 
I am not very familiar with the Observer Read feature; > would you give some suggestions? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
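Since either observer may serve the requests, a looser assertion can be sketched independently of the Mockito harness (hypothetical helper, not the actual test code): instead of pinning both getBlocks calls to one specific observer index, check that every call landed on some observer and that the total matches the expected count.

```java
// Sketch of an index-agnostic check for "all getBlocks calls hit observers".
import java.util.List;

public class ObserverCallCheck {
    /**
     * callsPerNameNode[i] = number of getBlocks calls observed on NameNode i.
     * Passes iff the total equals expectedTotal and every call landed on a
     * NameNode whose index is in observerIdxs.
     */
    static boolean observersServedAll(int[] callsPerNameNode,
                                      List<Integer> observerIdxs,
                                      int expectedTotal) {
        int observerCalls = 0, total = 0;
        for (int i = 0; i < callsPerNameNode.length; i++) {
            total += callsPerNameNode[i];
            if (observerIdxs.contains(i)) observerCalls += callsPerNameNode[i];
        }
        return total == expectedTotal && observerCalls == expectedTotal;
    }

    public static void main(String[] args) {
        // 1 ANN + 1 SBN + 2 ONN (idx 2 and 3); either observer may answer.
        System.out.println(observersServedAll(new int[]{0, 0, 2, 0}, List.of(2, 3), 2)); // true
        System.out.println(observersServedAll(new int[]{0, 0, 0, 2}, List.of(2, 3), 2)); // true
        System.out.println(observersServedAll(new int[]{1, 0, 1, 0}, List.of(2, 3), 2)); // false: the ANN served one
    }
}
```

In the real test this would replace the per-index `times(expectedCount)` verification with one assertion over the spies' aggregate invocation counts.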
[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP
[ https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113418#comment-17113418 ] Ayush Saxena commented on HDFS-15288: - Yes, it's HDFS-14546, where the BPPs were getting documented, but that is on hold. I have pinged there, so this one doesn't go unnoticed once done. :) > Add Available Space Rack Fault Tolerant BPP > --- > > Key: HDFS-15288 > URL: https://issues.apache.org/jira/browse/HDFS-15288 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, > HDFS-15288-03.patch > > > The Present {{AvailableSpaceBlockPlacementPolicy}} extends the Default Block > Placement policy, which makes it apt for Replicated files. But not very > efficient for EC files, which by default use. > {{BlockPlacementPolicyRackFaultTolerant}}. So propose a to add new BPP having > similar optimization as ASBPP where as keeping the spread of Blocks to max > racks, i.e as RackFaultTolerantBPP. > This could extend {{BlockPlacementPolicyRackFaultTolerant}}, rather than the > {{BlockPlacementPOlicyDefault}} like ASBPP and keep other logics of > optimization same as ASBPP -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14546) Document block placement policies
[ https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113416#comment-17113416 ] Ayush Saxena commented on HDFS-14546: - [~Amithsha] any progress on this? Please include HDFS-15288 too. Let me know if you are facing any trouble; I will try to help you out. > Document block placement policies > - > > Key: HDFS-14546 > URL: https://issues.apache.org/jira/browse/HDFS-14546 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Íñigo Goiri >Assignee: Amithsha >Priority: Major > Labels: documentation > Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, > HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, > HDFS-14546-06.patch, HDFS-14546-07.patch, HDFS-14546-08.patch, > HdfsDesign.patch > > > Currently, all the documentation refers to the default block placement policy. > However, over time there have been new policies: > * BlockPlacementPolicyRackFaultTolerant (HDFS-7891) > * BlockPlacementPolicyWithNodeGroup (HDFS-3601) > * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006) > We should update the documentation to refer to them explaining their > particularities and probably how to setup each one of them. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15362) FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all distinct blocks
[ https://issues.apache.org/jira/browse/HDFS-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113402#comment-17113402 ] Íñigo Goiri commented on HDFS-15362: Can we add a couple more updateQuotaAndCollectBlocks() calls so we can check for a couple values and not just the current 0? > FileWithSnapshotFeature#updateQuotaAndCollectBlocks should collect all > distinct blocks > -- > > Key: HDFS-15362 > URL: https://issues.apache.org/jira/browse/HDFS-15362 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: hemanthboyina >Assignee: hemanthboyina >Priority: Major > Attachments: HDFS-15362.001.patch > > > FileWithSnapshotFeature#updateQuotaAndCollectBlocks uses list to collect > blocks > {code:java} > List allBlocks = new ArrayList(); > if (file.getBlocks() != null) { > allBlocks.addAll(Arrays.asList(file.getBlocks())); > }{code} > INodeFile#storagespaceConsumedContiguous collects all distinct blocks by set > {code:java} > // Collect all distinct blocks > Set allBlocks = new HashSet<>(Arrays.asList(getBlocks())); > DiffList diffs = sf.getDiffs().asList(); > for(FileDiff diff : diffs) { >BlockInfo[] diffBlocks = diff.getBlocks(); >if (diffBlocks != null) { > allBlocks.addAll(Arrays.asList(diffBlocks)); > } {code} > but on updating the reclaim context we subtract these both , so wrong quota > value can be updated > {code:java} > QuotaCounts current = file.storagespaceConsumed(bsp); > reclaimContext.quotaDelta().add(oldCounts.subtract(current)); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15355) Make the default block storage policy ID configurable
[ https://issues.apache.org/jira/browse/HDFS-15355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113398#comment-17113398 ] Íñigo Goiri commented on HDFS-15355: Can we update ArchivalStorage.md and related too? > Make the default block storage policy ID configurable > - > > Key: HDFS-15355 > URL: https://issues.apache.org/jira/browse/HDFS-15355 > Project: Hadoop HDFS > Issue Type: Improvement > Components: block placement, namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15355.001.patch, HDFS-15355.002.patch, > HDFS-15355.003.patch, HDFS-15355.004.patch, HDFS-15355.005.patch, > HDFS-15355.006.patch, HDFS-15355.007.patch, HDFS-15355.008.patch, > HDFS-15355.009.patch, HDFS-15355.010.patch > > > Make the default block storage policy ID configurable. Sometime we want to > use different storage policy ID from the startup of cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP
[ https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113394#comment-17113394 ] Íñigo Goiri commented on HDFS-15288: I cannot find it now but I remember we were adding documentation regarding the policies. Can we update (or rescue) that? > Add Available Space Rack Fault Tolerant BPP > --- > > Key: HDFS-15288 > URL: https://issues.apache.org/jira/browse/HDFS-15288 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, > HDFS-15288-03.patch > > > The Present {{AvailableSpaceBlockPlacementPolicy}} extends the Default Block > Placement policy, which makes it apt for Replicated files. But not very > efficient for EC files, which by default use. > {{BlockPlacementPolicyRackFaultTolerant}}. So propose a to add new BPP having > similar optimization as ASBPP where as keeping the spread of Blocks to max > racks, i.e as RackFaultTolerantBPP. > This could extend {{BlockPlacementPolicyRackFaultTolerant}}, rather than the > {{BlockPlacementPOlicyDefault}} like ASBPP and keep other logics of > optimization same as ASBPP -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15093) RENAME.TO_TRASH is ignored When RENAME.OVERWRITE is specified
[ https://issues.apache.org/jira/browse/HDFS-15093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113392#comment-17113392 ] Íñigo Goiri commented on HDFS-15093: +1 on [^HDFS-15093-04.patch]. > RENAME.TO_TRASH is ignored When RENAME.OVERWRITE is specified > - > > Key: HDFS-15093 > URL: https://issues.apache.org/jira/browse/HDFS-15093 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Harshakiran Reddy >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-15093-01.patch, HDFS-15093-02.patch, > HDFS-15093-03.patch, HDFS-15093-04.patch > > > When Rename Overwrite flag is specified the To_TRASH option gets silently > ignored. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113269#comment-17113269 ] Erik Krogen commented on HDFS-15368: cc [~vagarychen] and [~shv] > TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally > > > Key: HDFS-15368 > URL: https://issues.apache.org/jira/browse/HDFS-15368 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Labels: balancer, test > Attachments: HDFS-15368.001.patch > > > When I am working on HDFS-13183, I found that > TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally, > because the following code segment. Consider there are 1 ANN + 1 SBN + 2ONN, > when invoke getBlocks with opening Observer Read feature, it could request > any one of two ObserverNN based on my observation. So only verify the first > ObserverNN and check times of invoke #getBlocks is not expected. > {code:java} > for (int i = 0; i < cluster.getNumNameNodes(); i++) { > // First observer node is at idx 2, or 3 if 2 has been shut down > // It should get both getBlocks calls, all other NNs should see 0 > calls > int expectedObserverIdx = withObserverFailure ? 3 : 2; > int expectedCount = (i == expectedObserverIdx) ? 2 : 0; > verify(namesystemSpies.get(i), times(expectedCount)) > .getBlocks(any(), anyLong(), anyLong()); > } > {code} > cc [~xkrogen],[~weichiu]. I am not very familiar for Observer Read feature, > would you like give some suggestions? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113265#comment-17113265 ] Hadoop QA commented on HDFS-15368: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 59s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 23s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}183m 14s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29347/artifact/out/Dockerfile | | JIRA Issue | HDFS-15368 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13003617/HDFS-15368.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b29d6dc50d12 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ac4540dd8e2 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | unit |
[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113218#comment-17113218 ] Jim Brennan commented on HDFS-13183: [~hexiaoqiao] thanks for checking TestBalancer and fixing the problem causing TestBalancerWithNodeGroup to fail. I think a separate Jira for the TestBalancerWithHANameNodes#testBalancerWithObserver failures is appropriate. > Standby NameNode process getBlocks request to reduce Active load > > > Key: HDFS-13183 > URL: https://issues.apache.org/jira/browse/HDFS-13183 > Project: Hadoop HDFS > Issue Type: New Feature > Components: balancer mover, namenode >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.1, 3.4.0 > > Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, > HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch, > HDFS-13183.006.patch, HDFS-13183.007.patch, HDFS-13183.addendum.patch, > HDFS-13183.addendum.patch > > > The performance of Active NameNode could be impact when {{Balancer}} requests > #getBlocks, since query blocks of overly full DNs performance is extremely > inefficient currently. The main reason is {{NameNodeRpcServer#getBlocks}} > hold read lock for long time. In extreme case, all handlers of Active > NameNode RPC server are occupied by one reader > {{NameNodeRpcServer#getBlocks}} and other write operation calls, thus Active > NameNode enter a state of false death for number of seconds even for minutes. > The similar performance concerns of Balancer have reported by HDFS-9412, > HDFS-7967, etc. > If Standby NameNode can shoulder #getBlocks heavy burden, it could speed up > the progress of balancing and reduce performance impact to Active NameNode. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15370) listStatus and getFileStatus behave inconsistently in the case of ViewFs implementation for isDirectory
[ https://issues.apache.org/jira/browse/HDFS-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Srinivasu Majeti updated HDFS-15370: Summary: listStatus and getFileStatus behave inconsistently in the case of ViewFs implementation for isDirectory (was: listStatus and getFileStatus behave inconsistent in the case of ViewFs implementation) > listStatus and getFileStatus behave inconsistently in the case of ViewFs > implementation for isDirectory > - > > Key: HDFS-15370 > URL: https://issues.apache.org/jira/browse/HDFS-15370 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0, 3.1.0 >Reporter: Srinivasu Majeti >Priority: Major > Labels: viewfs > > The listStatus and getFileStatus implementations in ViewFs do not return a > consistent isDirectory value for an element: listStatus reports isDirectory > as false for all symlinks, while getFileStatus reports it as true. > {code:java} > [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop > classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/" > FileStatus of viewfs://c3121/testme21may isDirectory:false > FileStatus of viewfs://c3121/tmp isDirectory:false > FileStatus of viewfs://c3121/foo isDirectory:false > FileStatus of viewfs://c3121/tmp21may isDirectory:false > FileStatus of viewfs://c3121/testme isDirectory:false > FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false > FileStatus of / isDirectory:true > [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop > classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2 > FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false > FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true > FileStatus of /testme2 isDirectory:true <--- returns true > [hdfs@c3121-node2 ~]$ {code}
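The cross-check that exposes this bug can be expressed as a small, Hadoop-free helper. This is a hypothetical illustration (class and method names are ours, not part of Hadoop): given the isDirectory flag each API reported per path, list the paths where listStatus and getFileStatus disagree, such as the symlink /testme2 above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative only: compares the isDirectory answers of two APIs and
// returns every path on which they disagree.
public class IsDirectoryCrossCheck {
    static List<String> mismatches(Map<String, Boolean> fromListStatus,
                                   Map<String, Boolean> fromGetFileStatus) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : fromListStatus.entrySet()) {
            Boolean other = fromGetFileStatus.get(e.getKey());
            if (other != null && !other.equals(e.getValue())) {
                bad.add(e.getKey());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        // Mirrors the reported behavior: listStatus says false for the
        // symlink, getFileStatus says true.
        Map<String, Boolean> ls = Map.of("/testme2", false, "/tmp", false);
        Map<String, Boolean> gfs = Map.of("/testme2", true, "/tmp", false);
        System.out.println(mismatches(ls, gfs)); // [/testme2]
    }
}
```

A consistent ViewFs implementation should make this mismatch list empty for every path.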
[jira] [Updated] (HDFS-15370) listStatus and getFileStatus behave inconsistently in the case of ViewFs implementation
[ https://issues.apache.org/jira/browse/HDFS-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Srinivasu Majeti updated HDFS-15370: Description: The listStatus and getFileStatus implementations in ViewFs do not return a consistent isDirectory value for an element: listStatus reports isDirectory as false for all symlinks, while getFileStatus reports it as true. {code:java} [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/" FileStatus of viewfs://c3121/testme21may isDirectory:false FileStatus of viewfs://c3121/tmp isDirectory:false FileStatus of viewfs://c3121/foo isDirectory:false FileStatus of viewfs://c3121/tmp21may isDirectory:false FileStatus of viewfs://c3121/testme isDirectory:false FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false FileStatus of / isDirectory:true [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2 FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true FileStatus of /testme2 isDirectory:true <--- returns true [hdfs@c3121-node2 ~]$ {code} was: listStatus implementation in ViewFs and getFileStatus does not return consistent values for an element. 
{code:java} [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/" FileStatus of viewfs://c3121/testme21may isDirectory:false FileStatus of viewfs://c3121/tmp isDirectory:false FileStatus of viewfs://c3121/foo isDirectory:false FileStatus of viewfs://c3121/tmp21may isDirectory:false FileStatus of viewfs://c3121/testme isDirectory:false FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false FileStatus of / isDirectory:true [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2 FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true FileStatus of /testme2 isDirectory:true <--- returns true [hdfs@c3121-node2 ~]$ {code} > listStatus and getFileStatus behave inconsistently in the case of ViewFs > implementation > - > > Key: HDFS-15370 > URL: https://issues.apache.org/jira/browse/HDFS-15370 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0, 3.1.0 >Reporter: Srinivasu Majeti >Priority: Major > Labels: viewfs > > The listStatus and getFileStatus implementations in ViewFs do not return a > consistent isDirectory value for an element: listStatus reports isDirectory > as false for all symlinks, while getFileStatus reports it > as true. 
> {code:java} > [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop > classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/" > FileStatus of viewfs://c3121/testme21may isDirectory:false > FileStatus of viewfs://c3121/tmp isDirectory:false > FileStatus of viewfs://c3121/foo isDirectory:false > FileStatus of viewfs://c3121/tmp21may isDirectory:false > FileStatus of viewfs://c3121/testme isDirectory:false > FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false > FileStatus of / isDirectory:true > [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop > classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2 > FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false > FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true > FileStatus of /testme2 isDirectory:true <--- returns true > [hdfs@c3121-node2 ~]$ {code}
[jira] [Created] (HDFS-15370) listStatus and getFileStatus behave inconsistently in the case of ViewFs implementation
Srinivasu Majeti created HDFS-15370: --- Summary: listStatus and getFileStatus behave inconsistently in the case of ViewFs implementation Key: HDFS-15370 URL: https://issues.apache.org/jira/browse/HDFS-15370 Project: Hadoop HDFS Issue Type: Bug Components: hdfs Affects Versions: 3.1.0, 3.0.0 Reporter: Srinivasu Majeti The listStatus and getFileStatus implementations in ViewFs do not return consistent values for an element. {code:java} [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus "/" FileStatus of viewfs://c3121/testme21may isDirectory:false FileStatus of viewfs://c3121/tmp isDirectory:false FileStatus of viewfs://c3121/foo isDirectory:false FileStatus of viewfs://c3121/tmp21may isDirectory:false FileStatus of viewfs://c3121/testme isDirectory:false FileStatus of viewfs://c3121/testme2 isDirectory:false <--- returns false FileStatus of / isDirectory:true [hdfs@c3121-node2 ~]$ /usr/jdk64/jdk1.8.0_112/bin/java -cp `hadoop classpath`:./hdfs-append-1.0-SNAPSHOT.jar LauncherGetFileStatus /testme2 FileStatus of viewfs://c3121/testme2/dist-copynativelibs.sh isDirectory:false FileStatus of viewfs://c3121/testme2/newfolder isDirectory:true FileStatus of /testme2 isDirectory:true <--- returns true [hdfs@c3121-node2 ~]$ {code}
[jira] [Commented] (HDFS-15369) Refactor method VolumeScanner#runLoop()
[ https://issues.apache.org/jira/browse/HDFS-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113146#comment-17113146 ] Hadoop QA commented on HDFS-15369: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 59s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 27s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 14 unchanged - 1 fixed = 14 total (was 15) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 8s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}199m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLinkFallback | | | hadoop.fs.permission.TestStickyBit | | | hadoop.hdfs.web.TestWebHDFSAcl | | | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.fs.TestSymlinkHdfsFileSystem | | | hadoop.fs.viewfs.TestViewFileSystemWithXAttrs | | | hadoop.fs.viewfs.TestViewFileSystemHdfs | | | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme | | | hadoop.hdfs.web.TestWebHDFSXAttr | | | hadoop.hdfs.TestStripedFileAppend | | | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup | |
[jira] [Commented] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113081#comment-17113081 ] Hadoop QA commented on HDFS-15368: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 54s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 8s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}184m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestQuota | | | hadoop.hdfs.TestModTime | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDecommission | | | hadoop.hdfs.TestFileChecksumCompositeCrc | | | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot | | | hadoop.hdfs.TestRestartDFS | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.TestDFSRollback | | | hadoop.hdfs.TestFileAppend | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestWriteReadStripedFile | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | | hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy | | | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup | | | hadoop.hdfs.TestDistributedFileSystemWithECFile | | | hadoop.hdfs.client.impl.TestBlockReaderLocal | | | hadoop.hdfs.TestEncryptionZonesWithKMS | \\ \\ ||
[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113049#comment-17113049 ] Hadoop QA commented on HDFS-13183: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 23s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 53s{color} | {color:red} hadoop-hdfs in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29344/artifact/out/Dockerfile | | JIRA Issue | HDFS-13183 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13003566/HDFS-13183.addendum.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7a8ee7089793 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 1a3c6bb33b6 | | Default Java | Private
[jira] [Commented] (HDFS-11633) FSImage failover disables all erasure coding policies
[ https://issues.apache.org/jira/browse/HDFS-11633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112998#comment-17112998 ] Rungroj Maipradit commented on HDFS-11633: -- The 002 patch by [~weichiu] seems to have been prepared as a workaround for HDFS-7337. Now that HDFS-7337 has been resolved, will "enabledPoliciesByName.clear();" come back to the clear function? I also mentioned this issue in HDFS-15361. > FSImage failover disables all erasure coding policies > -- > > Key: HDFS-11633 > URL: https://issues.apache.org/jira/browse/HDFS-11633 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding, namenode >Affects Versions: 3.0.0-alpha4 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Critical > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-11633.001.patch, HDFS-11633.002.patch, > HDFS-11633.test.patch > > > If the NameNode fails to load the fsimage in the first NameNode metadata > directory, it accidentally clears all enabled erasure coding policies in > ErasureCodingPolicyManager. > Even if the NameNode configures multiple fsimage metadata directories and > successfully loads the second one, the enabled erasure coding policies are > not restored. > In the current implementation, we do not have an ErasureCodingPolicyManager > section in the fsimage, so an fsimage reload does not reload the ECPM. > The easiest fix, until an ECPM section is implemented, is to not clear the > ECPM when FSNamesystem is cleared.
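The shape of the fix described above can be sketched with a toy class. This is conceptual only; the class and method names are illustrative, not the real HDFS code:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of the HDFS-11633 fix: the enabled-policy map survives
// a namesystem clear, so a failed load of the first fsimage directory does
// not wipe the erasure coding policies before the second directory is tried.
public class PolicyManagerSketch {
    private final Map<String, String> enabledPoliciesByName = new HashMap<>();

    void enable(String name, String schema) {
        enabledPoliciesByName.put(name, schema);
    }

    // Called when the namesystem state is cleared; before the fix this also
    // ran enabledPoliciesByName.clear(), which caused the reported bug.
    void clearNamesystemState() {
        // intentionally does NOT touch enabledPoliciesByName
    }

    int enabledCount() {
        return enabledPoliciesByName.size();
    }

    // Scenario: enable a policy, clear the namesystem (simulating a failed
    // first-image load), and report how many policies remain enabled.
    static int countAfterClear() {
        PolicyManagerSketch m = new PolicyManagerSketch();
        m.enable("RS-6-3-1024k", "rs");
        m.clearNamesystemState();
        return m.enabledCount();
    }

    public static void main(String[] args) {
        System.out.println("policies enabled after clear: " + countAfterClear()); // 1
    }
}
```

Whether the real `enabledPoliciesByName.clear()` call should return now that HDFS-7337 is resolved is exactly the question raised in the comment above.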
[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112994#comment-17112994 ] Hadoop QA commented on HDFS-15098: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} prototool {color} | {color:blue} 0m 0s{color} | {color:blue} prototool was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 21m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 49s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 6s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 59s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 59s{color} | {color:red} root generated 31 new + 131 unchanged - 31 fixed = 162 total (was 162) {color} | | {color:green}+1{color} | {color:green} golang {color} | {color:green} 17m 59s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 59s{color} | {color:red} root generated 4 new + 1865 unchanged - 0 fixed = 1869 total (was 1865) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 57s{color} | {color:orange} root: The patch generated 16 new + 211 unchanged - 5 fixed = 227 total (was 216) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 53s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 9 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 36s{color} | {color:red} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 6s{color} | {color:red} The patch generated 5 ASF License warnings. {color}
[jira] [Updated] (HDFS-15369) Refactor method VolumeScanner#runLoop()
[ https://issues.apache.org/jira/browse/HDFS-15369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15369: Attachment: HDFS-15369.001.patch Status: Patch Available (was: Open) > Refactor method VolumeScanner#runLoop() > > > Key: HDFS-15369 > URL: https://issues.apache.org/jira/browse/HDFS-15369 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15369.001.patch > > > After HDFS-15207, the method VolumeScanner#runLoop() is quite long. Separate > out a new private method, getNextBlockToScan.
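The proposed extract-method refactor can be illustrated in miniature. This sketch is ours, not the actual patch or the real VolumeScanner code; it only shows the shape of pulling block selection out of the loop body:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative shape of the refactor: the block-selection logic that was
// inline in runLoop() moves into its own private method, so the loop body
// reads as "pick the next block, then scan it".
public class ScannerLoopSketch {
    private final Deque<String> suspectBlocks = new ArrayDeque<>();
    private final Deque<String> regularBlocks = new ArrayDeque<>();

    ScannerLoopSketch() {
        suspectBlocks.add("blk_suspect_1");
        regularBlocks.add("blk_2");
    }

    // Extracted helper: suspect blocks take priority over the regular cursor.
    private String getNextBlockToScan() {
        if (!suspectBlocks.isEmpty()) {
            return suspectBlocks.poll();
        }
        return regularBlocks.poll(); // null when nothing is left
    }

    // The loop body shrinks to a simple fetch-and-scan step.
    String runLoopOnce() {
        String block = getNextBlockToScan();
        return block == null ? "idle" : "scanned " + block;
    }

    public static void main(String[] args) {
        ScannerLoopSketch s = new ScannerLoopSketch();
        System.out.println(s.runLoopOnce()); // scanned blk_suspect_1
        System.out.println(s.runLoopOnce()); // scanned blk_2
        System.out.println(s.runLoopOnce()); // idle
    }
}
```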
[jira] [Created] (HDFS-15369) Refactor method VolumeScanner#runLoop()
Yang Yun created HDFS-15369: --- Summary: Refactor method VolumeScanner#runLoop() Key: HDFS-15369 URL: https://issues.apache.org/jira/browse/HDFS-15369 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Reporter: Yang Yun Assignee: Yang Yun After HDFS-15207, the method VolumeScanner#runLoop() is quite long. Separate out a new private method, getNextBlockToScan.
[jira] [Updated] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
[ https://issues.apache.org/jira/browse/HDFS-15368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15368: --- Attachment: HDFS-15368.001.patch Status: Patch Available (was: Open) Submitted a demo patch to try to trigger Yetus. > TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally > > > Key: HDFS-15368 > URL: https://issues.apache.org/jira/browse/HDFS-15368 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Labels: balancer, test > Attachments: HDFS-15368.001.patch > > > While working on HDFS-13183, I found that > TestBalancerWithHANameNodes#testBalancerWithObserver fails occasionally > because of the following code segment. Consider a cluster with 1 ANN + 1 SBN > + 2 ONNs: when #getBlocks is invoked with the Observer Read feature enabled, > the request can go to either of the two ObserverNNs, based on my observation. > So verifying only the first ObserverNN and checking its number of #getBlocks > invocations is not reliable. > {code:java} > for (int i = 0; i < cluster.getNumNameNodes(); i++) { > // First observer node is at idx 2, or 3 if 2 has been shut down > // It should get both getBlocks calls, all other NNs should see 0 > calls > int expectedObserverIdx = withObserverFailure ? 3 : 2; > int expectedCount = (i == expectedObserverIdx) ? 2 : 0; > verify(namesystemSpies.get(i), times(expectedCount)) > .getBlocks(any(), anyLong(), anyLong()); > } > {code} > cc [~xkrogen],[~weichiu]. I am not very familiar with the Observer Read > feature; would you like to give some suggestions?
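One way to make the flaky check robust, since either observer may serve the calls, is to sum the #getBlocks invocations over all observer indices instead of pinning them to one index. This is an illustrative, Mockito-free sketch with names of our own choosing, not the actual patch:

```java
import java.util.List;

// Illustrative only: instead of expecting exactly 2 #getBlocks calls on one
// fixed observer index, require that the observers together served the
// expected total and that no calls landed on non-observer nodes.
public class ObserverCallCheck {
    // callCounts.get(i) = times namenode i served #getBlocks;
    // firstObserverIdx = index of the first observer node (simplified: every
    // index at or past it is treated as an observer).
    static boolean servedOnlyByObservers(List<Integer> callCounts,
                                         int firstObserverIdx,
                                         int expectedTotal) {
        int observerTotal = 0;
        for (int i = 0; i < callCounts.size(); i++) {
            if (i >= firstObserverIdx) {
                observerTotal += callCounts.get(i);
            } else if (callCounts.get(i) != 0) {
                return false; // ANN/SBN should not serve these reads
            }
        }
        return observerTotal == expectedTotal;
    }

    public static void main(String[] args) {
        // 4 NNs: ANN, SBN, two observers; either observer may take the calls.
        System.out.println(servedOnlyByObservers(List.of(0, 0, 2, 0), 2, 2)); // true
        System.out.println(servedOnlyByObservers(List.of(0, 0, 0, 2), 2, 2)); // true
        System.out.println(servedOnlyByObservers(List.of(0, 2, 0, 0), 2, 2)); // false
    }
}
```

In the real test this would translate to summing the Mockito spy invocation counts across the observer namesystems before asserting.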
[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112893#comment-17112893 ] lindongdong commented on HDFS-15098: Hi [~zZtai], why are these 2 steps needed: 1. Download the Bouncy Castle Crypto APIs from bouncycastle.org [https://bouncycastle.org/download/bcprov-ext-jdk15on-165.jar] 2. Configure the JDK: place bcprov-ext-jdk15on-165.jar in the $JAVA_HOME/jre/lib/ext directory, and add "security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider" to the $JAVA_HOME/jre/lib/security/java.security file. IMO, openssl fully supports SM4, so we don't need the jar. > Add SM4 encryption method for HDFS > -- > > Key: HDFS-15098 > URL: https://issues.apache.org/jira/browse/HDFS-15098 > Project: Hadoop HDFS > Issue Type: New Feature >Affects Versions: 3.4.0 >Reporter: liusheng >Assignee: zZtai >Priority: Major > Labels: sm4 > Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, > HDFS-15098.003.patch, HDFS-15098.004.patch > > > SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard > for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure). > SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far > been rejected by ISO. One of the reasons for the rejection has been > opposition to the WAPI fast-track proposal by the IEEE. Please see: > [https://en.wikipedia.org/wiki/SM4_(cipher)] > > *Use SM4 on HDFS as follows:* > 1. Download the Bouncy Castle Crypto APIs from bouncycastle.org > [https://bouncycastle.org/download/bcprov-ext-jdk15on-165.jar] > 2. Configure the JDK: > place bcprov-ext-jdk15on-165.jar in the $JAVA_HOME/jre/lib/ext directory, and > add "security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider" > to the $JAVA_HOME/jre/lib/security/java.security file > 3. Configure Hadoop KMS > 4. Test HDFS SM4: > hadoop key create key1 -cipher 'SM4/CTR/NoPadding' > hdfs dfs -mkdir /benchmarks > hdfs crypto -createZone -keyName key1 -path /benchmarks > *Requires:* > 1. openssl version >= 1.1.1 > 2. Bouncy Castle Crypto configured on the JDK
[jira] [Created] (HDFS-15368) TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally
Xiaoqiao He created HDFS-15368: -- Summary: TestBalancerWithHANameNodes#testBalancerWithObserver failed occasionally Key: HDFS-15368 URL: https://issues.apache.org/jira/browse/HDFS-15368 Project: Hadoop HDFS Issue Type: Improvement Reporter: Xiaoqiao He Assignee: Xiaoqiao He While working on HDFS-13183, I found that TestBalancerWithHANameNodes#testBalancerWithObserver fails occasionally because of the following code segment. Consider a cluster with 1 ANN + 1 SBN + 2 ONNs: when getBlocks is invoked with the Observer Read feature enabled, the request can go to either of the two ObserverNNs, based on my observation. So verifying only the first ObserverNN and checking its number of #getBlocks invocations does not work as expected. {code:java} for (int i = 0; i < cluster.getNumNameNodes(); i++) { // First observer node is at idx 2, or 3 if 2 has been shut down // It should get both getBlocks calls, all other NNs should see 0 calls int expectedObserverIdx = withObserverFailure ? 3 : 2; int expectedCount = (i == expectedObserverIdx) ? 2 : 0; verify(namesystemSpies.get(i), times(expectedCount)) .getBlocks(any(), anyLong(), anyLong()); } {code} cc [~xkrogen],[~weichiu]. I am not very familiar with the Observer Read feature; could you give some suggestions?
[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112857#comment-17112857 ] Xiaoqiao He commented on HDFS-13183: After digging into BalancerWithObserver, the root cause of the failing unit test TestBalancerWithHANameNodes#testBalancerWithObserver is the verification of #getBlocks invocation counts in the following code segment. With the Observer Read feature enabled, it seems the first Observer NameNode is not requested every time: when two Observer NameNodes are alive, the request can go to either one, so the test has roughly a 50% chance of failing. IMO it is not related to these changes. I will file another JIRA to track it. {code:java} doTest(conf); for (int i = 0; i < cluster.getNumNameNodes(); i++) { // First observer node is at idx 2, or 3 if 2 has been shut down // It should get both getBlocks calls, all other NNs should see 0 calls int expectedObserverIdx = withObserverFailure ? 3 : 2; int expectedCount = (i == expectedObserverIdx) ? 2 : 0; verify(namesystemSpies.get(i), times(expectedCount)) .getBlocks(any(), anyLong(), anyLong()); } {code} I will trigger Yetus manually and check the result again. > Standby NameNode process getBlocks request to reduce Active load > > > Key: HDFS-13183 > URL: https://issues.apache.org/jira/browse/HDFS-13183 > Project: Hadoop HDFS > Issue Type: New Feature > Components: balancer mover, namenode >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.1, 3.4.0 > > Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, > HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch, > HDFS-13183.006.patch, HDFS-13183.007.patch, HDFS-13183.addendum.patch, > HDFS-13183.addendum.patch > > > The performance of the Active NameNode can be impacted when the {{Balancer}} requests > #getBlocks, since querying the blocks of overly-full DNs is currently extremely inefficient. > The main reason is that {{NameNodeRpcServer#getBlocks}} > holds the read lock for a long time. In the extreme case, all handlers of the Active > NameNode RPC server are occupied by one {{NameNodeRpcServer#getBlocks}} reader > and other write operation calls, and the Active > NameNode enters a state of false death for seconds or even minutes. > Similar Balancer performance concerns have been reported in HDFS-9412, > HDFS-7967, etc. > If the Standby NameNode can shoulder the heavy #getBlocks burden, it could speed up > balancing and reduce the performance impact on the Active NameNode.
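The core idea of the issue reduces to a routing choice: send the read-only #getBlocks traffic to a non-active NameNode whenever one is available, falling back to the Active otherwise. A hypothetical, much-simplified selector (these class and method names are illustrative and are not the actual HA proxy code):

```java
import java.util.List;

// Hypothetical sketch: prefer a Standby/Observer NameNode for the read-only
// getBlocks call so the Active NameNode's RPC handlers and namesystem read
// lock are not monopolized by the Balancer.
public class GetBlocksRouter {
    enum State { ACTIVE, STANDBY, OBSERVER }

    static final class NameNode {
        final String host;
        final State state;
        NameNode(String host, State state) {
            this.host = host;
            this.state = state;
        }
    }

    /** Pick a target: any non-active NN first, the Active as a fallback. */
    static NameNode chooseGetBlocksTarget(List<NameNode> nns) {
        for (NameNode nn : nns) {
            if (nn.state != State.ACTIVE) {
                return nn; // offload the read to a Standby/Observer
            }
        }
        for (NameNode nn : nns) {
            if (nn.state == State.ACTIVE) {
                return nn; // no Standby/Observer alive; use the Active
            }
        }
        throw new IllegalStateException("no NameNode available");
    }
}
```

The design trade-off is the usual one for stale reads: a Standby/Observer may lag slightly behind the Active, which is acceptable for the Balancer's block listing but not for every caller.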
[jira] [Commented] (HDFS-15346) RBF: Balance data across federation namespaces with DistCp and snapshot diff / Step 2: The DistCpFedBalance.
[ https://issues.apache.org/jira/browse/HDFS-15346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112855#comment-17112855 ] Jinglun commented on HDFS-15346: Hi [~linyiqun], thanks for the reminder. I will upload the patches this week. > RBF: Balance data across federation namespaces with DistCp and snapshot diff > / Step 2: The DistCpFedBalance. > > > Key: HDFS-15346 > URL: https://issues.apache.org/jira/browse/HDFS-15346 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > > The patch in HDFS-15294 is too big to review, so we split it into 2 patches. This > is the second one. Details can be found at HDFS-15294.
[jira] [Updated] (HDFS-15098) Add SM4 encryption method for HDFS
[ https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zZtai updated HDFS-15098: - Attachment: HDFS-15098.004.patch Status: Patch Available (was: Open) Fixed the checkstyle issues and the SM4 test case. > Add SM4 encryption method for HDFS > -- > > Key: HDFS-15098 > URL: https://issues.apache.org/jira/browse/HDFS-15098 > Project: Hadoop HDFS > Issue Type: New Feature >Affects Versions: 3.4.0 >Reporter: liusheng >Assignee: zZtai >Priority: Major > Labels: sm4 > Attachments: HDFS-15098.001.patch, HDFS-15098.002.patch, > HDFS-15098.003.patch, HDFS-15098.004.patch > > > SM4 (formerly SMS4) is a block cipher used in the Chinese National Standard > for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure). > SM4 was a cipher proposed for the IEEE 802.11i standard, but has so far > been rejected by ISO. One of the reasons for the rejection has been > opposition to the WAPI fast-track proposal by the IEEE. Please see: > [https://en.wikipedia.org/wiki/SM4_(cipher)] > > *Use SM4 on HDFS as follows:* > 1. Download the Bouncy Castle Crypto APIs from bouncycastle.org > [https://bouncycastle.org/download/bcprov-ext-jdk15on-165.jar] > 2. Configure the JDK: > place bcprov-ext-jdk15on-165.jar in the $JAVA_HOME/jre/lib/ext directory, and > add "security.provider.10=org.bouncycastle.jce.provider.BouncyCastleProvider" > to the $JAVA_HOME/jre/lib/security/java.security file > 3. Configure Hadoop KMS > 4. Test HDFS SM4: > hadoop key create key1 -cipher 'SM4/CTR/NoPadding' > hdfs dfs -mkdir /benchmarks > hdfs crypto -createZone -keyName key1 -path /benchmarks > *Requires:* > 1. openssl version >= 1.1.1 > 2. Bouncy Castle Crypto configured on the JDK