[jira] [Updated] (HDFS-15040) RBF: Secured Router should not run when SecretManager is not running
[ https://issues.apache.org/jira/browse/HDFS-15040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HDFS-15040:
------------------------------------
    Status: Patch Available  (was: Open)

> RBF: Secured Router should not run when SecretManager is not running
> --------------------------------------------------------------------
>
>                 Key: HDFS-15040
>                 URL: https://issues.apache.org/jira/browse/HDFS-15040
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>
> We have faced an issue where the Router keeps running while the SecretManager is not running. HDFS-14835 is a similar fix that checks whether the SecretManager is null or not, but it did not cover this case, so we also need to check the running status.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
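A minimal sketch of the guard HDFS-15040 describes: fail fast at startup when security is enabled and the SecretManager is either absent or not running. The class and method names here are illustrative stand-ins, not the actual Hadoop RBF code; HDFS-14835 is said to have added only the null check, and the extra condition below is the running-status check this issue proposes.

```java
// Illustrative sketch only -- RouterStartupGuard and its nested
// SecretManager interface are made-up names, not Hadoop classes.
public class RouterStartupGuard {

    /** Minimal stand-in for a delegation-token secret manager. */
    interface SecretManager {
        boolean isRunning();
    }

    /**
     * Rejects startup in a secured deployment when the SecretManager
     * is null (the HDFS-14835 check) or constructed but never started
     * (the additional check proposed in HDFS-15040).
     */
    public static void checkSecretManager(SecretManager sm,
                                          boolean securityEnabled) {
        if (!securityEnabled) {
            return; // insecure clusters may run without a SecretManager
        }
        if (sm == null) {
            throw new IllegalStateException("SecretManager is null");
        }
        if (!sm.isRunning()) {
            throw new IllegalStateException("SecretManager is not running");
        }
    }
}
```

With a guard like this, a misconfigured secured Router aborts at startup instead of silently serving requests without working delegation tokens.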
[jira] [Updated] (HDFS-15033) Support to save replica cached files to other place and make expired time configurable
[ https://issues.apache.org/jira/browse/HDFS-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Yun updated HDFS-15033:
----------------------------
    Description:
For a slow volume with many replicas, add an option to save the replica cache files to a high-speed disk to speed up the saving. Also add an option to change the expiry time of the replica cache file.

  was:
For a slow volume with many replicas, the ShutdownHook may be terminated before all replicas are saved. Add an option to save the replica cache files to a high-speed disk to speed up the saving. Also add an option to change the expiry time of the replica cache file.

> Support to save replica cached files to other place and make expired time configurable
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-15033
>                 URL: https://issues.apache.org/jira/browse/HDFS-15033
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Minor
>         Attachments: HDFS-15033.patch, HDFS-15033.patch, HDFS-15033.patch
>
> For a slow volume with many replicas, add an option to save the replica cache files to a high-speed disk to speed up the saving.
> Also add an option to change the expiry time of the replica cache file.
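The configurable-expiry half of HDFS-15033 boils down to a freshness check: a replica cache file written at shutdown is only trusted on restart if it is younger than a configurable limit. A tiny sketch of that policy, with made-up names (the actual patch and its configuration keys are not shown here):

```java
// Illustrative sketch only -- not the HDFS-15033 patch. The expiry value
// would come from a DataNode configuration key in the real proposal.
public class ReplicaCacheExpiry {
    private final long expiryMs;

    public ReplicaCacheExpiry(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    /**
     * A cached replica file is usable only while its age (now minus the
     * file's modification time) does not exceed the configured expiry.
     */
    public boolean isFresh(long fileMtimeMs, long nowMs) {
        return nowMs - fileMtimeMs <= expiryMs;
    }
}
```

A stale cache file would then be ignored and the DataNode would fall back to a full directory scan, trading startup time for correctness.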
[jira] [Created] (HDFS-15040) RBF: Secured Router should not run when SecretManager is not running
Takanobu Asanuma created HDFS-15040:
---------------------------------------

             Summary: RBF: Secured Router should not run when SecretManager is not running
                 Key: HDFS-15040
                 URL: https://issues.apache.org/jira/browse/HDFS-15040
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma

We have faced an issue where the Router keeps running while the SecretManager is not running. HDFS-14835 is a similar fix that checks whether the SecretManager is null or not, but it did not cover this case, so we also need to check the running status.
[jira] [Commented] (HDFS-15039) Cache meta file length of FinalizedReplica to reduce call File.length()
[ https://issues.apache.org/jira/browse/HDFS-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991172#comment-16991172 ]

Hadoop QA commented on HDFS-15039:
----------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|  0 | reexec | 0m 43s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 28s | trunk passed |
| +1 | compile | 1m 0s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 1m 4s | trunk passed |
| +1 | shadedclient | 14m 32s | branch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 13s | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warning. |
| +1 | javadoc | 1m 11s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 59s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| -0 | checkstyle | 0m 37s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 44 unchanged - 0 fixed = 47 total (was 44) |
| +1 | mvnsite | 0m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 13m 42s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 21s | the patch passed |
| +1 | javadoc | 1m 11s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 99m 21s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 161m 21s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
| | hadoop.hdfs.server.namenode.TestFsck |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-15039 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12988286/HDFS-15039.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 2b20d8e155b3 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8dffd8d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/28484/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/28484/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit |
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991159#comment-16991159 ]

hemanthboyina commented on HDFS-15038:
--------------------------------------

I have checked with HDFS-15009; it is not causing this issue. I will check the root cause and let you know. Thanks.

> TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
> -------------------------------------------------------------
>
>                 Key: HDFS-15038
>                 URL: https://issues.apache.org/jira/browse/HDFS-15038
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: hemanthboyina
>            Assignee: hemanthboyina
>            Priority: Major
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/]
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/]
[jira] [Commented] (HDFS-15038) TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
[ https://issues.apache.org/jira/browse/HDFS-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991130#comment-16991130 ]

Ayush Saxena commented on HDFS-15038:
-------------------------------------

Is it due to HDFS-15009? I planned to backport that to the lower branches, so if it is because of that, let me know; I haven't checked it myself. Also, is it a broken test or broken functionality? If it is not just the test, let me know and I will reopen HDFS-15009.

> TestFsck testFsckListCorruptSnapshotFiles is failing in trunk
> -------------------------------------------------------------
>
>                 Key: HDFS-15038
>                 URL: https://issues.apache.org/jira/browse/HDFS-15038
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: hemanthboyina
>            Assignee: hemanthboyina
>            Priority: Major
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/28481/testReport/]
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/28482/testReport/]
[jira] [Commented] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
[ https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991112#comment-16991112 ]

Xieming Li commented on HDFS-14983:
-----------------------------------

I think those errors are unrelated to my changes, except for the checkstyle one. The checkstyle error is generated because of:
https://issues.apache.org/jira/browse/HDFS-14983?focusedCommentId=16990369=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16990369

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-14983
>                 URL: https://issues.apache.org/jira/browse/HDFS-14983
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Akira Ajisaka
>            Assignee: Xieming Li
>            Priority: Minor
>         Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, HDFS-14983.draft.001.patch
>
> NameNode can update its proxyuser config via -refreshSuperUserGroupsConfiguration without restarting, but DFSRouter cannot. It would be better for DFSRouter to have the same functionality to be compatible with NameNode.
[jira] [Updated] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
[ https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xieming Li updated HDFS-14983:
------------------------------
    Status: Patch Available  (was: Open)

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-14983
>                 URL: https://issues.apache.org/jira/browse/HDFS-14983
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Akira Ajisaka
>            Assignee: Xieming Li
>            Priority: Minor
>         Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, HDFS-14983.draft.001.patch
>
> NameNode can update its proxyuser config via -refreshSuperUserGroupsConfiguration without restarting, but DFSRouter cannot. It would be better for DFSRouter to have the same functionality to be compatible with NameNode.
[jira] [Updated] (HDFS-14983) RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
[ https://issues.apache.org/jira/browse/HDFS-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xieming Li updated HDFS-14983:
------------------------------
    Status: Open  (was: Patch Available)

> RBF: Add dfsrouteradmin -refreshSuperUserGroupsConfiguration command option
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-14983
>                 URL: https://issues.apache.org/jira/browse/HDFS-14983
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Akira Ajisaka
>            Assignee: Xieming Li
>            Priority: Minor
>         Attachments: HDFS-14983.002.patch, HDFS-14983.003.patch, HDFS-14983.draft.001.patch
>
> NameNode can update its proxyuser config via -refreshSuperUserGroupsConfiguration without restarting, but DFSRouter cannot. It would be better for DFSRouter to have the same functionality to be compatible with NameNode.
[jira] [Commented] (HDFS-14546) Document block placement policies
[ https://issues.apache.org/jira/browse/HDFS-14546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991110#comment-16991110 ]

Amithsha commented on HDFS-14546:
---------------------------------

[~weichiu] FYI, removed the PR from the JIRA.

> Document block placement policies
> ---------------------------------
>
>                 Key: HDFS-14546
>                 URL: https://issues.apache.org/jira/browse/HDFS-14546
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Íñigo Goiri
>            Assignee: Amithsha
>            Priority: Major
>              Labels: documentation
>         Attachments: HDFS-14546-01.patch, HDFS-14546-02.patch, HDFS-14546-03.patch, HDFS-14546-04.patch, HDFS-14546-05.patch, HDFS-14546-06.patch, HDFS-14546-07.patch, HdfsDesign.patch
>
> Currently, all the documentation refers to the default block placement policy.
> However, over time there have been new policies:
> * BlockPlacementPolicyRackFaultTolerant (HDFS-7891)
> * BlockPlacementPolicyWithNodeGroup (HDFS-3601)
> * BlockPlacementPolicyWithUpgradeDomain (HDFS-9006)
> We should update the documentation to refer to them, explaining their particularities and probably how to set up each one of them.
[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991108#comment-16991108 ]

hadoop_hdfs_hw edited comment on HDFS-15025 at 12/9/19 3:42 AM:
---------------------------------------------------------------

Thanks for your attention. We will implement the feature later and upload the code.

  was (Author: wangyayun):
thanks for your attentions, we will implement the feature later, and upload the codes

> Applying NVDIMM storage media to HDFS
> -------------------------------------
>
>                 Key: HDFS-15025
>                 URL: https://issues.apache.org/jira/browse/HDFS-15025
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, hdfs
>            Reporter: hadoop_hdfs_hw
>            Priority: Major
>         Attachments: Applying NVDIMM to HDFS.pdf
>
> The non-volatile memory NVDIMM is faster than SSD and can be used alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves the response rate of HDFS but also ensures the reliability of the data.
[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16991108#comment-16991108 ]

hadoop_hdfs_hw commented on HDFS-15025:
---------------------------------------

Thanks for your attention. We will implement the feature later and upload the code.

> Applying NVDIMM storage media to HDFS
> -------------------------------------
>
>                 Key: HDFS-15025
>                 URL: https://issues.apache.org/jira/browse/HDFS-15025
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, hdfs
>            Reporter: hadoop_hdfs_hw
>            Priority: Major
>         Attachments: Applying NVDIMM to HDFS.pdf
>
> The non-volatile memory NVDIMM is faster than SSD and can be used alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves the response rate of HDFS but also ensures the reliability of the data.
[jira] [Updated] (HDFS-15039) Cache meta file length of FinalizedReplica to reduce call File.length()
[ https://issues.apache.org/jira/browse/HDFS-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Yun updated HDFS-15039:
----------------------------
    Attachment: HDFS-15039.patch
        Status: Patch Available  (was: Open)

> Cache meta file length of FinalizedReplica to reduce call File.length()
> -----------------------------------------------------------------------
>
>                 Key: HDFS-15039
>                 URL: https://issues.apache.org/jira/browse/HDFS-15039
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Yang Yun
>            Assignee: Yang Yun
>            Priority: Minor
>         Attachments: HDFS-15039.patch
>
> When using ReplicaCachingGetSpaceUsed to get the volume space used, it calls File.length() for every replica meta file. That adds more disk IO; we found the slow log below. For a finalized replica, the size of the meta file does not change, so I think we can cache the value.
> {code:java}
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
> Refresh dfs used, bpid: BP-898717543-10.75.1.240-1519386995727 replicas
> size: 1166 dfsUsed: 72227113183 on volume:
> DS-3add8d62-d69a-4f5a-a29f-b7bbb400af2e duration: 17206ms{code}
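The caching idea in HDFS-15039 can be sketched in a few lines: read the meta-file length from disk once and reuse it, which is safe for a finalized replica because its meta file is immutable. This is an illustrative stand-in, not the attached HDFS-15039.patch; `CachedMetaLength` is a made-up name.

```java
import java.io.File;

// Illustrative sketch only -- not the actual HDFS-15039 patch.
// Caches the result of File.length() for a finalized replica's meta
// file, so repeated space-usage refreshes cost no extra disk IO.
public class CachedMetaLength {
    private static final long UNSET = -1L;

    private final File metaFile;
    private long cachedLength = UNSET;

    public CachedMetaLength(File metaFile) {
        this.metaFile = metaFile;
    }

    /**
     * First call reads the length from disk; subsequent calls return the
     * cached value. Valid only while the meta file is immutable, which
     * holds for finalized replicas.
     */
    public long length() {
        if (cachedLength == UNSET) {
            cachedLength = metaFile.length();
        }
        return cachedLength;
    }
}
```

With ~1166 replicas per volume, as in the slow log above, this turns roughly a thousand `File.length()` calls per refresh into a thousand in-memory field reads.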
[jira] [Created] (HDFS-15039) Cache meta file length of FinalizedReplica to reduce call File.length()
Yang Yun created HDFS-15039:
-------------------------------

             Summary: Cache meta file length of FinalizedReplica to reduce call File.length()
                 Key: HDFS-15039
                 URL: https://issues.apache.org/jira/browse/HDFS-15039
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
            Reporter: Yang Yun
            Assignee: Yang Yun

When using ReplicaCachingGetSpaceUsed to get the volume space used, it calls File.length() for every replica meta file. That adds more disk IO; we found the slow log below. For a finalized replica, the size of the meta file does not change, so I think we can cache the value.

{code:java}
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
Refresh dfs used, bpid: BP-898717543-10.75.1.240-1519386995727 replicas
size: 1166 dfsUsed: 72227113183 on volume:
DS-3add8d62-d69a-4f5a-a29f-b7bbb400af2e duration: 17206ms{code}