[jira] [Commented] (HDFS-15478) When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.
[ https://issues.apache.org/jira/browse/HDFS-15478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162457#comment-17162457 ]

Hadoop QA commented on HDFS-15478:
----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 2m 23s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| 0 | markdownlint | 0m 1s | markdownlint was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 21s | Maven dependency ordering for branch |
| +1 | mvninstall | 27m 7s | trunk passed |
| +1 | compile | 28m 55s | trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 | compile | 23m 55s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | checkstyle | 3m 30s | trunk passed |
| +1 | mvnsite | 3m 53s | trunk passed |
| +1 | shadedclient | 25m 4s | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 38s | hadoop-common in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| -1 | javadoc | 0m 43s | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04. |
| +1 | javadoc | 1m 41s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 0 | spotbugs | 3m 15s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 2m 12s | hadoop-common-project/hadoop-common in trunk has 2 extant findbugs warnings. |
| -1 | findbugs | 3m 13s | hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant findbugs warnings. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 53s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 3s | the patch passed |
| +1 | compile | 20m 10s | the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 |
| +1 | javac | 20m 10s | the patch passed |
| +1 | compile | 17m 39s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 | javac | 17m 39s | the patch passed |
| +1 | checkstyle | 2m 50s | the patch passed |
| +1 | mvnsite | 2m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 27s | patch has no errors when building and testing our client artifacts. |
| -1 |
[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162452#comment-17162452 ]

huangtianhua commented on HDFS-15025:
-------------------------------------

[~liuml07], thanks, and the failed tests are not related to this feature :)

> Applying NVDIMM storage media to HDFS
> -------------------------------------
>
>                 Key: HDFS-15025
>                 URL: https://issues.apache.org/jira/browse/HDFS-15025
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, hdfs
>            Reporter: hadoop_hdfs_hw
>            Priority: Major
>         Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
> The non-volatile memory NVDIMM is faster than SSD and can be used simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves the response rate of HDFS but also ensures the reliability of the data.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162285#comment-17162285 ]

Mingliang Liu commented on HDFS-15025:
--------------------------------------

Good feature! Are the failing tests related? I plan to review later this month if it is not already merged. Thanks.
[jira] [Commented] (HDFS-12200) Optimize CachedDNSToSwitchMapping to avoid 100% cpu utilization
[ https://issues.apache.org/jira/browse/HDFS-12200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162200#comment-17162200 ]

Hadoop QA commented on HDFS-12200:
----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 45s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 3m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 42s | trunk passed |
| +1 | compile | 17m 53s | trunk passed |
| +1 | checkstyle | 2m 53s | trunk passed |
| +1 | mvnsite | 3m 1s | trunk passed |
| +1 | shadedclient | 20m 14s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 58s | trunk passed |
| 0 | spotbugs | 3m 16s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 2m 23s | hadoop-common-project/hadoop-common in trunk has 2 extant findbugs warnings. |
| -1 | findbugs | 3m 13s | hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant findbugs warnings. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 4s | the patch passed |
| +1 | compile | 17m 47s | the patch passed |
| +1 | javac | 17m 47s | the patch passed |
| +1 | checkstyle | 2m 50s | the patch passed |
| +1 | mvnsite | 2m 55s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| -1 | shadedclient | 14m 0s | patch has errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 1s | the patch passed |
| +1 | findbugs | 5m 54s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 39s | hadoop-common in the patch passed. |
| -1 | unit | 96m 55s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 1m 5s | The patch does not generate ASF License warnings. |
| | | 235m 4s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| |
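HDFS-12200 above targets the cached rack-resolution wrapper burning CPU when the same hosts are resolved over and over. As a hedged sketch of the caching pattern involved (this is not Hadoop's actual CachedDNSToSwitchMapping code; the class and method names here are hypothetical), each host is resolved at most once and repeats are served from a concurrent map:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Hypothetical illustration of a cached DNS-to-switch mapping: the
 * expensive resolver runs only on a cache miss, so repeated lookups
 * for the same hosts cannot spin the resolver at 100% CPU.
 */
public class CachedMappingSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> rawResolver;

    public CachedMappingSketch(Function<String, String> rawResolver) {
        this.rawResolver = rawResolver;
    }

    /** Resolves each host to a rack, consulting the cache first. */
    public List<String> resolve(List<String> hosts) {
        List<String> racks = new ArrayList<>(hosts.size());
        for (String host : hosts) {
            // computeIfAbsent invokes the resolver only on a miss.
            racks.add(cache.computeIfAbsent(host, rawResolver));
        }
        return racks;
    }
}
```

The design point the issue title hints at: without the map, every call pays the full resolution cost again, which is what shows up as sustained CPU load.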
[jira] [Commented] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162155#comment-17162155 ]

Hudson commented on HDFS-15246:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18460 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18460/])
HDFS-15246. ArrayIndexOfboundsException in BlockManager (inigoiri: rev 8b7695bb2628574b4450bac19c12b29db9ee0628)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java

> ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
> --------------------------------------------------------------
>
>                 Key: HDFS-15246
>                 URL: https://issues.apache.org/jira/browse/HDFS-15246
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: HDFS-15246-testrepro.patch, HDFS-15246.001.patch, HDFS-15246.002.patch, HDFS-15246.003.patch
>
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1362)
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1501)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:179)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2047)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:770)
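The stack trace above shows a computed replica index being dereferenced after it has fallen out of sync with the storage array ("Index 1 out of bounds for length 1"). As a hedged illustration of the defensive pattern this class of bug calls for (hypothetical names, not the actual BlockManager code), the index is validated against the array's real length before the access:

```java
/**
 * Hypothetical sketch of a bounds guard: validate a computed replica
 * index against the storage array's actual length before dereferencing,
 * instead of trusting a count that may be stale after a truncate or
 * rename, and fail with a descriptive message rather than an
 * ArrayIndexOutOfBoundsException.
 */
public class LocatedBlockIndexSketch {
    public static String storageAt(String[] storages, int index) {
        if (index < 0 || index >= storages.length) {
            throw new IllegalStateException(
                "replica index " + index + " out of range for "
                + storages.length + " storage(s)");
        }
        return storages[index];
    }
}
```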
[jira] [Commented] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162134#comment-17162134 ]

Íñigo Goiri commented on HDFS-15246:
------------------------------------

Thanks [~hemanthboyina] for the contribution. Committed to trunk.
[jira] [Updated] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-15246:
-------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)
[jira] [Updated] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-15246:
-------------------------------
    Description:
java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1362)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1501)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:179)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2047)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:770)

  was:
java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:1362)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1501)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:179)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2047)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:770)
[jira] [Commented] (HDFS-15246) ArrayIndexOfboundsException in BlockManager CreateLocatedBlock
[ https://issues.apache.org/jira/browse/HDFS-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162128#comment-17162128 ]

Íñigo Goiri commented on HDFS-15246:
------------------------------------

+1 on [^HDFS-15246.003.patch]. Committing soon.
[jira] [Assigned] (HDFS-15483) Ordered snapshot deletion: Disallow rename between two snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee reassigned HDFS-15483:
------------------------------------------
    Assignee: Shashikant Banerjee

> Ordered snapshot deletion: Disallow rename between two snapshottable directories
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15483
>                 URL: https://issues.apache.org/jira/browse/HDFS-15483
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>            Reporter: Tsz-wo Sze
>            Assignee: Shashikant Banerjee
>            Priority: Major
>
> With the ordered snapshot deletion feature, only the *earliest* snapshot can actually be deleted from the file system. If renaming between snapshottable directories were allowed, only the earliest snapshot among all the snapshottable directories could actually be deleted; in that case, an individual snapshottable directory may not be able to free up resources by itself. Therefore, we propose disallowing renames between snapshottable directories in this JIRA.
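The rule proposed above can be sketched as a guard that rejects a rename whenever the source and destination resolve to different snapshottable roots. This is a hedged illustration only: roots are modeled as plain path prefixes, not real INodes, and none of these names come from the actual HDFS patch.

```java
import java.util.List;

/**
 * Hypothetical sketch of the proposed rename restriction: with ordered
 * snapshot deletion, a rename is rejected when source and destination
 * fall under different snapshottable roots, since data moved across
 * roots could only be reclaimed once the earliest snapshot of every
 * involved root is gone.
 */
public class RenameGuardSketch {
    /** Returns the snapshottable root covering the path, or null. */
    public static String rootOf(String path, List<String> snapshottableRoots) {
        for (String root : snapshottableRoots) {
            if (path.equals(root) || path.startsWith(root + "/")) {
                return root;
            }
        }
        return null;
    }

    /** True when the rename does not cross snapshottable roots. */
    public static boolean renameAllowed(String src, String dst,
                                        List<String> roots) {
        String srcRoot = rootOf(src, roots);
        String dstRoot = rootOf(dst, roots);
        // Disallow only when the paths resolve to two different roots.
        return srcRoot == null || dstRoot == null || srcRoot.equals(dstRoot);
    }
}
```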
[jira] [Assigned] (HDFS-15482) Ordered snapshot deletion: hide the deleted snapshots from users
[ https://issues.apache.org/jira/browse/HDFS-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee reassigned HDFS-15482:
------------------------------------------
    Assignee: Shashikant Banerjee

> Ordered snapshot deletion: hide the deleted snapshots from users
> ----------------------------------------------------------------
>
>                 Key: HDFS-15482
>                 URL: https://issues.apache.org/jira/browse/HDFS-15482
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>            Reporter: Tsz-wo Sze
>            Assignee: Shashikant Banerjee
>            Priority: Major
>
> In HDFS-15480, the behavior of deleting the non-earliest snapshots was changed to marking them as deleted in an XAttr rather than actually deleting them, so users are still able to access these snapshots as usual. In this JIRA, the marked-for-deletion snapshots are hidden so that they become inaccessible to users.
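The visibility change described above amounts to filtering marked-for-deletion snapshots out of user-facing listings. The following is a hedged, self-contained sketch: a boolean flag stands in for the XAttr from HDFS-15480, and the class and method names are hypothetical, not HDFS code.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Hypothetical sketch of hiding deleted snapshots: each snapshot
 * carries a marked-for-deletion flag (standing in for the XAttr),
 * and listing filters flagged entries out so they become invisible
 * to users while still existing until ordered deletion reaches them.
 */
public class SnapshotListingSketch {
    // snapshot name -> marked-for-deletion flag (insertion order kept)
    private final Map<String, Boolean> snapshots = new LinkedHashMap<>();

    public void addSnapshot(String name) {
        snapshots.put(name, false);
    }

    public void markDeleted(String name) {
        snapshots.put(name, true);
    }

    /** Only snapshots not marked for deletion are visible. */
    public List<String> listVisible() {
        return snapshots.entrySet().stream()
            .filter(e -> !e.getValue())
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }
}
```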
[jira] [Commented] (HDFS-15484) Add option in enum Rename to support batch rename
[ https://issues.apache.org/jira/browse/HDFS-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162050#comment-17162050 ]

Hadoop QA commented on HDFS-15484:
----------------------------------

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 35m 8s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| 0 | prototool | 0m 0s | prototool was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 14s | Maven dependency ordering for branch |
| +1 | mvninstall | 24m 27s | trunk passed |
| +1 | compile | 21m 10s | trunk passed |
| +1 | checkstyle | 3m 22s | trunk passed |
| +1 | mvnsite | 4m 18s | trunk passed |
| +1 | shadedclient | 24m 47s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 3m 8s | trunk passed |
| 0 | spotbugs | 3m 44s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 2m 50s | hadoop-common-project/hadoop-common in trunk has 2 extant findbugs warnings. |
| -1 | findbugs | 3m 42s | hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant findbugs warnings. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 30s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 11s | the patch passed |
| +1 | compile | 20m 2s | the patch passed |
| -1 | cc | 20m 2s | root generated 23 new + 139 unchanged - 23 fixed = 162 total (was 162) |
| +1 | javac | 20m 2s | the patch passed |
| -0 | checkstyle | 3m 16s | root: The patch generated 7 new + 352 unchanged - 1 fixed = 359 total (was 353) |
| +1 | mvnsite | 4m 20s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 49s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 2m 35s | the patch passed |
| +1 | findbugs | 9m 38s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 10m 43s | hadoop-common in the patch failed. |
| +1 | unit | 2m 18s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 124m 16s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 1m 18s |
[jira] [Commented] (HDFS-15472) Erasure Coding: Support fallback read when zero copy is not supported
[ https://issues.apache.org/jira/browse/HDFS-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162045#comment-17162045 ]

dzcxzl commented on HDFS-15472:
-------------------------------

I moved the logic of the fallback read to DFSStripedInputStream and added a unit test for it.

> Erasure Coding: Support fallback read when zero copy is not supported
> ---------------------------------------------------------------------
>
>                 Key: HDFS-15472
>                 URL: https://issues.apache.org/jira/browse/HDFS-15472
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: dzcxzl
>            Priority: Trivial
>         Attachments: HDFS-15472.000.patch, HDFS-15472.001.patch
>
> [HDFS-8203|https://issues.apache.org/jira/browse/HDFS-8203]
> Erasure coding does not support zero-copy read, but it also does not currently fall back to a normal read; instead it throws an exception:
> {code:java}
> Caused by: java.lang.UnsupportedOperationException: Not support enhanced byte buffer access.
>   at org.apache.hadoop.hdfs.DFSStripedInputStream.read(DFSStripedInputStream.java:524)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:188)
>   at org.apache.hadoop.hive.shims.ZeroCopyShims$ZeroCopyAdapter.readBuffer(ZeroCopyShims.java:79)
> {code}
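The fallback this patch describes can be sketched as: try the zero-copy path, and on UnsupportedOperationException read through an ordinary heap buffer instead of surfacing the exception. This is a hedged illustration; the ZeroCopyStream interface and all names below are hypothetical stand-ins, not Hadoop's actual API.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Hypothetical sketch of a zero-copy read with fallback: when the
 * stream rejects enhanced byte-buffer access, copy through a normal
 * heap buffer rather than throwing to the caller.
 */
public class ZeroCopyFallbackSketch {
    /** Minimal stand-in for a stream offering both read paths. */
    public interface ZeroCopyStream {
        ByteBuffer readZeroCopy(int maxLength) throws IOException;
        int read(ByteBuffer buf) throws IOException;
    }

    public static ByteBuffer readWithFallback(ZeroCopyStream in, int maxLength)
            throws IOException {
        try {
            return in.readZeroCopy(maxLength);  // fast path
        } catch (UnsupportedOperationException e) {
            // Fallback: ordinary buffered read into a heap buffer.
            ByteBuffer buf = ByteBuffer.allocate(maxLength);
            int n = in.read(buf);
            if (n < 0) {
                return null;  // EOF, mirroring the zero-copy convention
            }
            buf.flip();  // prepare the filled region for the caller
            return buf;
        }
    }
}
```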
[jira] [Updated] (HDFS-15472) Erasure Coding: Support fallback read when zero copy is not supported
[ https://issues.apache.org/jira/browse/HDFS-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dzcxzl updated HDFS-15472:
--------------------------
    Attachment: HDFS-15472.001.patch
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162043#comment-17162043 ] Hadoop QA commented on HDFS-15480: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 24m 59s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 24s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 4 extant findbugs warnings. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 87 unchanged - 0 fixed = 91 total (was 87) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 1s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}220m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier | | | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29537/artifact/out/Dockerfile | | JIRA Issue | HDFS-15480 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008070/HDFS-15480.001.patch | | Optional Tests | dupname
[jira] [Updated] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete
[ https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HDFS-15313: -- Summary: Ensure inodes in active filesystem are not deleted during snapshot delete (was: Ensure inodes in active filesytem are not deleted during snapshot delete) > Ensure inodes in active filesystem are not deleted during snapshot delete > - > > Key: HDFS-15313 > URL: https://issues.apache.org/jira/browse/HDFS-15313 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5 > > Attachments: HDFS-15313-branch-3.1.001.patch, HDFS-15313.000.patch, > HDFS-15313.001.patch, HDFS-15313.branch-2.10.001.patch, > HDFS-15313.branch-2.10.patch, HDFS-15313.branch-2.8.patch > > > After HDFS-13101, it was observed in one of our customer deployments that > deleting a snapshot ends up cleaning up inodes from the active fs that are > referred to from only one snapshot, since the isLastReference() check for the > parent dir introduced in HDFS-13101 may return true in certain cases. The aim > of this Jira is to add a check to ensure that inodes referred to in the > active fs do not get deleted while a snapshot deletion happens. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161951#comment-17161951 ] Shashikant Banerjee commented on HDFS-15480: Thanks [~umamaheswararao]/[~msingh] for the review. The review comments are addressed here: [https://github.com/apache/hadoop/pull/2163] > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161935#comment-17161935 ] Uma Maheswara Rao G commented on HDFS-15480: {quote} String SNAPSHOT_XATTR_NAME = "snapshot"; {quote} The XAttr name must be prefixed with its namespace type. Since you will use this one for the system namespace only, please prefix it with "system.". For consistency, you may want to name it "system.hdfs.snapshot" or "system.hdfs.snapshots.to.delete"? > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
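[Editorial note: the review comment above hinges on HDFS's rule that every XAttr name carries a namespace prefix (user., trusted., security., system., or raw.), with "system." reserved for internal use. The sketch below is illustrative only, assuming the hypothetical helper XAttrNameCheck and the candidate constant name from the comment; it is not the patch's actual code.]

```java
// Hypothetical sketch: validate that an XAttr name carries one of the five
// namespace prefixes defined by HDFS. "system." XAttrs are internal-only.
public class XAttrNameCheck {
    // The five XAttr namespaces HDFS defines.
    private static final String[] NAMESPACES =
        {"user.", "trusted.", "security.", "system.", "raw."};

    // Candidate constant following the review comment; the name is illustrative.
    static final String SNAPSHOT_XATTR_NAME = "system.hdfs.snapshot";

    static boolean hasNamespacePrefix(String name) {
        for (String ns : NAMESPACES) {
            if (name.startsWith(ns)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A bare "snapshot" has no namespace and would be rejected by HDFS.
        System.out.println(hasNamespacePrefix("snapshot"));          // false
        System.out.println(hasNamespacePrefix(SNAPSHOT_XATTR_NAME)); // true
    }
}
```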
[jira] [Created] (HDFS-15487) ScriptBasedMapping leads to 100% CPU utilization
Ryan Wu created HDFS-15487: -- Summary: ScriptBasedMapping leads to 100% CPU utilization Key: HDFS-15487 URL: https://issues.apache.org/jira/browse/HDFS-15487 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ryan Wu We found that NameNode CPU utilization sometimes reaches 90%, causing the NameNode to hang. The reason is that Flink apps on Kubernetes access HDFS at the same time, but their IPs and host names are not fixed, so the topology script is run for all of them at the same time. From the jstack file, we also found that several hundred Python processes had been started. {code:java} "process reaper" #36159 daemon prio=10 os_prio=0 tid=0x7fa7a33fa7a0 nid=0xa3cb waiting on condition [0x7fa7a61dc000] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x7fb4094a0398> (a java.util.concurrent.SynchronousQueue$TransferStack) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code}
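[Editorial note: the failure mode above is that each topology lookup forks the configured script as a child process, and Hadoop's per-host caching (CachedDNSToSwitchMapping, which ScriptBasedMapping builds on) is defeated when every Kubernetes pod shows up with a fresh IP. The following is a minimal, hypothetical sketch of the memoization idea only, with a plain Function standing in for the script fork; it is not Hadoop's implementation.]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch: memoize topology lookups so each distinct host forks
// the (expensive) topology script at most once. Ephemeral pod IPs still
// defeat such a cache, since every pod appears as a brand-new host.
public class MemoizedTopologyResolver {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> scriptRunner; // stands in for the script fork

    public MemoizedTopologyResolver(Function<String, String> scriptRunner) {
        this.scriptRunner = scriptRunner;
    }

    public String resolve(String host) {
        // Runs the script only on a cache miss.
        return cache.computeIfAbsent(host, scriptRunner);
    }

    public static void main(String[] args) {
        int[] forks = {0};
        MemoizedTopologyResolver r = new MemoizedTopologyResolver(h -> {
            forks[0]++;                 // each call models one forked process
            return "/default-rack";
        });
        r.resolve("10.0.0.1");
        r.resolve("10.0.0.1");          // served from cache, no new fork
        System.out.println(forks[0]);   // 1
    }
}
```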
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161927#comment-17161927 ] Mukul Kumar Singh commented on HDFS-15480: -- Thanks for the patch [~shashikant]. Some comments follow. a) FSDirSnapshotOp:280, let's use the FSDirXAttrOp.unprotectedSetXAttrs API and assert that the snapshot XAttr is only set on the snapshot root. b) TestOrderedSnapshotDeletion:105, let's also assert that the value should be null/non-existent. c) TestOrderedSnapshotDeletion:152, this can use the same name-creation API as in FSDirSnapshotOp, which will keep the name in the test consistent. > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
[jira] [Commented] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161910#comment-17161910 ] Shashikant Banerjee commented on HDFS-15480: Thanks [~szetszwo] for the review comments. Patch v1 addresses the comments along with [~msingh]'s comments here: [https://github.com/apache/hadoop/pull/2156] > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
[jira] [Updated] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15480: --- Status: Patch Available (was: Open) > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
[jira] [Updated] (HDFS-15480) Ordered snapshot deletion: record snapshot deletion in XAttr
[ https://issues.apache.org/jira/browse/HDFS-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15480: --- Attachment: HDFS-15480.001.patch > Ordered snapshot deletion: record snapshot deletion in XAttr > > > Key: HDFS-15480 > URL: https://issues.apache.org/jira/browse/HDFS-15480 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15480.000.patch, HDFS-15480.001.patch > > > In this JIRA, the behavior of deleting the non-earliest snapshots will be > changed to marking them as deleted in XAttr but not actually deleting them. > Note that > # The marked-for-deletion snapshots will be garbage collected later on; see > HDFS-15481. > # The marked-for-deletion snapshots will be hidden from users; see HDFS-15482.
[jira] [Updated] (HDFS-15486) Costly sendResponse operation slows down async editlog handling
[ https://issues.apache.org/jira/browse/HDFS-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-15486: - Description: When our cluster NameNode is under very high load, we find it often gets stuck in async-editlog handling. We used the async-profiler tool to get the flamegraph. !Async-profile-(2).jpg! This happens when the async editlog thread consumes an Edit from the queue and triggers the sendResponse call. But here the sendResponse call is a little expensive, since our cluster enables the security env and does some encode operations when returning the response. We often catch moments of costly sendResponse operations when the RPC call queue is full. !async-profile-(1).jpg! Slowness in consuming Edits in the async editlog will make the Edit pending queue easily become full, then block its enqueue operation, which is invoked in writeLock-type methods in the FSNamesystem class. The enhancement here is that we can use multiple threads to execute the sendResponse call in parallel. sendResponse doesn't need the write lock for protection, so this change is safe. was: When our cluster NameNode is under very high load, we find it often gets stuck in async-editlog handling. We used the async-profiler tool to get the flamegraph. !Async-profile-(2).jpg! This happens when the async editlog thread consumes an Edit from the queue and triggers the sendResponse call. But here the sendResponse call is a little expensive, since our cluster enables the security env and does some encode operations when returning the response. We often catch moments of costly sendResponse operations when the RPC call queue is full. !async-profile-(1).jpg! Slowness in consuming Edits in the async editlog will leave the Edit pending queue in the full state, then block its enqueue operation, which is invoked in writeLock-type methods in the FSNamesystem class. The enhancement here is that we can use multiple threads to execute the sendResponse call in parallel. sendResponse doesn't need the write lock for protection, so this change is safe. > Costly sendResponse operation slows down async editlog handling > --- > > Key: HDFS-15486 > URL: https://issues.apache.org/jira/browse/HDFS-15486 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Yiqun Lin >Priority: Major > Attachments: Async-profile-(2).jpg, async-profile-(1).jpg > > > When our cluster NameNode is under very high load, we find it often gets stuck > in async-editlog handling. > We used the async-profiler tool to get the flamegraph. > !Async-profile-(2).jpg! > This happens when the async editlog thread consumes an Edit from the queue and > triggers the sendResponse call. > But here the sendResponse call is a little expensive, since our cluster > enables the security env and does some encode operations when returning the > response. > We often catch moments of costly sendResponse operations when the RPC call > queue is full. > !async-profile-(1).jpg! > Slowness in consuming Edits in the async editlog will make the Edit pending > queue easily become full, then block its enqueue operation, which is invoked > in writeLock-type methods in the FSNamesystem class. > The enhancement here is that we can use multiple threads to execute the > sendResponse call in parallel. sendResponse doesn't need the write lock for > protection, so this change is safe.
[jira] [Created] (HDFS-15486) Costly sendResponse operation slows down async editlog handling
Yiqun Lin created HDFS-15486: Summary: Costly sendResponse operation slows down async editlog handling Key: HDFS-15486 URL: https://issues.apache.org/jira/browse/HDFS-15486 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Yiqun Lin Attachments: Async-profile-(2).jpg, async-profile-(1).jpg When our cluster NameNode is under very high load, we find it often gets stuck in async-editlog handling. We used the async-profiler tool to get the flamegraph. !Async-profile-(2).jpg! This happens when the async editlog thread consumes an Edit from the queue and triggers the sendResponse call. But here the sendResponse call is a little expensive, since our cluster enables the security env and does some encode operations when returning the response. We often catch moments of costly sendResponse operations when the RPC call queue is full. !async-profile-(1).jpg! Slowness in consuming Edits in the async editlog will leave the Edit pending queue in the full state, then block its enqueue operation, which is invoked in writeLock-type methods in the FSNamesystem class. The enhancement here is that we can use multiple threads to execute the sendResponse call in parallel. sendResponse doesn't need the write lock for protection, so this change is safe.
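[Editorial note: the enhancement proposed above can be sketched with plain java.util.concurrent primitives. This is an illustrative toy, not the NameNode's actual code: the hypothetical AsyncResponder hands each ready response to a small pool so the single async-editlog thread never blocks on an expensive sendResponse (e.g. SASL-encrypting the reply); the pool size of 4 is an arbitrary guess.]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the proposed enhancement: offload sendResponse
// calls to a thread pool instead of running them on the editlog thread.
public class AsyncResponder {
    private final ExecutorService pool =
        Executors.newFixedThreadPool(4);   // pool size is a tunable guess

    // 'sendResponse' stands in for replying to one completed RPC call;
    // per the description, it needs no namesystem write lock.
    public void respond(Runnable sendResponse) {
        pool.execute(sendResponse);
    }

    public void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES); // drain queued replies
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        AsyncResponder responder = new AsyncResponder();
        java.util.concurrent.atomic.AtomicInteger sent =
            new java.util.concurrent.atomic.AtomicInteger();
        for (int i = 0; i < 10; i++) {
            responder.respond(sent::incrementAndGet); // enqueue, don't block
        }
        responder.shutdown();
        System.out.println(sent.get());   // 10
    }
}
```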
[jira] [Created] (HDFS-15485) Fix outdated properties of JournalNode when performing rollback
Deegue created HDFS-15485: - Summary: Fix outdated properties of JournalNode when performing rollback Key: HDFS-15485 URL: https://issues.apache.org/jira/browse/HDFS-15485 Project: Hadoop HDFS Issue Type: Bug Reporter: Deegue When rolling back an HDFS cluster, properties in JNStorage won't be refreshed after the storage dir changes. This leads to exceptions when starting the NameNode, like: {code:java} 2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 10.0.118.179:8485], stream=null)) org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown: 10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory /mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but storage has nsId 0 at org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236) at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300) at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136) at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133) at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274) {code}
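[Editorial note: the consistency rule the stack trace trips over can be stated in a few lines. This is a hypothetical sketch of the invariant only (class and method names are invented, not JNStorage's code): the JournalNode compares its stored namespaceID against the NameNode's; after a rollback the stored value can be stale (0 here, i.e. "unset") until it is refreshed from the storage directory.]

```java
// Illustrative sketch of the namespaceID consistency check from the log.
public class NamespaceIdCheck {
    static final int UNSET = 0;  // an unformatted/unrefreshed storage reports 0

    // True when the stored id is compatible with the requesting NameNode's id.
    static boolean isConsistent(int storedNsId, int requestNsId) {
        return storedNsId == UNSET || storedNsId == requestNsId;
    }

    public static void main(String[] args) {
        // Mirrors the failure above: stored nsId 0 vs NameNode nsId 647617129.
        // A refresh-aware check would tolerate the unset value instead of failing.
        System.out.println(isConsistent(0, 647617129));           // true
        System.out.println(isConsistent(647617129, 647617129));   // true
    }
}
```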
[jira] [Commented] (HDFS-15482) Ordered snapshot deletion: hide the deleted snapshots from users
[ https://issues.apache.org/jira/browse/HDFS-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161778#comment-17161778 ] Hemanth Boyina commented on HDFS-15482: --- Hi [~szetszwo], is this to hide the snapshots from list, get, and rename, so that users couldn't access them in any of those operations? > Ordered snapshot deletion: hide the deleted snapshots from users > > > Key: HDFS-15482 > URL: https://issues.apache.org/jira/browse/HDFS-15482 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Priority: Major > > In HDFS-15480, the behavior of deleting the non-earliest snapshots is > changed to marking them as deleted in XAttr but not actually deleting them. > The users are still able to access these snapshots as usual. > In this JIRA, the marked-for-deletion snapshots are hidden so that they become > inaccessible > to users.
[jira] [Updated] (HDFS-15484) Add option in enum Rename to support batch rename
[ https://issues.apache.org/jira/browse/HDFS-15484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yun updated HDFS-15484: Attachment: HDFS-15484.001.patch Status: Patch Available (was: Open) > Add option in enum Rename to support batch rename > > > Key: HDFS-15484 > URL: https://issues.apache.org/jira/browse/HDFS-15484 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient, namenode, performance >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15484.001.patch > > > Sometimes we need to rename many files after a task; add a new option in enum > Rename to support batch rename, which needs only one RPC and one lock. For > example, > rename(new Path("/dir1/f1::/dir2/f2"), new Path("/dir3/f1::dir4/f4"), > Rename.BATCH)
[jira] [Created] (HDFS-15484) Add option in enum Rename to support batch rename
Yang Yun created HDFS-15484: --- Summary: Add option in enum Rename to support batch rename Key: HDFS-15484 URL: https://issues.apache.org/jira/browse/HDFS-15484 Project: Hadoop HDFS Issue Type: Improvement Components: dfsclient, namenode, performance Reporter: Yang Yun Assignee: Yang Yun Sometimes we need to rename many files after a task; add a new option in enum Rename to support batch rename, which needs only one RPC and one lock. For example, rename(new Path("/dir1/f1::/dir2/f2"), new Path("/dir3/f1::dir4/f4"), Rename.BATCH)
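[Editorial note: the proposal packs several paths into one Path argument separated by "::", so the whole batch costs a single RPC and one namesystem lock. The sketch below only illustrates that encoding; the "::" separator and the size-match requirement are taken from the example in the description, not from a committed API, and the BatchRenamePaths class is hypothetical.]

```java
import java.util.Arrays;

// Illustrative sketch of the proposed Rename.BATCH path encoding (HDFS-15484).
public class BatchRenamePaths {
    // Unpack a "::"-joined batch of paths.
    static String[] split(String packed) {
        return packed.split("::");
    }

    public static void main(String[] args) {
        String[] srcs = split("/dir1/f1::/dir2/f2");
        String[] dsts = split("/dir3/f1::/dir4/f4");
        // A sane implementation must reject mismatched batch sizes.
        if (srcs.length != dsts.length) {
            throw new IllegalArgumentException("batch size mismatch");
        }
        System.out.println(Arrays.toString(srcs)); // [/dir1/f1, /dir2/f2]
    }
}
```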
[jira] [Commented] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature
[ https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161752#comment-17161752 ] Hudson commented on HDFS-15479: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18459 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18459/]) HDFS-15479. Ordered snapshot deletion: make it a configurable feature (github: rev d57462f2daee5f057e32219d4123a3f75506d6d4) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java > Ordered snapshot deletion: make it a configurable feature > - > > Key: HDFS-15479 > URL: https://issues.apache.org/jira/browse/HDFS-15479 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 3.4.0 > > Attachments: h15479_20200719.patch > > > Ordered snapshot deletion is a configurable feature. In this JIRA, a conf is > added. > When the feature is enabled, only the earliest snapshot can be deleted. For > deleting the non-earliest snapshots, the behavior is temporarily changed to > throwing an exception in this JIRA. In HDFS-15480, the behavior of deleting > the non-earliest snapshots will be changed to marking them as deleted.
[jira] [Resolved] (HDFS-15479) Ordered snapshot deletion: make it a configurable feature
[ https://issues.apache.org/jira/browse/HDFS-15479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15479. Fix Version/s: 3.4.0 Resolution: Fixed > Ordered snapshot deletion: make it a configurable feature > - > > Key: HDFS-15479 > URL: https://issues.apache.org/jira/browse/HDFS-15479 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Fix For: 3.4.0 > > Attachments: h15479_20200719.patch > > > Ordered snapshot deletion is a configurable feature. In this JIRA, a conf is > added. > When the feature is enabled, only the earliest snapshot can be deleted. For > deleting the non-earliest snapshots, the behavior is temporarily changed to > throwing an exception in this JIRA. In HDFS-15480, the behavior of deleting > the non-earliest snapshots will be changed to marking them as deleted.
[jira] [Commented] (HDFS-15470) Added more unit tests to validate rename behaviour across snapshots
[ https://issues.apache.org/jira/browse/HDFS-15470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161743#comment-17161743 ] Hudson commented on HDFS-15470: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18458 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18458/]) HDFS-15470. Added more unit tests to validate rename behaviour across (shashikant: rev d9441f95c362214e249b969c9ccc3fb4e8c1709a) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java > Added more unit tests to validate rename behaviour across snapshots > --- > > Key: HDFS-15470 > URL: https://issues.apache.org/jira/browse/HDFS-15470 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Fix For: 3.0.4 > > Attachments: HDFS-15470.000.patch, HDFS-15470.001.patch, > HDFS-15470.002.patch > > > HDFS-15313 fixes a critical issue that avoids deletion of data in the active > fs by a sequence of snapshot deletes. The idea here is to add more tests to > verify the behaviour.