[jira] [Resolved] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee resolved HDFS-15490. Resolution: Won't Do > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
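The two checkstyle rules quoted above have conventional fixes: give the utility class a private constructor, and rename the static final fields to UPPER_SNAKE_CASE. A minimal standalone sketch (class names here are shortened stand-ins, not the actual HDFS sources):

```java
// Fix for [HideUtilityClassConstructor]: a utility class exposing only
// static members should hide its constructor so it cannot be instantiated.
final class XAttrOpUtil {
  private XAttrOpUtil() {
    // no instances
  }

  // Example static helper: extract the "user." style prefix of an xattr name.
  static String prefix(String name) {
    return name.substring(0, name.indexOf('.') + 1);
  }
}

// Fix for [ConstantName]: static final fields must match
// '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$', so xattrName/xattrValue become:
class SnapshotDeletionConstants {
  static final String XATTR_NAME = "user.a1";
  static final byte[] XATTR_VALUE = {0x31, 0x32, 0x33};
}
```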
[jira] [Updated] (HDFS-15490) Address checkstyle issues reported with HDFS-15480
[ https://issues.apache.org/jira/browse/HDFS-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15490: --- Status: Open (was: Patch Available) > Address checkstyle issues reported with HDFS-15480 > -- > > Key: HDFS-15490 > URL: https://issues.apache.org/jira/browse/HDFS-15490 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-15490.000.patch > > > {code:java} > ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java:50:public > class FSDirXAttrOp {:1: Utility classes should not have a public or default > constructor. [HideUtilityClassConstructor] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:49: > static final String xattrName = "user.a1";:23: Name 'xattrName' must match > pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > ./hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java:50: > static final byte[] xattrValue = {0x31, 0x32, 0x33};:23: Name 'xattrValue' > must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName] > {code}
[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15516: --- Reporter: Jin Adachi (was: Shashikant Banerjee) > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Jin Adachi >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags such as overwrite, the audit logs do not seem to contain any info about those flags. It would be useful to add info about the create options to the audit logs, similar to rename ops.
[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15516: --- Reporter: jianghua zhu (was: Jin Adachi) > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: jianghua zhu >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags such as overwrite, the audit logs do not seem to contain any info about those flags. It would be useful to add info about the create options to the audit logs, similar to rename ops.
[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs
[ https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184930#comment-17184930 ] Shashikant Banerjee commented on HDFS-15516: [~jianghuazhu], please go ahead. > Add info for create flags in NameNode audit logs > > > Key: HDFS-15516 > URL: https://issues.apache.org/jira/browse/HDFS-15516 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > > Currently, if a file create happens with flags such as overwrite, the audit logs do not seem to contain any info about those flags. It would be useful to add info about the create options to the audit logs, similar to rename ops.
[jira] [Commented] (HDFS-14997) BPServiceActor processes commands from NameNode asynchronously
[ https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184920#comment-17184920 ] Xiaoqiao He commented on HDFS-14997: Thanks [~Captainhzy]. In my experience, processing commands asynchronously resolves most cases where a DataNode is marked lost because the NameNode has not seen a heartbeat from it for a long time. In general, we split the commands and process them one by one asynchronously rather than processing all of them in the main flow, so I believe it is a significant improvement. About the lock contention, I agree that processing could still be blocked, especially by one very heavy command (maybe in some corner case). Any ideas to improve it? More discussion is welcome. Thanks again. > BPServiceActor processes commands from NameNode asynchronously > -- > > Key: HDFS-14997 > URL: https://issues.apache.org/jira/browse/HDFS-14997 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14997.001.patch, HDFS-14997.002.patch, > HDFS-14997.003.patch, HDFS-14997.004.patch, HDFS-14997.005.patch, > HDFS-14997.addendum.patch, image-2019-12-26-16-15-44-814.png > > > There are two core functions in the #BPServiceActor main process flow: reporting (#sendHeartbeat, #blockReport, #cacheReport) and #processCommand. If #processCommand takes a long time, it blocks the report flow. #processCommand can take a long time (over 1000s in the worst case I have met) when the IO load of the DataNode is very high. Since some IO operations are under #datasetLock, processing some commands (such as #DNA_INVALIDATE) has to wait a long time to acquire #datasetLock. In such a case, the #heartbeat is not sent to the NameNode in time, which triggers other disasters. > I propose to run #processCommand asynchronously so that #BPServiceActor is not blocked from sending heartbeats back to the NameNode under high IO load. > Notes: > 1. Lifeline could be one effective solution; however, some old branches do not support this feature. > 2. IO operations under #datasetLock are another issue; I think we should solve that in another JIRA.
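The decoupling proposed in this issue — a dedicated thread draining queued commands so the heartbeat loop never blocks on a slow command — can be sketched with a plain producer/consumer queue. This is an illustration of the pattern, not the actual DataNode code; the thread name merely echoes the CommandProcessingThread mentioned in the discussion:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the HDFS-14997 idea: the actor enqueues commands received with a
// heartbeat response and returns immediately; a separate thread processes
// them one by one, so a heavy command (e.g. a large invalidate that waits on
// datasetLock) cannot delay the next heartbeat.
class AsyncCommandProcessor implements AutoCloseable {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  private final Thread worker;

  AsyncCommandProcessor() {
    worker = new Thread(() -> {
      try {
        while (true) {
          // May block for a long time; the heartbeat loop is unaffected.
          queue.take().run();
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // shutdown requested
      }
    }, "CommandProcessingThread");
    worker.setDaemon(true);
    worker.start();
  }

  // Called from the heartbeat loop: O(1), never waits on a lock held by IO.
  void enqueue(Runnable command) {
    queue.add(command);
  }

  @Override
  public void close() {
    worker.interrupt();
  }
}
```

Note the limitation raised in the comments still applies to the real code: if the worker and the heartbeat path contend on the same lock, asynchrony alone does not remove the blocking.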
[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184890#comment-17184890 ] Ayush Saxena commented on HDFS-8631: Test failures seem unrelated, mostly due to {{unable to create native thread}}. +1 > WebHDFS : Support setQuota > -- > > Key: HDFS-8631 > URL: https://issues.apache.org/jira/browse/HDFS-8631 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.2 >Reporter: nijel >Assignee: Chao Sun >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, > HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, > HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, > HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch, > HDFS-8631-branch-3.2.001.patch > > > Users can do quota management from the filesystem object. The same operation can be allowed through the REST API.
[jira] [Commented] (HDFS-14997) BPServiceActor processes commands from NameNode asynchronously
[ https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184873#comment-17184873 ] zy.jordan commented on HDFS-14997: -- Thanks for [~hexiaoqiao]'s answer. The `updateActorStatesFromHeartbeat` function in `BPServiceActor.offerService` needs to hold the write lock, and the `processCommandFromActive` function in the `CommandProcessingThread` also needs to hold the same write lock. So when the `CommandProcessingThread` works, it can still block the `BPServiceActor` and block the heartbeat. This is my view; it might not be right. If so, please correct it, and thanks a lot. > BPServiceActor processes commands from NameNode asynchronously > -- > > Key: HDFS-14997 > URL: https://issues.apache.org/jira/browse/HDFS-14997 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-14997.001.patch, HDFS-14997.002.patch, > HDFS-14997.003.patch, HDFS-14997.004.patch, HDFS-14997.005.patch, > HDFS-14997.addendum.patch, image-2019-12-26-16-15-44-814.png > > > There are two core functions in the #BPServiceActor main process flow: reporting (#sendHeartbeat, #blockReport, #cacheReport) and #processCommand. If #processCommand takes a long time, it blocks the report flow. #processCommand can take a long time (over 1000s in the worst case I have met) when the IO load of the DataNode is very high. Since some IO operations are under #datasetLock, processing some commands (such as #DNA_INVALIDATE) has to wait a long time to acquire #datasetLock. In such a case, the #heartbeat is not sent to the NameNode in time, which triggers other disasters. > I propose to run #processCommand asynchronously so that #BPServiceActor is not blocked from sending heartbeats back to the NameNode under high IO load. > Notes: > 1. Lifeline could be one effective solution; however, some old branches do not support this feature. > 2. IO operations under #datasetLock are another issue; I think we should solve that in another JIRA.
[jira] [Updated] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-14694: --- Attachment: HDFS-14694.006.patch > Call recoverLease on DFSOutputStream close exception > > > Key: HDFS-14694 > URL: https://issues.apache.org/jira/browse/HDFS-14694 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client >Reporter: Chen Zhang >Assignee: Lisheng Sun >Priority: Major > Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, > HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch, > HDFS-14694.006.patch > > > HDFS uses file leases to manage opened files; when a file is not closed normally, the NN recovers the lease automatically after the hard limit is exceeded. But for a long-running service (e.g. HBase), the hdfs-client never dies, so the NN has no chance to recover the file. > Usually the client program needs to handle exceptions itself to avoid this condition (e.g. HBase automatically calls recover lease for files that were not closed normally), but in our experience, most services (in our company) don't handle this condition properly, which causes lots of files in an abnormal state or even data loss. > This Jira proposes to add a feature that calls the recoverLease operation automatically when DFSOutputStream close encounters an exception. It should be disabled by default, but when somebody builds a long-running service based on HDFS, they can enable this option. > We've had this feature in our internal Hadoop distribution for more than 3 years, and it's quite useful in our experience.
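The proposed behavior — on a failed close, attempt lease recovery so the NameNode can finalize the file instead of waiting for the hard limit — can be sketched against a minimal stand-in interface. `LeaseRecoverable` and `closeOrRecover` below are hypothetical names for illustration; the actual patch wires this logic into the client's close path behind a disabled-by-default option:

```java
import java.io.IOException;

// Hypothetical stand-in for the client-side lease API; illustration only.
interface LeaseRecoverable {
  void close() throws IOException;
  boolean recoverLease() throws IOException;
}

class SafeCloser {
  // Close the stream; if close fails and the feature is enabled, attempt
  // lease recovery so the file does not stay open until the hard limit.
  // Returns true if the stream closed cleanly, false if lease recovery
  // was attempted instead.
  static boolean closeOrRecover(LeaseRecoverable out, boolean recoverOnCloseException)
      throws IOException {
    try {
      out.close();
      return true;
    } catch (IOException e) {
      if (!recoverOnCloseException) {
        throw e;          // default behavior: surface the failure
      }
      out.recoverLease(); // best effort: ask the NN to finalize the file
      return false;
    }
  }
}
```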
[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184781#comment-17184781 ] Hadoop QA commented on HDFS-8631: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 0s{color} | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} branch-3.2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 7s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 48s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 30s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 14s{color} | {color:green} branch-3.2 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 45s{color} | {color:green} branch-3.2 passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 8s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 18s{color} | {color:green} branch-3.2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 36s{color} | {color:green} root: The patch generated 0 new + 557 unchanged - 1 fixed = 557 total (was 558) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 1s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 51s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 44s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 44s{color} | {color:blue} ASF License check generated no output? {color} | | {color:black}{color} | {color:black} {color} | {color:black}247m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests |
[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations
[ https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184757#comment-17184757 ] Hadoop QA commented on HDFS-15510: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 11s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 3s{color} | {color:red} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
[jira] [Commented] (HDFS-15536) RBF: Clear Quota in Router was not consistent
[ https://issues.apache.org/jira/browse/HDFS-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184712#comment-17184712 ] Íñigo Goiri commented on HDFS-15536: The failed test looks unrelated. +1 on [^HDFS-15536.003.patch]. > RBF: Clear Quota in Router was not consistent > --- > > Key: HDFS-15536 > URL: https://issues.apache.org/jira/browse/HDFS-15536 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: HDFS-15536.001.patch, HDFS-15536.002.patch, > HDFS-15536.003.patch, HDFS-15536.testrepro.patch > > > *) create a mount point > *) set quota for the mount point through dfsrouteradmin > *) clear quota for the same mount point through dfsrouteradmin > Check the content summary of the mount point: the quota was not cleared, though the mount table store has the quota cleared.
[jira] [Updated] (HDFS-15515) mkdirs on fallback should throw IOE out instead of suppressing and returning false
[ https://issues.apache.org/jira/browse/HDFS-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-15515: --- Fix Version/s: 3.1.5 3.2.2 > mkdirs on fallback should throw IOE out instead of suppressing and returning > false > -- > > Key: HDFS-15515 > URL: https://issues.apache.org/jira/browse/HDFS-15515 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > Fix For: 3.2.2, 3.3.1, 3.1.5 > > > Currently, when doing mkdirs on the fallback dir, we catch the IOE and return false. > I think we should just throw the IOE out, as fs#mkdirs throws IOE out. > I noticed a case where, when we attempt to create .reserved dirs, the NN throws > HadoopIAE, but we catch it and return false. The exception should be thrown out here. > {code:java} > try { > return linkedFallbackFs.mkdirs(dirToCreate, permission); > } catch (IOException e) { > if (LOG.isDebugEnabled()) { > StringBuilder msg = > new StringBuilder("Failed to create ").append(dirToCreate) > .append(" at fallback : ") > .append(linkedFallbackFs.getUri()); > LOG.debug(msg.toString(), e); > } > return false; > } > {code}
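The fix described above is to let the IOException propagate (after the debug log) instead of mapping it to false. A standalone sketch of the corrected catch block, using a stand-in `FallbackFs` interface so it compiles without Hadoop on the classpath:

```java
import java.io.IOException;

// Stand-in for the linked fallback filesystem; illustration only.
interface FallbackFs {
  boolean mkdirs(String dirToCreate) throws IOException;
}

class FallbackMkdirs {
  // After the fix: a failure such as HadoopIllegalArgumentException for
  // .reserved paths propagates to the caller, matching the FileSystem#mkdirs
  // contract, instead of being swallowed into a misleading 'false'.
  static boolean mkdirsOnFallback(FallbackFs fallback, String dirToCreate)
      throws IOException {
    try {
      return fallback.mkdirs(dirToCreate);
    } catch (IOException e) {
      System.err.println("Failed to create " + dirToCreate + " at fallback");
      throw e; // previously: return false;
    }
  }
}
```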
[jira] [Updated] (HDFS-15533) Provide DFS API compatible class(ViewDistributedFileSystem), but use ViewFileSystemOverloadScheme inside
[ https://issues.apache.org/jira/browse/HDFS-15533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uma Maheswara Rao G updated HDFS-15533: --- Fix Version/s: 3.3.1 > Provide DFS API compatible class(ViewDistributedFileSystem), but use > ViewFileSystemOverloadScheme inside > > > Key: HDFS-15533 > URL: https://issues.apache.org/jira/browse/HDFS-15533 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: dfs, viewfs >Affects Versions: 3.4.0 >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > Fix For: 3.3.1, 3.4.0 > > > I have been working on a thought from last week: we wanted to provide DFS-compatible APIs with mount functionality, so that existing DFS applications can work without class cast issues. > When we tested with other components like Hive and HBase, I noticed some classcast issues. > {code:java} > HBase example: > java.lang.ClassCastException: > org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme cannot be cast to > org.apache.hadoop.hdfs.DistributedFileSystemjava.lang.ClassCastException: > org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme cannot be cast to > org.apache.hadoop.hdfs.DistributedFileSystem at > org.apache.hadoop.hbase.util.FSUtils.getDFSHedgedReadMetrics(FSUtils.java:1748) > at > org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl.(MetricsRegionServerWrapperImpl.java:146) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1594) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1001) > at java.lang.Thread.run(Thread.java:748){code} > {code:java} > Hive: > |io.AcidUtils|: Failed to get files with ID; using regular API: Only > supported for DFS; got class > org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme{code} > So, the implementation details are as follows: > We extended DistributedFileSystem and created a class called "ViewDistributedFileSystem". > This vfs=ViewDistributedFileSystem tries to initialize ViewFileSystemOverloadScheme. On success, calls are delegated to vfs. If it fails to initialize due to no mount points or other errors, it just falls back to regular DFS init. If users do not configure any mount, the system behaves exactly like today's DFS. If there are mount points, the vfs functionality comes under DFS. > I have a patch and will post it in some time.
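The initialization strategy described above — try the mount-aware filesystem first and fall back to the plain implementation when no mount points are configured — is a try-then-fallback delegation pattern. A standalone sketch with stand-in types (`ViewOrPlainFs` and `Backend` are invented for this illustration, not the actual ViewDistributedFileSystem code):

```java
// Illustration of the HDFS-15533 init strategy: attempt the mount-aware
// backend; on failure (e.g. no mount points configured) fall back to the
// plain backend, so the public type stays DFS-compatible either way.
class ViewOrPlainFs {
  interface Backend { String resolve(String path); }

  private final Backend delegate;

  ViewOrPlainFs(java.util.Map<String, String> mountTable, Backend plain) {
    Backend chosen;
    try {
      if (mountTable.isEmpty()) {
        throw new IllegalStateException("no mount points");
      }
      // Mount-aware backend: prefix match over the mount table, falling
      // through to the plain backend for unmounted paths.
      chosen = path -> mountTable.entrySet().stream()
          .filter(e -> path.startsWith(e.getKey()))
          .map(e -> e.getValue() + path.substring(e.getKey().length()))
          .findFirst()
          .orElseGet(() -> plain.resolve(path));
    } catch (IllegalStateException e) {
      chosen = plain; // behave exactly like plain DFS when no mounts exist
    }
    this.delegate = chosen;
  }

  String resolve(String path) {
    return delegate.resolve(path);
  }
}
```

The design point mirrors the description: callers always hold the same concrete type, and only the internal delegate differs, which is what avoids the ClassCastException seen in HBase and Hive.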
[jira] [Commented] (HDFS-15515) mkdirs on fallback should throw IOE out instead of suppressing and returning false
[ https://issues.apache.org/jira/browse/HDFS-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184692#comment-17184692 ] Uma Maheswara Rao G commented on HDFS-15515: Thank you [~ste...@apache.org] :) > mkdirs on fallback should throw IOE out instead of suppressing and returning > false > -- > > Key: HDFS-15515 > URL: https://issues.apache.org/jira/browse/HDFS-15515 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Uma Maheswara Rao G >Assignee: Uma Maheswara Rao G >Priority: Major > Fix For: 3.3.1 > > > Currently, when doing mkdirs on the fallback dir, we catch the IOE and return false. > I think we should just throw the IOE out, as fs#mkdirs throws IOE out. > I noticed a case where, when we attempt to create .reserved dirs, the NN throws > HadoopIAE, but we catch it and return false. The exception should be thrown out here. > {code:java} > try { > return linkedFallbackFs.mkdirs(dirToCreate, permission); > } catch (IOException e) { > if (LOG.isDebugEnabled()) { > StringBuilder msg = > new StringBuilder("Failed to create ").append(dirToCreate) > .append(" at fallback : ") > .append(linkedFallbackFs.getUri()); > LOG.debug(msg.toString(), e); > } > return false; > } > {code}
[jira] [Commented] (HDFS-15536) RBF: Clear Quota in Router was not consistent
[ https://issues.apache.org/jira/browse/HDFS-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184681#comment-17184681 ] Hadoop QA commented on HDFS-15536: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 11s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 1m 10s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 33s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184678#comment-17184678 ] Uma Maheswara Rao G commented on HDFS-8631: --- [~ayushtkn], yes, the patch is the same. Thanks for re-opening it. Let's see the Jenkins results. > WebHDFS : Support setQuota > -- > > Key: HDFS-8631 > URL: https://issues.apache.org/jira/browse/HDFS-8631 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.2 >Reporter: nijel >Assignee: Chao Sun >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, > HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, > HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, > HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch, > HDFS-8631-branch-3.2.001.patch > > > User is able to do quota management from the filesystem object. The same operation can > be allowed through the REST API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184631#comment-17184631 ] Hadoop QA commented on HDFS-14694: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 20s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 52s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 46s{color} | {color:green} 
branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 58s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 18s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 28s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 58s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
[jira] [Reopened] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reopened HDFS-8631: > WebHDFS : Support setQuota > -- > > Key: HDFS-8631 > URL: https://issues.apache.org/jira/browse/HDFS-8631 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.2 >Reporter: nijel >Assignee: Chao Sun >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, > HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, > HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, > HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch, > HDFS-8631-branch-3.2.001.patch > > > User is able to do quota management from the filesystem object. The same operation can > be allowed through the REST API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184615#comment-17184615 ] Ayush Saxena commented on HDFS-8631: Have reopened this to get the Jenkins result; it doesn't pick up patches that are not in the Patch Available state. The patch seems the same as v011 and should be good to go if Jenkins doesn't have any complaints. > WebHDFS : Support setQuota > -- > > Key: HDFS-8631 > URL: https://issues.apache.org/jira/browse/HDFS-8631 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.2 >Reporter: nijel >Assignee: Chao Sun >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, > HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, > HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, > HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch, > HDFS-8631-branch-3.2.001.patch > > > User is able to do quota management from the filesystem object. The same operation can > be allowed through the REST API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-8631: --- Status: Patch Available (was: Reopened) > WebHDFS : Support setQuota > -- > > Key: HDFS-8631 > URL: https://issues.apache.org/jira/browse/HDFS-8631 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.2 >Reporter: nijel >Assignee: Chao Sun >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, > HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, > HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, > HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch, > HDFS-8631-branch-3.2.001.patch > > > User is able to do quota management from the filesystem object. The same operation can > be allowed through the REST API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations
[ https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184603#comment-17184603 ] Hemanth Boyina commented on HDFS-15510: --- thanks for the review [~elgoiri], updated the patch, please review > RBF: Quota and Content Summary was not correct in Multiple Destinations > --- > > Key: HDFS-15510 > URL: https://issues.apache.org/jira/browse/HDFS-15510 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, > HDFS-15510.003.patch, HDFS-15510.004.patch > > > steps : > *) create a mount entry with multiple destinations (suppose 2) > *) set the NS quota as 10 for the mount entry via the dfsrouteradmin command; the Content > Summary on the mount entry shows the NS quota as 20 > *) create 10 files through the router; on creating the 11th file, an NS Quota Exceeded > Exception is thrown > though the Content Summary shows the NS quota as 20, we are not able to > create 20 files > > the problem here is that the router stores the mount entry's NS quota as 10, but > invokes the NS quota on both name services by setting the NS quota as 10, so the content > summary on the mount entry aggregates the content summaries of both name > services, making the NS quota 20 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
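The mismatch described above is simple arithmetic: the router pushes the mount entry's full quota to each destination nameservice, and the aggregated content summary then multiplies it by the number of destinations. A hypothetical sketch of the two aggregation shapes (not the actual Router quota code):

```java
import java.util.List;

public class MountQuotaAggregationSketch {
    // Naive aggregation: summing the quota reported by every destination
    // double-counts, because each nameservice was given the full mount quota.
    static long aggregateBySum(List<Long> perDestinationQuota) {
        return perDestinationQuota.stream().mapToLong(Long::longValue).sum();
    }

    // What users expect: the quota of the mount entry itself, taken from the
    // mount table rather than re-derived from the destinations.
    static long aggregateFromMountTable(long mountEntryQuota) {
        return mountEntryQuota;
    }
}
```

For a quota of 10 and two destinations, the sum-based path reports 20 even though the NameNodes start rejecting creates at 10 files each — exactly the reported symptom.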
[jira] [Updated] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations
[ https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HDFS-15510: -- Attachment: HDFS-15510.004.patch > RBF: Quota and Content Summary was not correct in Multiple Destinations > --- > > Key: HDFS-15510 > URL: https://issues.apache.org/jira/browse/HDFS-15510 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, > HDFS-15510.003.patch, HDFS-15510.004.patch > > > steps : > *) create a mount entry with multiple destinations (suppose 2) > *) set the NS quota as 10 for the mount entry via the dfsrouteradmin command; the Content > Summary on the mount entry shows the NS quota as 20 > *) create 10 files through the router; on creating the 11th file, an NS Quota Exceeded > Exception is thrown > though the Content Summary shows the NS quota as 20, we are not able to > create 20 files > > the problem here is that the router stores the mount entry's NS quota as 10, but > invokes the NS quota on both name services by setting the NS quota as 10, so the content > summary on the mount entry aggregates the content summaries of both name > services, making the NS quota 20 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8631) WebHDFS : Support setQuota
[ https://issues.apache.org/jira/browse/HDFS-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184199#comment-17184199 ] Uma Maheswara Rao G commented on HDFS-8631: --- [~surendrasingh], [~ayushtkn], could you please check the above? Thanks > WebHDFS : Support setQuota > -- > > Key: HDFS-8631 > URL: https://issues.apache.org/jira/browse/HDFS-8631 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 2.7.2 >Reporter: nijel >Assignee: Chao Sun >Priority: Major > Fix For: 3.3.0 > > Attachments: HDFS-8631-001.patch, HDFS-8631-002.patch, > HDFS-8631-003.patch, HDFS-8631-004.patch, HDFS-8631-005.patch, > HDFS-8631-006.patch, HDFS-8631-007.patch, HDFS-8631-008.patch, > HDFS-8631-009.patch, HDFS-8631-010.patch, HDFS-8631-011.patch, > HDFS-8631-branch-3.2.001.patch > > > User is able to do quota management from the filesystem object. The same operation can > be allowed through the REST API. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations
[ https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184173#comment-17184173 ] Íñigo Goiri commented on HDFS-15510: * Why are we adding {{aggregateContentSummary()}}? * There is a checkstyle warning. > RBF: Quota and Content Summary was not correct in Multiple Destinations > --- > > Key: HDFS-15510 > URL: https://issues.apache.org/jira/browse/HDFS-15510 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, > HDFS-15510.003.patch > > > steps : > *) create a mount entry with multiple destinations (suppose 2) > *) set the NS quota as 10 for the mount entry via the dfsrouteradmin command; the Content > Summary on the mount entry shows the NS quota as 20 > *) create 10 files through the router; on creating the 11th file, an NS Quota Exceeded > Exception is thrown > though the Content Summary shows the NS quota as 20, we are not able to > create 20 files > > the problem here is that the router stores the mount entry's NS quota as 10, but > invokes the NS quota on both name services by setting the NS quota as 10, so the content > summary on the mount entry aggregates the content summaries of both name > services, making the NS quota 20 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations
[ https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184173#comment-17184173 ] Íñigo Goiri edited comment on HDFS-15510 at 8/25/20, 4:20 PM: -- * Why are we adding {{path}} to {{aggregateContentSummary()}}? * There is a checkstyle warning. was (Author: elgoiri): * Why are we adding {{aggregateContentSummary()}}? * There is a checkstyle warning. > RBF: Quota and Content Summary was not correct in Multiple Destinations > --- > > Key: HDFS-15510 > URL: https://issues.apache.org/jira/browse/HDFS-15510 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, > HDFS-15510.003.patch > > > steps : > *) create a mount entry with multiple destinations (suppose 2) > *) set the NS quota as 10 for the mount entry via the dfsrouteradmin command; the Content > Summary on the mount entry shows the NS quota as 20 > *) create 10 files through the router; on creating the 11th file, an NS Quota Exceeded > Exception is thrown > though the Content Summary shows the NS quota as 20, we are not able to > create 20 files > > the problem here is that the router stores the mount entry's NS quota as 10, but > invokes the NS quota on both name services by setting the NS quota as 10, so the content > summary on the mount entry aggregates the content summaries of both name > services, making the NS quota 20 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14852) Removing from LowRedundancyBlocks does not remove the block from all queues
[ https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-14852: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk and the active 3.x branches. Thanks for the contribution [~ferhui]. > Removing from LowRedundancyBlocks does not remove the block from all queues > --- > > Key: HDFS-14852 > URL: https://issues.apache.org/jira/browse/HDFS-14852 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0 >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5 > > Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, > HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch, > HDFS-14852.005.patch, HDFS-14852.006.patch, HDFS-14852.007.patch, > screenshot-1.png > > > LowRedundancyBlocks.java > {code:java} > // Some comments here > if(priLevel >= 0 && priLevel < LEVEL > && priorityQueues.get(priLevel).remove(block)) { > NameNode.blockStateChangeLog.debug( > "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block {}" > + " from priority queue {}", > block, priLevel); > decrementBlockStat(block, priLevel, oldExpectedReplicas); > return true; > } else { > // Try to remove the block from all queues if the block was > // not found in the queue for the given priority level. > for (int i = 0; i < LEVEL; i++) { > if (i != priLevel && priorityQueues.get(i).remove(block)) { > NameNode.blockStateChangeLog.debug( > "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block" + > " {} from priority queue {}", block, i); > decrementBlockStat(block, i, oldExpectedReplicas); > return true; > } > } > } > return false; > } > {code} > The source code is above; the comment is as follows > {quote} > // Try to remove the block from all queues if the block was > // not found in the queue for the given priority level. 
> {quote} > The function "remove" does NOT remove the block from all queues. > The add function from LowRedundancyBlocks.java is used in some places, and maybe > one block is in two or more queues. > We found that corrupt blocks mismatch corrupt files on the NN web UI. Maybe it is > related to this. > Uploading the initial patch -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
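The loop in the quoted snippet returns on the first queue that contains the block, so a block that was added to more than one priority queue is only ever removed from one of them. A self-contained sketch of the difference — plain `HashSet`s stand in for the real priority queues, and all names here are illustrative, not the actual LowRedundancyBlocks code:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveFromAllQueuesSketch {
    static final int LEVEL = 5;

    static List<Set<String>> newQueues() {
        List<Set<String>> queues = new ArrayList<>();
        for (int i = 0; i < LEVEL; i++) {
            queues.add(new HashSet<>());
        }
        return queues;
    }

    // Early-return removal, as in the snippet above: stops at the first hit,
    // leaving stale copies behind in any other queue.
    static boolean removeFirstMatch(List<Set<String>> queues, String block) {
        for (int i = 0; i < LEVEL; i++) {
            if (queues.get(i).remove(block)) {
                return true;
            }
        }
        return false;
    }

    // Exhaustive removal: keep scanning every level so a block that was
    // (incorrectly) added to several queues is removed from all of them.
    static boolean removeFromAll(List<Set<String>> queues, String block) {
        boolean removed = false;
        for (int i = 0; i < LEVEL; i++) {
            removed |= queues.get(i).remove(block);
        }
        return removed;
    }
}
```

The stale copy left by the early-return variant is consistent with the corrupt-block/corrupt-file mismatch described in the report.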
[jira] [Updated] (HDFS-14852) Removing from LowRedundancyBlocks does not remove the block from all queues
[ https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-14852: - Fix Version/s: 3.1.5 3.4.0 3.3.1 3.2.2 > Removing from LowRedundancyBlocks does not remove the block from all queues > --- > > Key: HDFS-14852 > URL: https://issues.apache.org/jira/browse/HDFS-14852 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0 >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5 > > Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, > HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch, > HDFS-14852.005.patch, HDFS-14852.006.patch, HDFS-14852.007.patch, > screenshot-1.png > > > LowRedundancyBlocks.java > {code:java} > // Some comments here > if(priLevel >= 0 && priLevel < LEVEL > && priorityQueues.get(priLevel).remove(block)) { > NameNode.blockStateChangeLog.debug( > "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block {}" > + " from priority queue {}", > block, priLevel); > decrementBlockStat(block, priLevel, oldExpectedReplicas); > return true; > } else { > // Try to remove the block from all queues if the block was > // not found in the queue for the given priority level. > for (int i = 0; i < LEVEL; i++) { > if (i != priLevel && priorityQueues.get(i).remove(block)) { > NameNode.blockStateChangeLog.debug( > "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block" + > " {} from priority queue {}", block, i); > decrementBlockStat(block, i, oldExpectedReplicas); > return true; > } > } > } > return false; > } > {code} > The source code is above; the comment is as follows > {quote} > // Try to remove the block from all queues if the block was > // not found in the queue for the given priority level. > {quote} > The function "remove" does NOT remove the block from all queues. 
> The add function from LowRedundancyBlocks.java is used in some places, and maybe > one block is in two or more queues. > We found that corrupt blocks mismatch corrupt files on the NN web UI. Maybe it is > related to this. > Uploading the initial patch -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15536) RBF: Clear Quota in Router was not consistent
[ https://issues.apache.org/jira/browse/HDFS-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184114#comment-17184114 ] Hemanth Boyina commented on HDFS-15536: --- thanks for the review [~elgoiri], updated the patch, please review > RBF: Clear Quota in Router was not consistent > --- > > Key: HDFS-15536 > URL: https://issues.apache.org/jira/browse/HDFS-15536 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: HDFS-15536.001.patch, HDFS-15536.002.patch, > HDFS-15536.003.patch, HDFS-15536.testrepro.patch > > > *) create a mount point > *) set a quota for the mount point through dfsrouteradmin > *) clear the quota for the same mount point through dfsrouteradmin > check the content summary of the mount point; the quota was not cleared, though the > mount table store has the quota cleared -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
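The inconsistency reported here is a propagation problem: the mount table records the cleared quota, but the clear never reaches the downstream nameservices. An illustrative sketch of that failure mode — this is not the actual Router code; only the constant values mirror HdfsConstants (-1 resets a quota, Long.MAX_VALUE means "leave unchanged"):

```java
public class RouterClearQuotaSketch {
    // Values mirror HdfsConstants.QUOTA_RESET / QUOTA_DONT_SET.
    static final long QUOTA_RESET = -1L;
    static final long QUOTA_DONT_SET = Long.MAX_VALUE;

    // Hypothetical per-nameservice quota, as a downstream NameNode stores it.
    static long nsQuota = QUOTA_RESET;

    // Buggy shape of the update path: treats QUOTA_RESET like "nothing to
    // do", so a clear updates the mount table but never reaches the
    // nameservices, and the content summary keeps showing the old quota.
    static void updateQuotaSkippingReset(long newQuota) {
        if (newQuota != QUOTA_DONT_SET && newQuota != QUOTA_RESET) {
            nsQuota = newQuota;
        }
    }

    // Consistent shape: QUOTA_RESET is a real update and must be propagated
    // to the nameservices just like any other quota value.
    static void updateQuotaPropagatingReset(long newQuota) {
        if (newQuota != QUOTA_DONT_SET) {
            nsQuota = newQuota;
        }
    }
}
```

With the first variant, set-then-clear leaves the nameservice quota at its old value, matching the symptom in the steps above.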
[jira] [Updated] (HDFS-15536) RBF: Clear Quota in Router was not consistent
[ https://issues.apache.org/jira/browse/HDFS-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HDFS-15536: -- Attachment: HDFS-15536.003.patch > RBF: Clear Quota in Router was not consistent > --- > > Key: HDFS-15536 > URL: https://issues.apache.org/jira/browse/HDFS-15536 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Hemanth Boyina >Assignee: Hemanth Boyina >Priority: Critical > Attachments: HDFS-15536.001.patch, HDFS-15536.002.patch, > HDFS-15536.003.patch, HDFS-15536.testrepro.patch > > > *) create a mount point > *) set a quota for the mount point through dfsrouteradmin > *) clear the quota for the same mount point through dfsrouteradmin > check the content summary of the mount point; the quota was not cleared, though the > mount table store has the quota cleared -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14852) Removing from LowRedundancyBlocks does not remove the block from all queues
[ https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-14852: - Summary: Removing from LowRedundancyBlocks does not remove the block from all queues (was: Remove of LowRedundancyBlocks do NOT remove the block from all queues) > Removing from LowRedundancyBlocks does not remove the block from all queues > --- > > Key: HDFS-14852 > URL: https://issues.apache.org/jira/browse/HDFS-14852 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0 >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Major > Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, > HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch, > HDFS-14852.005.patch, HDFS-14852.006.patch, HDFS-14852.007.patch, > screenshot-1.png > > > LowRedundancyBlocks.java > {code:java} > // Some comments here > if(priLevel >= 0 && priLevel < LEVEL > && priorityQueues.get(priLevel).remove(block)) { > NameNode.blockStateChangeLog.debug( > "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block {}" > + " from priority queue {}", > block, priLevel); > decrementBlockStat(block, priLevel, oldExpectedReplicas); > return true; > } else { > // Try to remove the block from all queues if the block was > // not found in the queue for the given priority level. > for (int i = 0; i < LEVEL; i++) { > if (i != priLevel && priorityQueues.get(i).remove(block)) { > NameNode.blockStateChangeLog.debug( > "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block" + > " {} from priority queue {}", block, i); > decrementBlockStat(block, i, oldExpectedReplicas); > return true; > } > } > } > return false; > } > {code} > The source code is above; the comment is as follows > {quote} > // Try to remove the block from all queues if the block was > // not found in the queue for the given priority level. > {quote} > The function "remove" does NOT remove the block from all queues. 
> The add function from LowRedundancyBlocks.java is used in some places, and maybe > one block is in two or more queues. > We found that corrupt blocks mismatch corrupt files on the NN web UI. Maybe it is > related to this. > Uploading the initial patch -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15117) EC: Add getECTopologyResultForPolicies to DistributedFileSystem
[ https://issues.apache.org/jira/browse/HDFS-15117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184048#comment-17184048 ] Ayush Saxena commented on HDFS-15117: - We can merge, but as of now the base patches themselves are not in 3.2. HDFS-12946, HDFS-14061, HDFS-14125, HDFS-14188, and many more dependent changes would also be required for this. > EC: Add getECTopologyResultForPolicies to DistributedFileSystem > --- > > Key: HDFS-15117 > URL: https://issues.apache.org/jira/browse/HDFS-15117 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: ec > Fix For: 3.3.0 > > Attachments: HDFS-15117-01.patch, HDFS-15117-02.patch, > HDFS-15117-03.patch, HDFS-15117-04.patch, HDFS-15117-05.patch, > HDFS-15117-06.patch, HDFS-15117-07.patch, HDFS-15117-08.patch > > > Add the getECTopologyResultForPolicies API to DistributedFileSystem. > It is as of now only present as part of ECAdmin. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184043#comment-17184043 ] Hadoop QA commented on HDFS-14694:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 29s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 26s | Maven dependency ordering for branch |
| +1 | mvninstall | 24m 53s | trunk passed |
| +1 | compile | 4m 57s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | compile | 4m 26s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | checkstyle | 1m 2s | trunk passed |
| +1 | mvnsite | 2m 19s | trunk passed |
| +1 | shadedclient | 19m 33s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 39s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 58s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| 0 | spotbugs | 3m 41s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 6m 22s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 15s | the patch passed |
| +1 | compile | 5m 6s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javac | 5m 6s | the patch passed |
| +1 | compile | 4m 32s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | javac | 4m 32s | the patch passed |
| +1 | checkstyle | 1m 4s | the patch passed |
| +1 | mvnsite | 2m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 17m 1s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 28s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 | javadoc | 1m 51s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 | findbugs | 6m 27s | the patch passed |
|| || || || Other Tests ||
[jira] [Commented] (HDFS-15510) RBF: Quota and Content Summary was not correct in Multiple Destinations
[ https://issues.apache.org/jira/browse/HDFS-15510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17184030#comment-17184030 ] Hemanth Boyina commented on HDFS-15510: --- [~elgoiri] [~tasanuma] can you review the patch?
> RBF: Quota and Content Summary was not correct in Multiple Destinations
> ---
>
> Key: HDFS-15510
> URL: https://issues.apache.org/jira/browse/HDFS-15510
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Critical
> Attachments: 15510.png, HDFS-15510.001.patch, HDFS-15510.002.patch, HDFS-15510.003.patch
>
> Steps:
> *) Create a mount entry with multiple destinations (say, 2).
> *) Set the NS quota to 10 on the mount entry with the dfsrouteradmin command; the Content Summary on the mount entry then shows the NS quota as 20.
> *) Create 10 files through the router; creating the 11th file fails with an NSQuotaExceededException.
> Although the Content Summary shows the NS quota as 20, we are not able to create 20 files.
> The problem is that the router stores the mount entry's NS quota as 10, but it sets an NS quota of 10 on each of the name services, so the Content Summary on the mount entry aggregates the content summaries of both name services and reports the NS quota as 20.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
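The mismatch described above can be modeled in a few lines. This is an illustrative sketch, not the Router's actual code: it assumes, as the report describes, that the Router enforces the quota stored on the mount entry while the aggregated Content Summary sums the quota that was pushed to each destination nameservice.

```java
// Toy model of the RBF quota report/enforcement mismatch (HDFS-15510).
// All names here are hypothetical; this is not the Router implementation.
public class MountQuotaSketch {

    // NS quota stored on the mount entry (and pushed to each destination).
    static final long MOUNT_ENTRY_QUOTA = 10;

    // What actually blocks the 11th file: the single mount-entry quota.
    static long enforcedQuota() {
        return MOUNT_ENTRY_QUOTA;
    }

    // What getContentSummary reports: the per-destination quotas summed,
    // e.g. 2 destinations * 10 = 20.
    static long reportedQuota(int destinations) {
        return MOUNT_ENTRY_QUOTA * destinations;
    }

    public static void main(String[] args) {
        System.out.println("reported NS quota = " + reportedQuota(2)); // 20
        System.out.println("enforced NS quota = " + enforcedQuota());  // 10
    }
}
```

The fix direction would be to make the two numbers agree, either by reporting the mount-entry quota or by splitting it across destinations.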
[jira] [Assigned] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun reassigned HDFS-14694: -- Assignee: Lisheng Sun (was: Chen Zhang)
> Call recoverLease on DFSOutputStream close exception
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Chen Zhang
> Assignee: Lisheng Sun
> Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch
>
> HDFS uses file leases to manage open files; when a file is not closed normally, the NN recovers the lease automatically after the hard limit is exceeded. But for a long-running service (e.g. HBase), the hdfs-client never dies, so the NN never gets a chance to recover the file.
> Usually the client program needs to handle exceptions itself to avoid this condition (e.g. HBase automatically calls recoverLease for files that were not closed normally), but in our experience most services (in our company) don't handle this condition properly, which leaves lots of files in an abnormal state or even causes data loss.
> This Jira proposes adding a feature that calls the recoverLease operation automatically when DFSOutputStream close encounters an exception. It should be disabled by default, but when somebody builds a long-running service on HDFS, they can enable this option.
> We've had this feature in our internal Hadoop distribution for more than 3 years, and it has been quite useful in our experience.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
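The proposed behavior can be sketched as a small close wrapper. This is a hypothetical, self-contained model rather than the attached patch; in real HDFS the recovery hook would presumably call DistributedFileSystem#recoverLease(Path) for the file's path.

```java
import java.io.Closeable;
import java.io.IOException;

// Sketch of "recoverLease on close exception": if close() fails, trigger
// lease recovery so the NN does not hold the file open until the hard
// limit. Standalone model only -- not the actual DFSOutputStream change.
public class RecoverLeaseOnCloseSketch {

    // Stand-in for the real recovery call (e.g. fs.recoverLease(path)).
    interface LeaseRecoverer {
        void recoverLease() throws IOException;
    }

    // Returns true if lease recovery was attempted. The flag models the
    // proposal's "disabled by default" configuration option.
    static boolean closeWithRecovery(Closeable out, LeaseRecoverer recoverer,
                                     boolean recoverOnCloseException)
            throws IOException {
        try {
            out.close();
            return false;                  // normal close, nothing to do
        } catch (IOException e) {
            if (recoverOnCloseException) {
                recoverer.recoverLease();  // best effort: let NN reclaim the lease
                return true;
            }
            throw e;                       // preserve today's behavior when disabled
        }
    }

    public static void main(String[] args) throws IOException {
        Closeable failing = () -> { throw new IOException("pipeline broken"); };
        boolean attempted = closeWithRecovery(failing, () -> {}, true);
        System.out.println("recovery attempted: " + attempted);
    }
}
```

A real implementation would also have to decide whether to swallow or rethrow the original close exception after recovery; the sketch swallows it.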
[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183915#comment-17183915 ] Lisheng Sun commented on HDFS-14694: Hi [~zhangchen] Are you still working on this Jira? If not, I will take it over. Hope you don't mind.
> Call recoverLease on DFSOutputStream close exception
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Chen Zhang
> Assignee: Chen Zhang
> Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-14694: --- Attachment: HDFS-14694.005.patch
> Call recoverLease on DFSOutputStream close exception
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Chen Zhang
> Assignee: Chen Zhang
> Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, HDFS-14694.003.patch, HDFS-14694.004.patch, HDFS-14694.005.patch
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14852) Remove of LowRedundancyBlocks do NOT remove the block from all queues
[ https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183901#comment-17183901 ] Akira Ajisaka commented on HDFS-14852: -- The test failures are not related to the patch. I ran the failed tests locally, and all of them passed except TestHDFSContractMultipartUploader (HDFS-15471).
> Remove of LowRedundancyBlocks do NOT remove the block from all queues
> -
>
> Key: HDFS-14852
> URL: https://issues.apache.org/jira/browse/HDFS-14852
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
> Reporter: Fei Hui
> Assignee: Fei Hui
> Priority: Major
> Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch, HDFS-14852.005.patch, HDFS-14852.006.patch, HDFS-14852.007.patch, screenshot-1.png
>
> LowRedundancyBlocks.java
> {code:java}
> // Some comments here
> if (priLevel >= 0 && priLevel < LEVEL
>     && priorityQueues.get(priLevel).remove(block)) {
>   NameNode.blockStateChangeLog.debug(
>       "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block {}"
>           + " from priority queue {}",
>       block, priLevel);
>   decrementBlockStat(block, priLevel, oldExpectedReplicas);
>   return true;
> } else {
>   // Try to remove the block from all queues if the block was
>   // not found in the queue for the given priority level.
>   for (int i = 0; i < LEVEL; i++) {
>     if (i != priLevel && priorityQueues.get(i).remove(block)) {
>       NameNode.blockStateChangeLog.debug(
>           "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block" +
>               " {} from priority queue {}", block, i);
>       decrementBlockStat(block, i, oldExpectedReplicas);
>       return true;
>     }
>   }
> }
> return false;
> }
> {code}
> The source code is above; the comment in question reads:
> {quote}
> // Try to remove the block from all queues if the block was
> // not found in the queue for the given priority level.
> {quote}
> The remove function does NOT remove the block from all queues: it returns true after the first queue in which it finds the block. The add function in LowRedundancyBlocks.java is called from several places, so one block may end up in two or more queues.
> We found that corrupt blocks mismatched corrupt files on the NN web UI; this may be related.
> Uploading an initial patch.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
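The direction of the fix can be illustrated with a self-contained model: instead of returning as soon as the block is removed from one priority level, iterate over every level. This is a sketch of the idea only, not the committed patch (which also has to keep per-level statistics consistent via decrementBlockStat).

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal model of LowRedundancyBlocks' priority queues. The original
// remove() returns after the first hit, so a block queued at two levels
// stays in the second one; here we sweep every level instead.
public class RemoveFromAllQueuesSketch {
    static final int LEVEL = 5;
    final List<Set<String>> priorityQueues = new ArrayList<>();

    RemoveFromAllQueuesSketch() {
        for (int i = 0; i < LEVEL; i++) {
            priorityQueues.add(new HashSet<>());
        }
    }

    // Remove the block from EVERY priority level; return true if it was
    // present anywhere (the real fix would also decrement stats per level).
    boolean remove(String block) {
        boolean removed = false;
        for (int i = 0; i < LEVEL; i++) {
            removed |= priorityQueues.get(i).remove(block);
        }
        return removed;
    }

    boolean contains(String block) {
        for (Set<String> q : priorityQueues) {
            if (q.contains(block)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        RemoveFromAllQueuesSketch q = new RemoveFromAllQueuesSketch();
        q.priorityQueues.get(0).add("blk_1");  // same block queued at two levels
        q.priorityQueues.get(3).add("blk_1");
        q.remove("blk_1");
        System.out.println("still queued: " + q.contains("blk_1")); // false
    }
}
```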
[jira] [Commented] (HDFS-14852) Remove of LowRedundancyBlocks do NOT remove the block from all queues
[ https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183885#comment-17183885 ] Stephen O'Donnell commented on HDFS-14852: -- +1 on v7 from me too.
> Remove of LowRedundancyBlocks do NOT remove the block from all queues
> -
>
> Key: HDFS-14852
> URL: https://issues.apache.org/jira/browse/HDFS-14852
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
> Reporter: Fei Hui
> Assignee: Fei Hui
> Priority: Major
> Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch, HDFS-14852.005.patch, HDFS-14852.006.patch, HDFS-14852.007.patch, screenshot-1.png
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lisheng Sun updated HDFS-14694: --- Comment: was deleted (was: Hi [~zhangchen] Are you still working on this jira? If not, i will take over it. Hope you don't mind.)
> Call recoverLease on DFSOutputStream close exception
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Chen Zhang
> Assignee: Chen Zhang
> Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, HDFS-14694.003.patch, HDFS-14694.004.patch
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14694) Call recoverLease on DFSOutputStream close exception
[ https://issues.apache.org/jira/browse/HDFS-14694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183879#comment-17183879 ] Lisheng Sun commented on HDFS-14694: Hi [~zhangchen] Are you still working on this Jira? If not, I will take it over. Hope you don't mind.
> Call recoverLease on DFSOutputStream close exception
>
> Key: HDFS-14694
> URL: https://issues.apache.org/jira/browse/HDFS-14694
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Chen Zhang
> Assignee: Chen Zhang
> Priority: Major
> Attachments: HDFS-14694.001.patch, HDFS-14694.002.patch, HDFS-14694.003.patch, HDFS-14694.004.patch
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14852) Remove of LowRedundancyBlocks do NOT remove the block from all queues
[ https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183872#comment-17183872 ] Akira Ajisaka commented on HDFS-14852: -- The v7 patch makes sense to me. +1
> Remove of LowRedundancyBlocks do NOT remove the block from all queues
> -
>
> Key: HDFS-14852
> URL: https://issues.apache.org/jira/browse/HDFS-14852
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
> Reporter: Fei Hui
> Assignee: Fei Hui
> Priority: Major
> Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch, HDFS-14852.005.patch, HDFS-14852.006.patch, HDFS-14852.007.patch, screenshot-1.png
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15117) EC: Add getECTopologyResultForPolicies to DistributedFileSystem
[ https://issues.apache.org/jira/browse/HDFS-15117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17183789#comment-17183789 ] Uma Maheswara Rao G commented on HDFS-15117: [~ayushtkn] Can we merge this to 3.2 as well? I realized ViewDistributedFileSystem provides an API that was introduced in this class. If we want to back-port it to 3.2, we may need this in 3.2 to keep the merge clean. Thanks
> EC: Add getECTopologyResultForPolicies to DistributedFileSystem
> ---
>
> Key: HDFS-15117
> URL: https://issues.apache.org/jira/browse/HDFS-15117
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ayush Saxena
> Assignee: Ayush Saxena
> Priority: Major
> Labels: ec
> Fix For: 3.3.0
>
> Attachments: HDFS-15117-01.patch, HDFS-15117-02.patch, HDFS-15117-03.patch, HDFS-15117-04.patch, HDFS-15117-05.patch, HDFS-15117-06.patch, HDFS-15117-07.patch, HDFS-15117-08.patch
>
> Add the getECTopologyResultForPolicies API to DistributedFileSystem. As of now it is only available as part of ECAdmin.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
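For context, the kind of verification this API surfaces can be approximated with a toy rule: an erasure-coding policy needs at least data + parity DataNodes (and enough racks) to place a full block group. The rule below is a deliberate simplification for illustration, not the actual logic of the HDFS topology verifier.

```java
// Simplified stand-in for an EC topology check (illustrative only).
public class EcTopologySketch {

    // Returns true if a cluster with the given DataNode and rack counts
    // could plausibly support a (dataUnits, parityUnits) EC policy under
    // this sketch's simplified placement rule.
    static boolean supportsPolicy(int dataUnits, int parityUnits,
                                  int numDataNodes, int numRacks) {
        int totalUnits = dataUnits + parityUnits;
        // Need one DataNode per block-group unit, and at least as many
        // racks as parity units for some rack-level fault tolerance.
        return numDataNodes >= totalUnits && numRacks >= parityUnits;
    }

    public static void main(String[] args) {
        // RS-6-3 on a 9-node, 3-rack cluster.
        System.out.println(supportsPolicy(6, 3, 9, 3));   // true
        // RS-10-4 on the same cluster: too few DataNodes.
        System.out.println(supportsPolicy(10, 4, 9, 3));  // false
    }
}
```

The real API would return a result object with a verdict and a human-readable reason per policy, suitable for tools and the NN web UI.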