[jira] [Commented] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details
[ https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967162#comment-15967162 ] Akira Ajisaka commented on HDFS-10506: -- Hi [~andrew.wang] and [~jojochuang], I'd like to backport this issue to 2.8.1. A possible incompatibility is that XML files generated by the 2.8.1 XML processor cannot be parsed by the 2.8.0 ReverseXML processor; however, I think that situation is unlikely to happen. What do you think? > OIV's ReverseXML processor cannot reconstruct some snapshot details > --- > > Key: HDFS-10506 > URL: https://issues.apache.org/jira/browse/HDFS-10506 > Project: Hadoop HDFS > Issue Type: Bug > Components: tools >Affects Versions: 2.8.0 >Reporter: Colin P. McCabe >Assignee: Akira Ajisaka > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-10506.01.patch, HDFS-10506.02.patch, > HDFS-10506.03.patch, HDFS-10506.04.patch, HDFS-10506-addendum.patch, > HDFS-10506-branch-2.01.patch > > > OIV's ReverseXML processor cannot reconstruct some snapshot details. > Specifically, should contain a and field, > but does not. should contain a field. OIV also > needs to be changed to emit these fields into the XML (they are currently > missing). -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11631) Block Storage : allow cblock server to be started from hdfs command
[ https://issues.apache.org/jira/browse/HDFS-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967160#comment-15967160 ] Anu Engineer commented on HDFS-11631: -
* Replace "commands" with something like "cli"? hadoop_add_subcommand "cblock" "cblock commands" ==> hadoop_add_subcommand "cblock" "cblock cli"
* In {{CBlockManager.java}}: we can use OzoneConsts.GB: setContainerSizeB(containerSizeGB*1024*1024*1024L); ==> setContainerSizeB(containerSizeGB * OzoneConsts.GB);
* Can we please read this value from a default key? ozoneConf.set(OzoneConfigKeys.OZONE_LOCALSTORAGE_ROOT, "/tmp/cblockSCM");
* This should be removed -- ozoneConf.setBoolean(OzoneConfigKeys.OZONE_ENABLED, true);
> Block Storage : allow cblock server to be started from hdfs command > --- > > Key: HDFS-11631 > URL: https://issues.apache.org/jira/browse/HDFS-11631 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11631-HDFS-7240.001.patch > > > This JIRA adds CBlock main() method, also adds entry to hdfs script, such > that cblock server can be started by hdfs script and run as a daemon process. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
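The OzoneConsts.GB substitution suggested above removes the magic number and also makes the overflow-avoidance explicit. A minimal self-contained sketch (the constant's value is an assumption; only the name OzoneConsts.GB appears in the review):

```java
public class ContainerSizeExample {
    // Stand-in for OzoneConsts.GB. Declaring it as a long keeps the
    // multiplication below in 64-bit arithmetic, avoiding int overflow.
    static final long GB = 1024L * 1024L * 1024L;

    // containerSizeGB * GB instead of containerSizeGB*1024*1024*1024L:
    // the same value, but the intent is readable at the call site.
    static long containerSizeB(int containerSizeGB) {
        return containerSizeGB * GB;
    }

    public static void main(String[] args) {
        System.out.println(containerSizeB(5)); // prints 5368709120
    }
}
```

The named-constant form also makes it harder to accidentally drop the `L` suffix and silently overflow for sizes of 2 GB or more.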
[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon
[ https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967155#comment-15967155 ] Anu Engineer commented on HDFS-11582: - [~vagarychen] Thank you for updating the patch. Two minor comments.
* CBlockTargetServer.java Nit: Typo: {{fllushListenerThread}} ==> flushListenerThread
* The same key string seems to be used for two different keys.
{code}
public static final String OZONE_SCM_CLIENT_PORT_KEY = "ozone.scm.client.port";
public static final int OZONE_SCM_CLIENT_PORT_DEFAULT = 9860;
public static final String OZONE_SCM_DATANODE_PORT_KEY = "ozone.scm.client.port";
public static final int OZONE_SCM_DATANODE_PORT_DEFAULT = 9861;
{code}
> Block Storage : add SCSI target access daemon > - > > Key: HDFS-11582 > URL: https://issues.apache.org/jira/browse/HDFS-11582 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11582-HDFS-7240.001.patch, > HDFS-11582-HDFS-7240.002.patch, HDFS-11582-HDFS-7240.003.patch, > HDFS-11582-HDFS-7240.004.patch, HDFS-11582-HDFS-7240.005.patch, > HDFS-11582-HDFS-7240.006.patch > > > This JIRA adds the daemon process that exposes SCSI target access. More > specifically, with this daemon process running, any OS with SCSI can talk to > this daemon process and treat CBlock volumes as SCSI targets, in this way the > user can mount the volume just like the POSIX manner. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
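The duplicated string in the quoted snippet is the copy-paste bug the reviewer is pointing at: both constants resolve to "ozone.scm.client.port", so the datanode port can never be configured under its own key. A corrected sketch ("ozone.scm.datanode.port" is an assumed name for the fixed key, not confirmed by this comment):

```java
public class ScmConfigKeys {
    public static final String OZONE_SCM_CLIENT_PORT_KEY = "ozone.scm.client.port";
    public static final int OZONE_SCM_CLIENT_PORT_DEFAULT = 9860;

    // Fixed: the datanode key gets its own string rather than repeating the
    // client key, so each default is reachable under a distinct config name.
    public static final String OZONE_SCM_DATANODE_PORT_KEY = "ozone.scm.datanode.port";
    public static final int OZONE_SCM_DATANODE_PORT_DEFAULT = 9861;

    public static void main(String[] args) {
        // With the original typo this would print true and the datanode
        // default 9861 would be shadowed by whichever key was read last.
        System.out.println(OZONE_SCM_CLIENT_PORT_KEY.equals(OZONE_SCM_DATANODE_PORT_KEY)); // prints false
    }
}
```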
[jira] [Commented] (HDFS-11604) Define and parse erasure code codecs, schemas and policies
[ https://issues.apache.org/jira/browse/HDFS-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967134#comment-15967134 ] Hadoop QA commented on HDFS-11604: --
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 21s | Maven dependency ordering for branch |
| +1 | mvninstall | 13m 39s | trunk passed |
| +1 | compile | 19m 37s | trunk passed |
| +1 | checkstyle | 1m 54s | trunk passed |
| +1 | mvnsite | 2m 6s | trunk passed |
| +1 | mvneclipse | 0m 43s | trunk passed |
| +1 | findbugs | 3m 25s | trunk passed |
| +1 | javadoc | 1m 34s | trunk passed |
| 0 | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 28s | the patch passed |
| +1 | compile | 13m 11s | the patch passed |
| +1 | javac | 13m 11s | the patch passed |
| -0 | checkstyle | 1m 54s | root: The patch generated 25 new + 74 unchanged - 0 fixed = 99 total (was 74) |
| +1 | mvnsite | 2m 0s | the patch passed |
| +1 | mvneclipse | 0m 41s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 3m 36s | the patch passed |
| +1 | javadoc | 1m 33s | the patch passed |
| +1 | unit | 7m 26s | hadoop-common in the patch passed. |
| -1 | unit | 68m 55s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 146m 27s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11604 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863190/HDFS-11604-v1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 8baf71e76d57 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0cab572 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle |
[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager
[ https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967102#comment-15967102 ] Weiwei Yang commented on HDFS-11569: Thanks [~anu]! > Ozone: Implement listKey function for KeyManager > > > Key: HDFS-11569 > URL: https://issues.apache.org/jira/browse/HDFS-11569 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: HDFS-7240 > > Attachments: HDFS-11569-HDFS-7240.001.patch, > HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, > HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, > HDFS-11569-HDFS-7240.006.patch, HDFS-11569-HDFS-7240.007.patch > > > List keys by prefix from a container. This will need to support pagination > for the purpose of small object support. So the listKey function returns > something like ListKeyResult, client can iterate the object to get pagination > results. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11649) Ozone: SCM: CLI: Add shell code placeholder classes
[ https://issues.apache.org/jira/browse/HDFS-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967093#comment-15967093 ] Weiwei Yang commented on HDFS-11649: Hi [~vagarychen], thanks for starting work on this, it is a good start. I did a quick look and I have the following comments/questions:
1) The class hierarchy. With respect to the [design doc|https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf], I don't think ScmCLI should be the first-layer class; how about
{noformat}
OzoneCLI extends Configured implements Tool
KsmCLI extends OzoneCLI
ScmCLI extends OzoneCLI
ScmAdminCLI extends ScmCLI
{noformat}
Each CLI implementation is supposed to be initialized with a client that talks to KSM or SCM (like you implemented in {{SCMCLI}}).
2) Suggest adding a check in each CLI implementation to make sure the caller has the privilege to run the command. As [~anu] mentioned in [this comment|https://issues.apache.org/jira/browse/HDFS-11470?focusedCommentId=15950247&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15950247], for now we limit these commands to the admin user "hdfs"; can we get this addressed?
3) We need some common abstractions to handle common arguments, such as {{--help}} and {{--usage}}. When the user supplies a wrong number of arguments or an incorrect argument format, use the common handler to print the USAGE; when the user passes {{--help}}, use the common handler to print the help message.
4) Return code. Each handler should define return codes for the particular sorts of failures it handles, instead of simply returning 0 or 1. Right now in your patch, {{SCMCli}} seems to return 0 on success and 1 on failure.
5) We need to be able to set the output stream for CLIs; that is useful in unit tests and also gives us the flexibility to redirect output to local files.
6) Minor: naming convention. Should we use "CLI" in all capitals in class names?
Can we rename class {{Handler}} to something more specific? Such as {{OzoneCommandHandler}} ? Thanks Weiwei > Ozone: SCM: CLI: Add shell code placeholder classes > > > Key: HDFS-11649 > URL: https://issues.apache.org/jira/browse/HDFS-11649 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11649-HDFS-7240.001.patch, > HDFS-11649-HDFS-7240.002.patch > > > HDFS-11470 has outlined how the SCM CLI would look like. Based on the design, > this JIRA adds the basic placeholder classes for all commands to be filled in. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
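The class hierarchy proposed in point 1 can be sketched as follows. To keep the sketch self-contained, Configured and Tool are local stand-ins for org.apache.hadoop.conf.Configured and org.apache.hadoop.util.Tool, and all class bodies are hypothetical:

```java
// Local stand-ins for the Hadoop base types named in the comment.
class Configured {}
interface Tool { int run(String[] args) throws Exception; }

// Shared base: common argument handling (--help/--usage), a settable
// output stream, and return-code conventions would live here.
abstract class OzoneCLI extends Configured implements Tool {
    protected int printUsage() {
        System.err.println("Usage: ...");
        return 1; // non-zero for usage errors
    }
}

class KsmCLI extends OzoneCLI {
    @Override public int run(String[] args) { return 0; } // talks to KSM
}

class ScmCLI extends OzoneCLI {
    @Override public int run(String[] args) { return 0; } // talks to SCM
}

// Admin-only commands layer on top of the SCM CLI; a privilege check
// (admin user only, per point 2) would gate these.
class ScmAdminCLI extends ScmCLI {}

public class CliHierarchyExample {
    public static void main(String[] args) throws Exception {
        Tool t = new ScmAdminCLI();
        System.out.println(t instanceof ScmCLI); // prints true
        System.out.println(t.run(new String[0])); // prints 0
    }
}
```

The point of the extra OzoneCLI layer is that KSM- and SCM-facing tools share one place for usage printing, output redirection, and exit codes instead of duplicating them per tool.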
[jira] [Commented] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967045#comment-15967045 ] Hadoop QA commented on HDFS-11652: --
| (/) *+1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 | mvninstall | 12m 34s | trunk passed |
| +1 | compile | 15m 21s | trunk passed |
| +1 | checkstyle | 1m 45s | trunk passed |
| +1 | mvnsite | 1m 48s | trunk passed |
| +1 | mvneclipse | 0m 42s | trunk passed |
| +1 | findbugs | 2m 56s | trunk passed |
| +1 | javadoc | 1m 21s | trunk passed |
| 0 | mvndep | 0m 11s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 13m 0s | the patch passed |
| +1 | javac | 13m 0s | the patch passed |
| -0 | checkstyle | 1m 44s | root: The patch generated 5 new + 7 unchanged - 1 fixed = 12 total (was 8) |
| +1 | mvnsite | 1m 37s | the patch passed |
| +1 | mvneclipse | 0m 42s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 1s | the patch passed |
| +1 | javadoc | 1m 13s | the patch passed |
| +1 | unit | 7m 16s | hadoop-common in the patch passed. |
| +1 | unit | 1m 22s | hadoop-hdfs-client in the patch passed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 69m 49s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11652 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863187/HDFS-11652.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 2f789eb4a895 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0cab572 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19073/artifact/patchprocess/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19073/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19073/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated. >
[jira] [Commented] (HDFS-11615) FSNamesystemLock metrics can be inaccurate due to millisecond precision
[ https://issues.apache.org/jira/browse/HDFS-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967018#comment-15967018 ] Hadoop QA commented on HDFS-11615: --
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 13m 11s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 13m 44s | trunk passed |
| +1 | compile | 0m 48s | trunk passed |
| +1 | checkstyle | 0m 36s | trunk passed |
| +1 | mvnsite | 0m 54s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 1m 46s | trunk passed |
| +1 | javadoc | 0m 47s | trunk passed |
| +1 | mvninstall | 1m 1s | the patch passed |
| +1 | compile | 1m 1s | the patch passed |
| +1 | javac | 1m 1s | the patch passed |
| +1 | checkstyle | 0m 42s | the patch passed |
| +1 | mvnsite | 1m 7s | the patch passed |
| +1 | mvneclipse | 0m 14s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 2m 13s | the patch passed |
| +1 | javadoc | 0m 43s | the patch passed |
| -1 | unit | 74m 53s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 115m 55s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:612578f |
| JIRA Issue | HDFS-11615 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863174/HDFS-11615.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 8779bbd68de0 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0cab572 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19072/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19072/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19072/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> FSNamesystemLock metrics can be inaccurate due to millisecond precision
> ---
>
> Key:
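For context on the HDFS-11615 title: lock-hold metrics taken from a millisecond-granularity clock round every short hold down to 0 ms, so many brief acquisitions vanish from the totals. A minimal illustration of the precision issue (not the actual FSNamesystemLock code) comparing per-hold millisecond accounting against nanosecond accounting converted only at report time:

```java
import java.util.concurrent.TimeUnit;

public class LockTimingPrecision {
    public static void main(String[] args) {
        long totalNanos = 0;
        long totalMillisRounded = 0;
        final long holdNanos = 400_000; // simulate a 0.4 ms lock hold
        for (int i = 0; i < 1000; i++) {
            // Accumulate in nanoseconds: sub-millisecond holds are preserved.
            totalNanos += holdNanos;
            // Round each hold to milliseconds first: 0.4 ms -> 0 ms every time.
            totalMillisRounded += TimeUnit.NANOSECONDS.toMillis(holdNanos);
        }
        System.out.println(TimeUnit.NANOSECONDS.toMillis(totalNanos)); // prints 400
        System.out.println(totalMillisRounded);                        // prints 0
    }
}
```

One thousand 0.4 ms holds are 400 ms of real lock time, yet the per-hold millisecond sum reports zero; measuring with System.nanoTime() and converting only when reporting avoids this class of error.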
[jira] [Updated] (HDFS-11604) Define and parse erasure code codecs, schemas and policies
[ https://issues.apache.org/jira/browse/HDFS-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Zeng updated HDFS-11604: Attachment: HDFS-11604-v1.patch > Define and parse erasure code codecs, schemas and policies > -- > > Key: HDFS-11604 > URL: https://issues.apache.org/jira/browse/HDFS-11604 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Kai Zheng >Assignee: Lin Zeng > Fix For: 3.0.0-alpha3 > > Attachments: ec-config-sample.xml, ec-policy-config-sample-v2.xml, > HDFS-11604-v1.patch > > > According to recent discussions with [~andrew.wang] in HDFS-7337, it would be > good to allow users to define their own erasure code codecs, > schemas and policies via an XML file. The XML file can be passed to a CLI cmd > to parse and send to the NameNode to persist and maintain. > Open this task to define the XML format, provide a default sample file to > put in the configuration folder for users' reference, and implement the > necessary parser utility. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
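As a rough illustration of what such a user-defined policy file might look like, here is a hypothetical sample; every element name below is an assumption for illustration only, not the format this JIRA ultimately defines (the attached ec-policy-config-sample-v2.xml is the authoritative sample):

```xml
<?xml version="1.0"?>
<!-- Hypothetical EC policy file: schemas name a codec and its data/parity
     split; policies bind a schema to a striping cell size. -->
<erasurecodingpolicies>
  <schemas>
    <schema id="rs-10-4">
      <codec>rs</codec>
      <numDataUnits>10</numDataUnits>
      <numParityUnits>4</numParityUnits>
    </schema>
  </schemas>
  <policies>
    <policy>
      <schema>rs-10-4</schema>
      <cellsize>1048576</cellsize>
    </policy>
  </policies>
</erasurecodingpolicies>
```

A CLI command would parse a file of this shape and ship the resulting policies to the NameNode, as the description outlines.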
[jira] [Updated] (HDFS-11604) Define and parse erasure code codecs, schemas and policies
[ https://issues.apache.org/jira/browse/HDFS-11604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lin Zeng updated HDFS-11604: Attachment: (was: HDFS-11664-v1.patch) > Define and parse erasure code codecs, schemas and policies > -- > > Key: HDFS-11604 > URL: https://issues.apache.org/jira/browse/HDFS-11604 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Reporter: Kai Zheng >Assignee: Lin Zeng > Fix For: 3.0.0-alpha3 > > Attachments: ec-config-sample.xml, ec-policy-config-sample-v2.xml > > > According to recent discussions with [~andrew.wang] in HDFS-7337, it would be > good to allow users to define their own erasure code codecs, > schemas and policies via an XML file. The XML file can be passed to a CLI cmd > to parse and send to the NameNode to persist and maintain. > Open this task to define the XML format, provide a default sample file to > put in the configuration folder for users' reference, and implement the > necessary parser utility. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967005#comment-15967005 ] SammiChen commented on HDFS-10996: -- Thanks [~andrew.wang] for the review! > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, > HDFS-10996-v3.patch, HDFS-10996-v4.patch, HDFS-10996-v5.patch, > HDFS-10996-v6.patch > > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11652: --- Attachment: HDFS-11652.002.patch A few mistakes crept in; attaching a 002 patch. > Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals > --- > > Key: HDFS-11652 > URL: https://issues.apache.org/jira/browse/HDFS-11652 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Minor > Attachments: HDFS-11652.001.patch, HDFS-11652.002.patch > > > Some small cleanups to these methods. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11652: --- Attachment: HDFS-11652.001.patch > Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals > --- > > Key: HDFS-11652 > URL: https://issues.apache.org/jira/browse/HDFS-11652 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Minor > Attachments: HDFS-11652.001.patch > > > Some small cleanups to these methods. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966986#comment-15966986 ] Andrew Wang commented on HDFS-11652: [~jojochuang] you noticed the ID issue over on another JIRA, mind reviewing this one? > Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals > --- > > Key: HDFS-11652 > URL: https://issues.apache.org/jira/browse/HDFS-11652 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Minor > Attachments: HDFS-11652.001.patch > > > Some small cleanups to these methods. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11652: --- Attachment: (was: HDFS-11652.001.patch) > Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals > --- > > Key: HDFS-11652 > URL: https://issues.apache.org/jira/browse/HDFS-11652 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Minor > Attachments: HDFS-11652.001.patch > > > Some small cleanups to these methods. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11652: --- Status: Patch Available (was: Open) > Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals > --- > > Key: HDFS-11652 > URL: https://issues.apache.org/jira/browse/HDFS-11652 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Minor > Attachments: HDFS-11652.001.patch > > > Some small cleanups to these methods. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
[ https://issues.apache.org/jira/browse/HDFS-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11652: --- Attachment: HDFS-11652.001.patch Patch attached. * Switched over to using EqualsBuilder and HashCodeBuilder * Added unit tests * Added "id" field in ECPolicy to toString, equals, hashCode > Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals > --- > > Key: HDFS-11652 > URL: https://issues.apache.org/jira/browse/HDFS-11652 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Minor > Attachments: HDFS-11652.001.patch > > > Some small cleanups to these methods.
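For readers following the thread, the cleanup described above can be sketched with the JDK's java.util.Objects utilities. The actual patch uses commons-lang's EqualsBuilder and HashCodeBuilder; the ECPolicy class below is a simplified, hypothetical stand-in for ErasureCodingPolicy (field names assumed), not the real Hadoop class:

```java
import java.util.Objects;

// Hypothetical, simplified stand-in for ErasureCodingPolicy; the real class
// also carries an ECSchema. The point shown: the "id" field participates in
// toString, equals, and hashCode, per the patch notes above.
class ECPolicy {
    private final String name;
    private final byte id;
    private final int cellSize;

    ECPolicy(String name, byte id, int cellSize) {
        this.name = name;
        this.id = id;
        this.cellSize = cellSize;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof ECPolicy)) return false;
        ECPolicy that = (ECPolicy) o;
        // Compare every significant field, including id.
        return id == that.id
                && cellSize == that.cellSize
                && Objects.equals(name, that.name);
    }

    @Override
    public int hashCode() {
        // Hash over the same fields compared in equals, keeping the contract.
        return Objects.hash(name, id, cellSize);
    }

    @Override
    public String toString() {
        return "ECPolicy [name=" + name + ", id=" + id
                + ", cellSize=" + cellSize + "]";
    }
}
```

The key discipline, whichever builder API is used, is that equals and hashCode cover the same field set, so adding "id" to one requires adding it to both.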
[jira] [Updated] (HDFS-11603) Improve slow mirror/disk warnings in BlockReceiver
[ https://issues.apache.org/jira/browse/HDFS-11603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11603: - Attachment: HDFS-11603-branch-2.01.patch Attaching a branch-2 patch. This needed a slight change to {{BlockReceiver#getVolumeBasePath}} since Replica#getReplicaInfo is not available in branch-2. > Improve slow mirror/disk warnings in BlockReceiver > -- > > Key: HDFS-11603 > URL: https://issues.apache.org/jira/browse/HDFS-11603 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11603.01.patch, HDFS-11603.02.patch, > HDFS-11603.03.patch, HDFS-11603-branch-2.01.patch > > > The slow mirror warnings in the DataNode BlockReceiver should include the > downstream DataNodeIDs. > Similarly, the slow disk warnings should include the volume path.
[jira] [Created] (HDFS-11652) Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals
Andrew Wang created HDFS-11652: -- Summary: Improve ECSchema and ErasureCodingPolicy toString, hashCode, equals Key: HDFS-11652 URL: https://issues.apache.org/jira/browse/HDFS-11652 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Affects Versions: 3.0.0-alpha2 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor Some small cleanups to these methods.
[jira] [Commented] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.
[ https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966967#comment-15966967 ] Vinitha Reddy Gankidi commented on HDFS-11634: -- It's a good improvement. One minor nit: {{index}} is initialized to zero twice. [~zhz] raised a good point. It seems like we don't need the iterators for the skipped storages. > Optimize BlockIterator when iterating starts in the middle. > > > Key: HDFS-11634 > URL: https://issues.apache.org/jira/browse/HDFS-11634 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch, > HDFS-11634.003.patch, HDFS-11634.004.patch > > > {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a > randomly selected {{startBlock}} index. It creates an iterator which points > to the first block and then skips all blocks until {{startBlock}}. It is > inefficient when DN has multiple storages. Instead of skipping blocks one by > one we can skip entire storages. Should be more efficient on average.
[jira] [Commented] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966960#comment-15966960 ] Hadoop QA commented on HDFS-10999: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 2s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 15 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 36s{color} | {color:red} hadoop-hdfs-project generated 37 new + 55 unchanged - 0 fixed = 92 total (was 55) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 1034 unchanged - 17 fixed = 1038 total (was 1051) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.tools.TestJMXGet | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HDFS-10999 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863155/HDFS-10999.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 8196a62c1f50 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0cab572 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/19070/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project.txt | | checkstyle |
[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon
[ https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966930#comment-15966930 ] Hadoop QA commented on HDFS-11582: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 52s{color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 13s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | 
{color:green} 1m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 13s{color} | {color:green} The patch generated 0 new + 104 unchanged - 1 fixed = 104 total (was 105) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 43s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}135m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11582 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863147/HDFS-11582-HDFS-7240.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle shellcheck shelldocs | | uname | Linux
[jira] [Commented] (HDFS-11644) DFSStripedOutputStream should not implement Syncable
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966925#comment-15966925 ] Andrew Wang commented on HDFS-11644: Good idea on isSyncable. I'm still peeved that checking {{instanceof Syncable}} doesn't work, but a runtime check is okay too. There are compatibility implications for changing existing implementations to newly throw a runtime exception, so I think the right answer is to have them all fall back to flush or a no-op. Since we're using Java 8, I think we can finally compatibly add methods to an existing interface! Exciting. https://dzone.com/articles/interface-default-methods-java > DFSStripedOutputStream should not implement Syncable > > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-must-do > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. > However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable. --
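The isSyncable-plus-default-method idea discussed above can be sketched as follows. All names here are hypothetical; this is not the actual org.apache.hadoop.fs.Syncable API, just an illustration of how a Java 8 default method adds a capability probe compatibly and lets callers fall back to flush instead of catching a runtime exception:

```java
// Hypothetical Syncable-style interface. The default method is the Java 8
// feature cited in the comment: existing implementors inherit it with no
// source change, so adding it is binary- and source-compatible.
interface SyncableStream {
    void flush();
    void hsync();
    default boolean isSyncable() { return true; }
}

// A stream that genuinely supports hsync keeps the inherited default.
class ReplicatedStream implements SyncableStream {
    int syncs = 0;
    public void flush() {}
    public void hsync() { syncs++; }
}

// A striped-style stream advertises that it cannot sync, instead of
// throwing from hsync at the caller's expense.
class StripedStream implements SyncableStream {
    int flushes = 0;
    public void flush() { flushes++; }
    public void hsync() {
        throw new UnsupportedOperationException("striped streams cannot hsync");
    }
    @Override public boolean isSyncable() { return false; }
}

class SyncHelper {
    // Caller-side pattern replacing "instanceof Syncable": probe at runtime
    // and fall back to flush() rather than propagating an exception.
    static void hsyncOrFlush(SyncableStream s) {
        if (s.isSyncable()) {
            s.hsync();
        } else {
            s.flush();
        }
    }
}
```

The design trade-off matches the thread: the capability check moves from the type system (instanceof) to a runtime query, which is weaker but works when an unsupporting subclass inherits from a supporting parent.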
[jira] [Commented] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.
[ https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966916#comment-15966916 ] Zhe Zhang commented on HDFS-11634: -- Thanks [~shv] for the patch and [~shahrs87] for the review. The idea looks good to me, it's a good improvement to skip entire storages. I only have one concern/question:
{code}
for (DatanodeStorageInfo e : storages) {
  iterators.add(e.getBlockIterator());
  int numBlocks = e.numBlocks();
  sumBlocks += numBlocks;
  if (sumBlocks <= startBlock) {
    index++;
    s -= numBlocks;
  }
}
{code}
If a storage is skipped, should we still add it to the {{iterators}}? > Optimize BlockIterator when iterating starts in the middle. > > > Key: HDFS-11634 > URL: https://issues.apache.org/jira/browse/HDFS-11634 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch, > HDFS-11634.003.patch, HDFS-11634.004.patch > > > {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a > randomly selected {{startBlock}} index. It creates an iterator which points > to the first block and then skips all blocks until {{startBlock}}. It is > inefficient when DN has multiple storages. Instead of skipping blocks one by > one we can skip entire storages. Should be more efficient on average.
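The storage-skipping optimization under review, including the reviewer's point about not creating iterators for skipped storages, can be modeled with plain lists. This is an illustrative toy, assuming per-storage block lists as a stand-in for DatanodeStorageInfo, and does not mirror the patch's exact code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model: given per-storage block lists and a startBlock offset, skip
// whole storages by their block counts (an O(1) check each) instead of
// advancing the iterator one block at a time. Only storages that can still
// contribute blocks get an iterator, which is the review suggestion above.
class SkippingIterator {
    static List<Integer> blocksFrom(List<List<Integer>> storages, int startBlock) {
        List<Integer> result = new ArrayList<>();
        int sumBlocks = 0;
        int offset = startBlock;
        for (List<Integer> storage : storages) {
            int numBlocks = storage.size();
            if (sumBlocks + numBlocks <= startBlock) {
                // Entire storage lies before startBlock: skip it without
                // creating an iterator for it.
                sumBlocks += numBlocks;
                offset -= numBlocks;
                continue;
            }
            Iterator<Integer> it = storage.iterator();
            // Only the first contributing storage needs per-block skipping.
            for (int i = 0; i < offset; i++) {
                it.next();
            }
            offset = 0;
            while (it.hasNext()) {
                result.add(it.next());
            }
            sumBlocks += numBlocks;
        }
        return result;
    }
}
```

With many storages, the per-block skipping cost drops from O(startBlock) to at most one storage's worth of blocks, which is the "more efficient on average" claim in the issue description.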
[jira] [Updated] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
[ https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11630: - Fix Version/s: 2.9.0 Thanks [~hanishakoneru]. I've committed the branch-2 patch. > TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds > --- > > Key: HDFS-11630 > URL: https://issues.apache.org/jira/browse/HDFS-11630 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch, > HDFS-11630-branch-2.001.patch > > > TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly > fails intermittently in Jenkins builds. > We need to wait for disk checker timeout to callback the > FutureCallBack#onFailure.
[jira] [Updated] (HDFS-11615) FSNamesystemLock metrics can be inaccurate due to millisecond precision
[ https://issues.apache.org/jira/browse/HDFS-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HDFS-11615: --- Attachment: HDFS-11615.001.patch Ah, thanks [~zhz], didn't realize there was precedent for this already. Sounds good then! Uploading v001 patch which adds Nanos to the name and marks all of the variables in the class with their unit. > FSNamesystemLock metrics can be inaccurate due to millisecond precision > --- > > Key: HDFS-11615 > URL: https://issues.apache.org/jira/browse/HDFS-11615 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-11615.000.patch, HDFS-11615.001.patch > > > Currently the {{FSNamesystemLock}} metrics created in HDFS-10872 track the > lock hold time using {{Timer.monotonicNow()}}, which has millisecond-level > precision. However, many of these operations hold the lock for less than a > millisecond, making these metrics inaccurate. We should instead use > {{System.nanoTime()}} for higher accuracy.
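The precision problem this issue fixes can be shown with a small, self-contained illustration (the numbers are made up and this is not the actual FSNamesystemLock code): if each lock hold is sub-millisecond, a millisecond clock truncates every sample to zero, while accumulating in nanoseconds and converting once preserves the total.

```java
import java.util.concurrent.TimeUnit;

// Toy demonstration of why millisecond-precision timing underreports many
// short lock holds, motivating the switch to System.nanoTime().
class LockMetricsPrecision {
    // Each hold is measured with a millisecond clock: sub-ms holds become 0.
    static long totalMillisFromMsClock(long holdNanos, int ops) {
        long total = 0;
        for (int i = 0; i < ops; i++) {
            total += TimeUnit.NANOSECONDS.toMillis(holdNanos); // truncates to 0 for sub-ms holds
        }
        return total;
    }

    // Each hold is measured in nanoseconds; conversion happens once at the end.
    static long totalMillisFromNsClock(long holdNanos, int ops) {
        long totalNanos = 0;
        for (int i = 0; i < ops; i++) {
            totalNanos += holdNanos;
        }
        return TimeUnit.NANOSECONDS.toMillis(totalNanos);
    }
}
```

For 1000 holds of 400 microseconds each, the millisecond clock reports 0 ms of total hold time while the nanosecond accumulation reports 400 ms, which is exactly the inaccuracy described in the issue.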
[jira] [Commented] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike
[ https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966901#comment-15966901 ] Hadoop QA commented on HDFS-11384: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HDFS-11384 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863142/HDFS-11384.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6338c0ca2e43 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b053fdc | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19067/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19067/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19067/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add option for balancer to disperse getBlocks calls to avoid NameNode's > rpc.CallQueueLength spike > - > > Key: HDFS-11384 > URL: https://issues.apache.org/jira/browse/HDFS-11384 > Project: Hadoop HDFS > Issue Type: Improvement > Components:
[jira] [Updated] (HDFS-11560) Expose slow disks via NameNode JMX
[ https://issues.apache.org/jira/browse/HDFS-11560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-11560: - Fix Version/s: 2.9.0 Component/s: namenode +1 for the branch-2 patch, I've committed this. Thanks [~hanishakoneru]. > Expose slow disks via NameNode JMX > -- > > Key: HDFS-11560 > URL: https://issues.apache.org/jira/browse/HDFS-11560 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: HDFS-11560.001.patch, HDFS-11560.002.patch, > HDFS-11560-branch-2.001.patch > > > Each Datanode exposes its slow disks through Datanode JMX. We can expose the > overall slow disks (among all datanodes) via the NameNode JMX.
[jira] [Commented] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.
[ https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966878#comment-15966878 ] Hadoop QA commented on HDFS-11634: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 39s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m 31s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}116m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HDFS-11634 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863143/HDFS-11634.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f7bf345c46a3 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b053fdc | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19066/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19066/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Optimize BlockIterator when interating starts in the middle. > > > Key: HDFS-11634 > URL: https://issues.apache.org/jira/browse/HDFS-11634 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch, > HDFS-11634.003.patch, HDFS-11634.004.patch > > > {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a > randomly selected {{startBlock}}
[jira] [Commented] (HDFS-11644) DFSStripedOutputStream should not implement Syncable
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966871#comment-15966871 ] Manoj Govindassamy commented on HDFS-11644: --- [~andrew.wang], [~ste...@apache.org], IMHO, instead of assuming all Syncable implementations support hflush()/hsync(), we can have the interface additionally expose isSyncable() or similar, which can be queried to find whether the stream supports hflush()/hsync() or not. And, maybe we should have a uniform implementation for hsync/hflush that either falls back or throws an exception. > DFSStripedOutputStream should not implement Syncable > > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-must-do > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. > However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable.
[jira] [Comment Edited] (HDFS-11569) Ozone: Implement listKey function for KeyManager
[ https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966764#comment-15966764 ] Anu Engineer edited comment on HDFS-11569 at 4/12/17 11:28 PM: --- [~cheersyang] Thanks for taking care of this issue. I have committed this to the feature branch. I have changed the IOException to StorageContainerException while committing, so we don't need a follow up JIRA. was (Author: anu): [~cheersyang] Thanks for taking care of this issue. I have committed this to the feature branch. I have changed the IOException to StorageContainerException while committing, so we don't need a follow up a JIRA. > Ozone: Implement listKey function for KeyManager > > > Key: HDFS-11569 > URL: https://issues.apache.org/jira/browse/HDFS-11569 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: HDFS-7240 > > Attachments: HDFS-11569-HDFS-7240.001.patch, > HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, > HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, > HDFS-11569-HDFS-7240.006.patch, HDFS-11569-HDFS-7240.007.patch > > > List keys by prefix from a container. This will need to support pagination > for the purpose of small object support. So the listKey function returns > something like ListKeyResult, client can iterate the object to get pagination > results. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11615) FSNamesystemLock metrics can be inaccurate due to millisecond precision
[ https://issues.apache.org/jira/browse/HDFS-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966800#comment-15966800 ] Zhe Zhang commented on HDFS-11615: -- bq. "XxxNanosAvgTime" is reasonable but I'm hesitant to emit a metric called "XxxNanosNumOps"... DataNode already has multiple metrics named with that convention. E.g. {{FlushNanosAvgTime}}. So I guess that makes it less awkward :) > FSNamesystemLock metrics can be inaccurate due to millisecond precision > --- > > Key: HDFS-11615 > URL: https://issues.apache.org/jira/browse/HDFS-11615 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-11615.000.patch > > > Currently the {{FSNamesystemLock}} metrics created in HDFS-10872 track the > lock hold time using {{Timer.monotonicNow()}}, which has millisecond-level > precision. However, many of these operations hold the lock for less than a > millisecond, making these metrics inaccurate. We should instead use > {{System.nanoTime()}} for higher accuracy. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface
[ https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11470: Attachment: storage-container-manager-cli-003.pdf [~xyao] Thanks for the comments. The v3 version of the document addresses all issues. Please see my comments below also. bq. In section 1.2 for put key, can we change the input data (with -i instead of -o)? Thanks for catching that, fixed. bq. Can we move section 5 (Pipeline) before section 2 (Container), which has a dependency on the Pipeline? Fixed. bq. We don't want to maintain an empty pool without any nodes. In section 3.1, can we add a required parameter for -nodescreate pool while keeping the separate adding/removing command? When the number of nodes in a pool reaches 0, the pool will be removed as well. Should we do that? That opens up for some inadvertent mistakes from the users. For example, I wanted to move a set of nodes from pool one to pool two and accidentally moved all nodes. We will then automatically delete pool one. I think being explicit on removal leads to a less error-prone user interface. bq. In section, can we add an optional -metric parameter to filter only the metrics of interest? Fixed. > Ozone: SCM: CLI: Design SCM Command line interface > -- > > Key: HDFS-11470 > URL: https://issues.apache.org/jira/browse/HDFS-11470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Anu Engineer > Attachments: storage-container-manager-cli-003.pdf, > storage-container-manager-cli-v001.pdf, storage-container-manager-cli-v002.pdf > > > This jira describes the SCM CLI. Since CLI will have lots of commands, we > will file other JIRAs for specific commands.
[jira] [Commented] (HDFS-11402) HDFS Snapshots should capture point-in-time copies of OPEN files
[ https://issues.apache.org/jira/browse/HDFS-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966788#comment-15966788 ] Manoj Govindassamy commented on HDFS-11402: --- Above unit test failure is not related to the patch. Checkstyle issue is regarding the method definition being longer than 150 lines. > HDFS Snapshots should capture point-in-time copies of OPEN files > > > Key: HDFS-11402 > URL: https://issues.apache.org/jira/browse/HDFS-11402 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-11402.01.patch, HDFS-11402.02.patch, > HDFS-11402.03.patch, HDFS-11402.04.patch > > > *Problem:* > 1. When there are files being written and when HDFS Snapshots are taken in > parallel, Snapshots do capture all these files, but these being written files > in Snapshots do not have the point-in-time file length captured. That is, > these open files are not frozen in HDFS Snapshots. These open files > grow/shrink in length, just like the original file, even after the snapshot > time. > 2. At the time of File close or any other meta data modification operation on > these files, HDFS reconciles the file length and records the modification in > the last taken Snapshot. All the previously taken Snapshots continue to have > those open Files with no modification recorded. So, all those previous > snapshots end up using the final modification record in the last snapshot. > Thus after the file close, file lengths in all those snapshots will end up > same. > Assume File1 is opened for write and a total of 1MB written to it. While the > writes are happening, snapshots are taken in parallel. 
> {noformat} > |---Time---T1---T2-T3T4--> > |---Snap1--Snap2-Snap3---> > |---File1.open---write-write---close-> > {noformat} > Then at time, > T2: > Snap1.File1.length = 0 > T3: > Snap1.File1.length = 0 > Snap2.File1.length = 0 > > T4: > Snap1.File1.length = 1MB > Snap2.File1.length = 1MB > Snap3.File1.length = 1MB > *Proposal* > 1. At the time of taking Snapshot, {{SnapshotManager#createSnapshot}} can > optionally request {{DirectorySnapshottableFeature#addSnapshot}} to freeze > open files. > 2. {{DirectorySnapshottableFeature#addSnapshot}} can consult with > {{LeaseManager}} and get a list of INodesInPath for all open files under the > snapshot dir. > 3. {{DirectorySnapshottableFeature#addSnapshot}}, after the Snapshot creation, > Diff creation and updating modification time, can invoke > {{INodeFile#recordModification}} for each of the open files. This way, the > Snapshot just taken will have a {{FileDiff}} with {{fileSize}} captured for > each of the open files. > 4. The above model follows the current Snapshot and Diff protocols and doesn't > introduce any new disk formats. So, I don't think we will be needing any new > FSImage Loader/Saver changes for Snapshots. > 5. One of the design goals of HDFS Snapshot was the ability to take any number of > snapshots in O(1) time. Though LeaseManager has all the open files with > leases in an in-memory map, an iteration is still needed to prune the needed open > files and then run recordModification on each of them. So, it will not be a > strict O(1) with the above proposal. But, it's going to be only a marginal > increase, as the new order will be O(open_files_under_snap_dir). In order to > avoid HDFS Snapshots change in behavior for open files and avoid change in > time complexity, this improvement can be made under a new config > {{"dfs.namenode.snapshot.freeze.openfiles"}} which by default can be > {{false}}. 
[jira] [Updated] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
[ https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11630: -- Attachment: HDFS-11630-branch-2.001.patch > TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds > --- > > Key: HDFS-11630 > URL: https://issues.apache.org/jira/browse/HDFS-11630 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch, > HDFS-11630-branch-2.001.patch > > > TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly > fails intermittently in Jenkins builds. > We need to wait for disk checker timeout to callback the > FutureCallBack#onFailure. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-10999) Introduce separate stats for Replicated and Erasure Coded Blocks apart from the current Aggregated stats
[ https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-10999: -- Attachment: HDFS-10999.02.patch Thanks for the review [~tasanuma0829]. Separated the MBean registrations as you suggested, and updated a test to verify the same. Will take the JMX attribute renaming and other variable renames in the next revisions along with the other review comments. > Introduce separate stats for Replicated and Erasure Coded Blocks apart from > the current Aggregated stats > > > Key: HDFS-10999 > URL: https://issues.apache.org/jira/browse/HDFS-10999 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-nice-to-have, supportability > Attachments: HDFS-10999.01.patch, HDFS-10999.02.patch > > > Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic > term "low redundancy" to the old-fashioned "under replicated". But this term > is still being used in messages in several places, such as web ui, dfsadmin > and fsck. We should probably change them to avoid confusion. > File this jira to discuss it.
[jira] [Commented] (HDFS-11630) TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds
[ https://issues.apache.org/jira/browse/HDFS-11630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966775#comment-15966775 ] Hanisha Koneru commented on HDFS-11630: --- Thank you [~arpitagarwal] for committing the patch. > TestThrottledAsyncCheckerTimeout fails intermittently in Jenkins builds > --- > > Key: HDFS-11630 > URL: https://issues.apache.org/jira/browse/HDFS-11630 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11630.001.patch, HDFS-11630.002.patch > > > TestThrottledAsyncCheckerTimeout#testDiskCheckTimeoutInvokesOneCallbackOnly > fails intermittently in Jenkins builds. > We need to wait for disk checker timeout to callback the > FutureCallBack#onFailure. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11560) Expose slow disks via NameNode JMX
[ https://issues.apache.org/jira/browse/HDFS-11560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-11560: -- Attachment: HDFS-11560-branch-2.001.patch > Expose slow disks via NameNode JMX > -- > > Key: HDFS-11560 > URL: https://issues.apache.org/jira/browse/HDFS-11560 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11560.001.patch, HDFS-11560.002.patch, > HDFS-11560-branch-2.001.patch > > > Each Datanode exposes its slow disks through Datanode JMX. We can expose the > overall slow disks (among all datanodes) via the NameNode JMX. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11569) Ozone: Implement listKey function for KeyManager
[ https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11569: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~cheersyang] Thanks for taking care of this issue. I have committed this to the feature branch. I have changed the IOException to StorageContainerException while committing, so we don't need a follow-up JIRA. > Ozone: Implement listKey function for KeyManager > > > Key: HDFS-11569 > URL: https://issues.apache.org/jira/browse/HDFS-11569 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: HDFS-7240 > > Attachments: HDFS-11569-HDFS-7240.001.patch, > HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, > HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, > HDFS-11569-HDFS-7240.006.patch, HDFS-11569-HDFS-7240.007.patch > > > List keys by prefix from a container. This will need to support pagination > for the purpose of small object support. So the listKey function returns > something like ListKeyResult; the client can iterate the object to get > pagination results.
[jira] [Commented] (HDFS-11649) Ozone: SCM: CLI: Add shell code placeholder classes
[ https://issues.apache.org/jira/browse/HDFS-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966763#comment-15966763 ] Hadoop QA commented on HDFS-11649: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 32s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11649 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863137/HDFS-11649-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 730740a35c41 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / cedacf1 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19065/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19065/testReport/ | | modules | C:
[jira] [Commented] (HDFS-11615) FSNamesystemLock metrics can be inaccurate due to millisecond precision
[ https://issues.apache.org/jira/browse/HDFS-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966751#comment-15966751 ] Erik Krogen commented on HDFS-11615: I would have suggested just adding an optional time unit parameter which would allow you to specify that a rate is e.g. measured in nanos to be output as "AvgTimeNanos"/"NumOps" but leave things as "AvgTime"/"NumOps" by default if no unit is specified. I see your point about automated tooling, though. "XxxNanosAvgTime" is reasonable but I'm hesitant to emit a metric called "XxxNanosNumOps"... > FSNamesystemLock metrics can be inaccurate due to millisecond precision > --- > > Key: HDFS-11615 > URL: https://issues.apache.org/jira/browse/HDFS-11615 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-11615.000.patch > > > Currently the {{FSNamesystemLock}} metrics created in HDFS-10872 track the > lock hold time using {{Timer.monotonicNow()}}, which has millisecond-level > precision. However, many of these operations hold the lock for less than a > millisecond, making these metrics inaccurate. We should instead use > {{System.nanoTime()}} for higher accuracy. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
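[Editor's note] The precision gap discussed in HDFS-11615 is easy to demonstrate outside HDFS. The standalone sketch below is not HDFS code; it only shows why a sub-millisecond lock hold registers as 0 under a millisecond clock while System.nanoTime() still measures it.

```java
public class TimerPrecision {

    // Stand-in for a short NameNode operation that holds the lock for
    // far less than one millisecond.
    static long shortCriticalSection() {
        long x = 0;
        for (int i = 0; i < 10_000; i++) x += i;   // a few microseconds of work
        return x;
    }

    public static void main(String[] args) {
        long startMs = System.currentTimeMillis();  // millisecond precision
        long startNs = System.nanoTime();           // nanosecond precision
        long sink = shortCriticalSection();
        long heldNs = System.nanoTime() - startNs;
        long heldMs = System.currentTimeMillis() - startMs;

        // The nanosecond clock always registers a positive hold time; the
        // millisecond clock typically rounds the same interval down to 0,
        // which is exactly what skews the averaged lock-hold metrics.
        if (heldNs <= 0) throw new AssertionError("nanoTime did not advance");
        System.out.println("sink=" + sink + " heldMs=" + heldMs + " heldNs=" + heldNs);
    }
}
```

Aggregating many such zero-millisecond samples into an average is what makes the current FSNamesystemLock metrics inaccurate.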
[jira] [Assigned] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface
[ https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer reassigned HDFS-11470: --- Assignee: Anu Engineer (was: Xiaoyu Yao) > Ozone: SCM: CLI: Design SCM Command line interface > -- > > Key: HDFS-11470 > URL: https://issues.apache.org/jira/browse/HDFS-11470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Anu Engineer > Attachments: storage-container-manager-cli-v001.pdf, > storage-container-manager-cli-v002.pdf > > > This jira describes the SCM CLI. Since CLI will have lots of commands, we > will file other JIRAs for specific commands.
[jira] [Comment Edited] (HDFS-11582) Block Storage : add SCSI target access daemon
[ https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966739#comment-15966739 ] Chen Liang edited comment on HDFS-11582 at 4/12/17 10:18 PM: - Seems the v005 patch has some conflicts with the branch. The conflict was because the config key file in the branch has changed slightly. Update v006 patch to rebase. was (Author: vagarychen): Seems the v005 patch has some conflicts with the branch. The conflict was because the config key file has changed slightly. Update v006 patch to rebase. > Block Storage : add SCSI target access daemon > - > > Key: HDFS-11582 > URL: https://issues.apache.org/jira/browse/HDFS-11582 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11582-HDFS-7240.001.patch, > HDFS-11582-HDFS-7240.002.patch, HDFS-11582-HDFS-7240.003.patch, > HDFS-11582-HDFS-7240.004.patch, HDFS-11582-HDFS-7240.005.patch, > HDFS-11582-HDFS-7240.006.patch > > > This JIRA adds the daemon process that exposes SCSI target access. More > specifically, with this daemon process running, any OS with SCSI can talk to > this daemon process and treat CBlock volumes as SCSI targets, in this way the > user can mount the volume just like the POSIX manner. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11582) Block Storage : add SCSI target access daemon
[ https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11582: -- Attachment: HDFS-11582-HDFS-7240.006.patch Seems the v005 patch has some conflicts with the branch. The conflict was because the config key file has changed slightly. Updated the v006 patch to rebase. > Block Storage : add SCSI target access daemon > - > > Key: HDFS-11582 > URL: https://issues.apache.org/jira/browse/HDFS-11582 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11582-HDFS-7240.001.patch, > HDFS-11582-HDFS-7240.002.patch, HDFS-11582-HDFS-7240.003.patch, > HDFS-11582-HDFS-7240.004.patch, HDFS-11582-HDFS-7240.005.patch, > HDFS-11582-HDFS-7240.006.patch > > > This JIRA adds the daemon process that exposes SCSI target access. More > specifically, with this daemon process running, any OS with SCSI support can > talk to this daemon process and treat CBlock volumes as SCSI targets; in this > way the user can mount the volume in the usual POSIX manner.
[jira] [Commented] (HDFS-11644) DFSStripedOutputStream should not implement Syncable
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966723#comment-15966723 ] Andrew Wang commented on HDFS-11644: Looking at this, it's not pretty. FileSystem returns an FSDataOutputStream, which implements Syncable. Its implementation either does a real hflush, or just calls flush. See: {code:title=FSDataOutputStream} @Override // Syncable public void hflush() throws IOException { if (wrappedStream instanceof Syncable) { ((Syncable)wrappedStream).hflush(); } else { wrappedStream.flush(); } } {code} I don't understand how users can figure out if they're getting a real hflush. FSDataOutputStream implements Syncable, so you can't query with {{instanceof}}. There's currently no public way of querying the wrapped stream either. I think it was a mistake to add {{Syncable}} to FSDataOutputStream, we should have forced users to check with {{instanceof}} and cast it. I don't like changing DFSStripedOutputStream#hflush to simply call flush, since then HDFS users who turn on EC will silently stop getting real hflush/hsync. The current behavior of throwing an exception is safer. [~ste...@apache.org], any thoughts on this? I notice that output streams aren't covered by the FileSystem spec. This also relates to discussions about querying which features are supported by a FS. > DFSStripedOutputStream should not implement Syncable > > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-must-do > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
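[Editor's note] The silent-fallback problem Andrew describes can be reproduced in miniature. The sketch below is illustrative only: `DataStream`, `SyncableLike`, and the stream classes are made-up stand-ins; only the delegation pattern mirrors the FSDataOutputStream code quoted in the comment.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SilentFallback {

    // Stand-in for org.apache.hadoop.fs.Syncable.
    interface SyncableLike {
        void hflush() throws IOException;
    }

    // Stand-in for FSDataOutputStream: it always implements the interface
    // and quietly downgrades hflush() to flush() for non-Syncable streams.
    static class DataStream extends FilterOutputStream implements SyncableLike {
        DataStream(OutputStream wrapped) { super(wrapped); }
        public void hflush() throws IOException {
            if (out instanceof SyncableLike) {
                ((SyncableLike) out).hflush();  // real durability path
            } else {
                out.flush();                    // silent downgrade
            }
        }
    }

    static class RealSyncStream extends ByteArrayOutputStream implements SyncableLike {
        boolean synced = false;
        public void hflush() { synced = true; }
    }

    public static void main(String[] args) {
        RealSyncStream inner = new RealSyncStream();
        DataStream real = new DataStream(inner);
        DataStream fake = new DataStream(new ByteArrayOutputStream());
        try {
            real.hflush();
            fake.hflush();  // succeeds, but no durability was provided
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        // From the caller's side, both wrappers are "Syncable" and both
        // hflush() calls succeed -- yet only one actually synced anything.
        if (!inner.synced) throw new AssertionError();
        System.out.println("both hflush() calls succeeded; only one really synced");
    }
}
```

Since the wrapper itself implements the sync interface, instanceof on it tells the caller nothing about the wrapped stream — which is the crux of the complaint above.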
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966725#comment-15966725 ] Junping Du commented on HDFS-11558: --- No worry. Mingliang. I will keep tracking. :) > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch, > HDFS-11558-branch-2.8.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset
[ https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966719#comment-15966719 ] Junping Du commented on HDFS-11163: --- You are welcome, [~cnauroth]! :) > Mover should move the file blocks to default storage once policy is unset > - > > Key: HDFS-11163 > URL: https://issues.apache.org/jira/browse/HDFS-11163 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, > HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, > HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, > HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, > temp-YARN-6278.HDFS-11163.patch > > > HDFS-9534 added new API in FileSystem to unset the storage policy. Once > policy is unset blocks should move back to the default storage policy. > Currently mover is not moving file blocks which have zero storage ID > {code} > // currently we ignore files with unspecified storage policy > if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) { > return; > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface
[ https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966713#comment-15966713 ] Anu Engineer edited comment on HDFS-11470 at 4/12/17 10:03 PM: --- Just moving this JIRA to patch available state since most developers look at those JIRAs first. I will update the doc based on comments from [~xyao]. I will update this JIRA one more time with the latest design updates and then resolve this. I have also moved this JIRA work item to design since we have now realized that we should bring in the feature as many small independent JIRAs. I have tagged all JIRAs related to the CLI as "Ozone: SCM: CLI: " was (Author: anu): Just moving this JIRA to patch available state since most developers look at those JIRAs first. I will update the doc based on comments from [~xyao]. > Ozone: SCM: CLI: Design SCM Command line interface > -- > > Key: HDFS-11470 > URL: https://issues.apache.org/jira/browse/HDFS-11470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: storage-container-manager-cli-v001.pdf, > storage-container-manager-cli-v002.pdf > > > This jira describes the SCM CLI. Since CLI will have lots of commands, we > will file other JIRAs for specific commands.
[jira] [Commented] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface
[ https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966713#comment-15966713 ] Anu Engineer commented on HDFS-11470: - Just moving this JIRA to patch available state since most developers look at those JIRAs first. I will update the doc based on comments from [~xyao]. > Ozone: SCM: CLI: Design SCM Command line interface > -- > > Key: HDFS-11470 > URL: https://issues.apache.org/jira/browse/HDFS-11470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: storage-container-manager-cli-v001.pdf, > storage-container-manager-cli-v002.pdf > > > This jira describes the SCM CLI. Since CLI will have lots of commands, we > will file other JIRAs for specific commands. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface
[ https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11470: Status: Patch Available (was: Open) > Ozone: SCM: CLI: Design SCM Command line interface > -- > > Key: HDFS-11470 > URL: https://issues.apache.org/jira/browse/HDFS-11470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: storage-container-manager-cli-v001.pdf, > storage-container-manager-cli-v002.pdf > > > This jira describes the SCM CLI. Since CLI will have lots of commands, we > will file other JIRAs for specific commands. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11470) Ozone: SCM: CLI: Design SCM Command line interface
[ https://issues.apache.org/jira/browse/HDFS-11470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11470: Summary: Ozone: SCM: CLI: Design SCM Command line interface (was: Ozone: SCM: Add SCM CLI) > Ozone: SCM: CLI: Design SCM Command line interface > -- > > Key: HDFS-11470 > URL: https://issues.apache.org/jira/browse/HDFS-11470 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: storage-container-manager-cli-v001.pdf, > storage-container-manager-cli-v002.pdf > > > This jira describes the SCM CLI. Since CLI will have lots of commands, we > will file other JIRAs for specific commands. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11649) Ozone: SCM: CLI: Add shell code placeholder classes
[ https://issues.apache.org/jira/browse/HDFS-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11649: Summary: Ozone: SCM: CLI: Add shell code placeholder classes (was: Ozone : add SCM CLI shell code placeholder classes) > Ozone: SCM: CLI: Add shell code placeholder classes > > > Key: HDFS-11649 > URL: https://issues.apache.org/jira/browse/HDFS-11649 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11649-HDFS-7240.001.patch, > HDFS-11649-HDFS-7240.002.patch > > > HDFS-11470 has outlined how the SCM CLI would look like. Based on the design, > this JIRA adds the basic placeholder classes for all commands to be filled in. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11569) Ozone: Implement listKey function for KeyManager
[ https://issues.apache.org/jira/browse/HDFS-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966707#comment-15966707 ] Anu Engineer commented on HDFS-11569: - +1, Thanks for updating this patch and getting this fixed. I am sorry about the delay in code review. I had a very minor comment, you can file another JIRA to fix that if needed. In {{getKeyData}}: {code} catch (IOException e) { throw new IOException("Failed to parse key data from the bytes array.", e); } {code} Should we throw {{StorageContainerException}} here ? > Ozone: Implement listKey function for KeyManager > > > Key: HDFS-11569 > URL: https://issues.apache.org/jira/browse/HDFS-11569 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11569-HDFS-7240.001.patch, > HDFS-11569-HDFS-7240.002.patch, HDFS-11569-HDFS-7240.003.patch, > HDFS-11569-HDFS-7240.004.patch, HDFS-11569-HDFS-7240.005.patch, > HDFS-11569-HDFS-7240.006.patch, HDFS-11569-HDFS-7240.007.patch > > > List keys by prefix from a container. This will need to support pagination > for the purpose of small object support. So the listKey function returns > something like ListKeyResult, client can iterate the object to get pagination > results. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
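The pagination scheme this issue describes — a result object the client iterates to pull successive pages of keys by prefix — can be sketched with a sorted map standing in for the container's key store. All names below are illustrative, not the real Ozone KeyManager API; the caller passes the last key of the previous page to fetch the next one:

```java
import java.util.*;

// Illustrative sketch only: paginated prefix listing in the spirit of the
// listKey/ListKeyResult design discussed above.
class KeyLister {
    private final TreeMap<String, byte[]> store = new TreeMap<>();

    void put(String key) {
        store.put(key, new byte[0]);
    }

    // Return up to 'count' keys matching 'prefix', starting strictly after
    // 'startKey' (pass null for the first page).
    List<String> listKeys(String prefix, String startKey, int count) {
        List<String> page = new ArrayList<>();
        NavigableMap<String, byte[]> tail =
            (startKey == null) ? store : store.tailMap(startKey, false);
        for (String key : tail.keySet()) {
            if (!key.startsWith(prefix)) {
                if (key.compareTo(prefix) > 0) {
                    break; // sorted order: we are past the prefix range
                }
                continue;
            }
            page.add(key);
            if (page.size() == count) {
                break;
            }
        }
        return page;
    }
}
```

Against a LevelDB-backed store the same shape applies: seek to the start key, scan forward while the prefix matches, and stop at the page size.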
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966698#comment-15966698 ] Mingliang Liu commented on HDFS-11558: -- Sorry I missed the call for cutting of {{branch-2.8.1}}. Thanks for cherry-picking. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch, > HDFS-11558-branch-2.8.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11634) Optimize BlockIterator when iterating starts in the middle.
[ https://issues.apache.org/jira/browse/HDFS-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-11634: --- Attachment: HDFS-11634.004.patch Addressing checkstyle warnings. Thanks for the review [~shahrs87]. > Optimize BlockIterator when iterating starts in the middle. > > > Key: HDFS-11634 > URL: https://issues.apache.org/jira/browse/HDFS-11634 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.6.5 >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko > Attachments: HDFS-11634.001.patch, HDFS-11634.002.patch, > HDFS-11634.003.patch, HDFS-11634.004.patch > > > {{BlockManager.getBlocksWithLocations()}} needs to iterate blocks from a > randomly selected {{startBlock}} index. It creates an iterator which points > to the first block and then skips all blocks until {{startBlock}}. It is > inefficient when DN has multiple storages. Instead of skipping blocks one by > one we can skip entire storages. Should be more efficient on average. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
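The optimization in the issue description — skipping entire storages rather than advancing block by block — amounts to subtracting whole per-storage block counts from the start index until it falls inside a storage. A self-contained sketch (illustrative names, not the actual BlockManager iterator):

```java
// Sketch of the "skip whole storages" idea: O(#storages) to find the start
// position instead of O(startBlock) single-block skips.
class StorageSkipper {
    // Returns {storageIndex, offsetWithinStorage} for a global startBlock
    // index over storages with the given block counts.
    static int[] locate(int[] blocksPerStorage, int startBlock) {
        int remaining = startBlock;
        for (int i = 0; i < blocksPerStorage.length; i++) {
            if (remaining < blocksPerStorage[i]) {
                return new int[] {i, remaining}; // start inside this storage
            }
            remaining -= blocksPerStorage[i];    // skip the whole storage
        }
        return new int[] {blocksPerStorage.length, 0}; // index past the end
    }
}
```

The iterator then only walks element by element within the one storage where the start index lands.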
[jira] [Updated] (HDFS-11384) Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike
[ https://issues.apache.org/jira/browse/HDFS-11384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-11384: --- Attachment: HDFS-11384.005.patch Addressing the checkstyle warning. Not seeing failures of TestBalancer.testBalancerWithStripedFile locally. Don't think it is related. > Add option for balancer to disperse getBlocks calls to avoid NameNode's > rpc.CallQueueLength spike > - > > Key: HDFS-11384 > URL: https://issues.apache.org/jira/browse/HDFS-11384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer & mover >Affects Versions: 2.7.3 >Reporter: yunjiong zhao >Assignee: yunjiong zhao > Attachments: balancer.day.png, balancer.week.png, > HDFS-11384.001.patch, HDFS-11384.002.patch, HDFS-11384.003.patch, > HDFS-11384.004.patch, HDFS-11384.005.patch > > > When running balancer on hadoop cluster which have more than 3000 Datanodes > will cause NameNode's rpc.CallQueueLength spike. We observed this situation > could cause Hbase cluster failure due to RegionServer's WAL timeout. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
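The option this issue adds spreads the balancer's getBlocks RPCs over time instead of firing them in a burst from every dispatcher thread at once, which is what spikes the NameNode's call queue. One simple way to picture the dispersal is an even stagger of per-thread start delays across a window; the fixed-interval scheme and all names below are assumptions for illustration, not the actual patch:

```java
// Illustrative only: even staggering of dispatcher threads so getBlocks
// calls arrive spread across 'windowMs' rather than simultaneously.
class CallDisperser {
    // Delay in ms before dispatcher thread 'threadIndex' (0-based, of
    // 'threadCount' threads) issues its first getBlocks call.
    static long initialDelayMs(int threadIndex, int threadCount, long windowMs) {
        if (threadCount <= 1) {
            return 0; // a single thread needs no stagger
        }
        return (windowMs * threadIndex) / threadCount;
    }
}
```

With 3000+ datanodes, even a modest window keeps the instantaneous RPC rate bounded by threads-per-window rather than total thread count.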
[jira] [Commented] (HDFS-11643) Balancer fencing fails when writing erasure coded lock file
[ https://issues.apache.org/jira/browse/HDFS-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966691#comment-15966691 ] Andrew Wang commented on HDFS-11643: Agree, at least for the balancer, I think for now we only need a new boolean parameter like you say. This also relates to HDFS-11644. If DFSStripedOutputStream no longer implements Syncable, then the Balancer's FSDataOutputStream#hflush will fallback to just doing a flush. For the balancer, I think we'd still prefer writing a replicated file and doing a real hflush, since otherwise {{write2IdFile}} won't function correctly. > Balancer fencing fails when writing erasure coded lock file > --- > > Key: HDFS-11643 > URL: https://issues.apache.org/jira/browse/HDFS-11643 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover, erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Priority: Critical > Labels: hdfs-ec-3.0-must-do > > At startup, the balancer writes its hostname to the lock file and calls > hflush(). hflush is not supported for EC files, so this fails when the entire > filesystem is erasure coded. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
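The fallback behavior referenced in the comment — {{FSDataOutputStream#hflush}} degrading to a plain flush when the wrapped stream does not support sync — can be sketched as a capability check. {{Syncable}} below is a stand-in interface for illustration, not the real {{org.apache.hadoop.fs.Syncable}}; the point is that for the balancer's fencing, a silent fallback to flush() means the lock-file write is not actually durable:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Stand-in for the Syncable capability interface.
interface Syncable {
    void hflush() throws IOException;
}

class FenceWriter {
    // A stream whose hflush actually has durability semantics.
    static class SyncableStream extends ByteArrayOutputStream implements Syncable {
        boolean synced;
        @Override
        public void hflush() { synced = true; }
    }

    // Returns true if a real hflush was performed; false means only a plain
    // flush() was possible (e.g. an EC output stream without Syncable).
    static boolean hflushOrFlush(OutputStream out) {
        try {
            if (out instanceof Syncable) {
                ((Syncable) out).hflush();
                return true;
            }
            out.flush();
            return false;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

This is why writing the lock file as a replicated file (with a real hflush) is preferable to relying on the degraded path.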
[jira] [Commented] (HDFS-9260) Improve the performance and GC friendliness of NameNode startup and full block reports
[ https://issues.apache.org/jira/browse/HDFS-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966680#comment-15966680 ] Andrew Wang commented on HDFS-9260: --- One other question from our team, what's the typical block size on a Yahoo cluster? > Improve the performance and GC friendliness of NameNode startup and full > block reports > -- > > Key: HDFS-9260 > URL: https://issues.apache.org/jira/browse/HDFS-9260 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode, performance >Affects Versions: 2.7.1 >Reporter: Staffan Friberg >Assignee: Staffan Friberg > Fix For: 3.0.0-alpha1 > > Attachments: FBR processing.png, HDFS-7435.001.patch, > HDFS-7435.002.patch, HDFS-7435.003.patch, HDFS-7435.004.patch, > HDFS-7435.005.patch, HDFS-7435.006.patch, HDFS-7435.007.patch, > HDFS-9260.008.patch, HDFS-9260.009.patch, HDFS-9260.010.patch, > HDFS-9260.011.patch, HDFS-9260.012.patch, HDFS-9260.013.patch, > HDFS-9260.014.patch, HDFS-9260.015.patch, HDFS-9260.016.patch, > HDFS-9260.017.patch, HDFS-9260.018.patch, HDFSBenchmarks2.zip, > HDFSBenchmarks.zip, HDFS Block and Replica Management 20151013.pdf > > > This patch changes the datastructures used for BlockInfos and Replicas to > keep them sorted. This allows faster and more GC friendly handling of full > block reports. > Would like to hear peoples feedback on this change. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11582) Block Storage : add SCSI target access daemon
[ https://issues.apache.org/jira/browse/HDFS-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966674#comment-15966674 ] Anu Engineer commented on HDFS-11582: - [~vagarychen] Can you please verify that you are able to compile the .005 patch on the current HDFS-7240? I seem to have a compiler error on my system. Just wanted to make sure that it is not something specific to my machine. bq. CBlockClientServerProtocol.java:\[30,8\] class CBlockClientProtocol is public, should be declared in a file named CBlockClientProtocol.java Otherwise the patch looks good to me. I will wait for your update before we proceed on this JIRA. > Block Storage : add SCSI target access daemon > - > > Key: HDFS-11582 > URL: https://issues.apache.org/jira/browse/HDFS-11582 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11582-HDFS-7240.001.patch, > HDFS-11582-HDFS-7240.002.patch, HDFS-11582-HDFS-7240.003.patch, > HDFS-11582-HDFS-7240.004.patch, HDFS-11582-HDFS-7240.005.patch > > > This JIRA adds the daemon process that exposes SCSI target access. More > specifically, with this daemon process running, any OS with SCSI support can > talk to this daemon process and treat CBlock volumes as SCSI targets, so the > user can mount the volume in the usual POSIX manner. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset
[ https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966659#comment-15966659 ] Chris Nauroth commented on HDFS-11163: -- [~djp], sorry I missed the email update on the 2.8.1 release plan. Thank you for cherry-picking it into the new branch-2.8.1. > Mover should move the file blocks to default storage once policy is unset > - > > Key: HDFS-11163 > URL: https://issues.apache.org/jira/browse/HDFS-11163 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, > HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, > HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, > HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, > temp-YARN-6278.HDFS-11163.patch > > > HDFS-9534 added new API in FileSystem to unset the storage policy. Once > policy is unset blocks should move back to the default storage policy. > Currently mover is not moving file blocks which have zero storage ID > {code} > // currently we ignore files with unspecified storage policy > if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) { > return; > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1590#comment-1590 ] Hadoop QA commented on HDFS-11530: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch 
passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 46s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HDFS-11530 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863132/HDFS-11530.009.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 49ac862886cd 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a731271 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/19064/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/19064/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19064/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Use HDFS specific network topology to choose datanode in >
[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966628#comment-15966628 ] Junping Du commented on HDFS-11558: --- Hi [~liuml07], thanks for review and commit. As my email to hadoop dev list, we have 2.8.1 branch get cut-off for release since yesterday. Just merge the commit to branch-2.8.1 assume it is supposed to land in 2.8.1 release. Isn't it? > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch, > HDFS-11558-branch-2.8.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11648) Lazy construct the IIP pathname
[ https://issues.apache.org/jira/browse/HDFS-11648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966624#comment-15966624 ] Junping Du commented on HDFS-11648: --- Hi [~kihwal], thanks for the review and commit. As noted in my email to the hadoop dev list, the 2.8.1 branch was cut for release yesterday. I just merged the commit to branch-2.8.1, assuming it is supposed to land in the 2.8.1 release. Is that right? > Lazy construct the IIP pathname > > > Key: HDFS-11648 > URL: https://issues.apache.org/jira/browse/HDFS-11648 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11648.patch > > > The IIP pathname is a string constructed from the byte[][] components. If > the pathname will never be accessed, ex. processing listStatus children, > building the path is unnecessarily expensive. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset
[ https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966620#comment-15966620 ] Junping Du commented on HDFS-11163: --- Hi [~cnauroth], thanks for review and commit. As my email to hadoop dev list, we have 2.8.1 branch get cut-off for release since yesterday. Just merge the commit to branch-2.8.1 assume it is supposed to land in 2.8.1 release. Isn't it? > Mover should move the file blocks to default storage once policy is unset > - > > Key: HDFS-11163 > URL: https://issues.apache.org/jira/browse/HDFS-11163 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, > HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, > HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, > HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, > temp-YARN-6278.HDFS-11163.patch > > > HDFS-9534 added new API in FileSystem to unset the storage policy. Once > policy is unset blocks should move back to the default storage policy. > Currently mover is not moving file blocks which have zero storage ID > {code} > // currently we ignore files with unspecified storage policy > if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) { > return; > } > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966610#comment-15966610 ] Arpit Agarwal commented on HDFS-11530: -- Thanks for the great work [~linyiqun] and [~vagarychen]. I think we need some more stress testing/validation of the new network topology implementation before we make it the default. Here's my suggestion: {code} if (DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_DEFAULT.getName() .equals(conf.get(DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY))) { networktopology = DFSNetworkTopology.getInstance(conf); } else { networktopology = NetworkTopology.getInstance(conf); } {code} Instead of using the new network topology whenever the BlockPlacementPolicy is {{BlockPlacementPolicyDefault}}, let's add a new configuration setting that allows choosing the NetworkTopology class. The rest of the changes in this patch can go in while we continue testing the new topology implementation. At some point in the future we can change the default via configuration. > Use HDFS specific network topology to choose datanode in > BlockPlacementPolicyDefault > > > Key: HDFS-11530 > URL: https://issues.apache.org/jira/browse/HDFS-11530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, > HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, > HDFS-11530.006.patch, HDFS-11530.007.patch, HDFS-11530.008.patch, > HDFS-11530.009.patch > > > The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. > But this method is contained in new topology {{DFSNetworkTopology}} which is > specified for HDFS. We should update this and let > {{BlockPlacementPolicyDefault}} use the new way since the original way is > inefficient. 
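The opt-in selection suggested in the review above can be pictured as a small config-keyed factory that defaults to the legacy topology unless the new implementation is explicitly requested. The key name and class names below are illustrative placeholders, not the actual Hadoop configuration settings:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: config-driven choice of topology implementation,
// defaulting to the legacy one until the new implementation is proven out.
class TopologyFactory {
    static final String TOPOLOGY_IMPL_KEY = "net.topology.impl"; // assumed key
    static final String LEGACY = "NetworkTopology";
    static final String DFS = "DFSNetworkTopology";

    // Opt-in: only select the new topology when explicitly configured.
    static String chooseImpl(Map<String, String> conf) {
        return DFS.equals(conf.get(TOPOLOGY_IMPL_KEY)) ? DFS : LEGACY;
    }
}
```

Flipping the default later is then a one-line configuration change rather than a code change, which is the point of the suggestion.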
[jira] [Commented] (HDFS-11504) Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs
[ https://issues.apache.org/jira/browse/HDFS-11504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966603#comment-15966603 ] Anu Engineer commented on HDFS-11504: - [~xyao] Thanks for the patch. Some comments below. * Something to think about , should we create a KeyManagerProtocol.proto file for these APIs , and move all Block APIs under KSM ? * StorageContainerLocation.proto -- {{ScmLocatedBlockProto}} Would it make sense to return key + pipeline ? * Do we need the AllocatedBlock#Builder Class. It has only 2 fields and they types are very different. * unused import in BlockManagerImpl {{org.apache.hadoop.hdfs.ozone.protocol.proto.ContainerProtos;}} * allocateBlock // TODO: handle block size greater than the container size // For now, allow only block size <= containerSize I don't think we should ever allow a block size larger than container size. * We should throw a proper exception here. {{Preconditions.checkArgument(size <= containerSize, "Unsupported block size");}} * throw new IOException("Unable to create block while in chill mode" Can we replace the IOException with StorageContainerException ? * How does the client differentiate if it needs to create a container or not ? Add a flag in the AllocatedBlock so that client is aware if the create has to be done by the client ? {code} if ((currentContainerName == null) || ((currentContainerName != null) && (currentContainerUsed >= containerSize))) { currentContainerName = UUID.randomUUID().toString(); pipeline = containerManager.allocateContainer(currentContainerName); currentContainerUsed = size; {code} * This prevents us from recovering a block when a node fails. We have to do the TODO mentioned here. The issue is that pipelines get rewritten all the time (the recovery code when datanodes fail), so we cannot update blocks.db to do that. We need to single source of truth and blocks.db should not cache this info. 
{code} // TODO: block->container mapping or block->pipeline mapping // block->container fits naturally into the layered design and // leave the container->pipeline mapping to container manager. {code} * ScmLocatedBlock.java: {{+ "; locations=" + locations}} locations is a list, so not sure if this print is going to print what you expect. > Ozone: SCM: Add AllocateBlock/DeleteBlock/GetBlock APIs > --- > > Key: HDFS-11504 > URL: https://issues.apache.org/jira/browse/HDFS-11504 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-11504-HDFS-7240.001.patch, > HDFS-11504-HDFS-7240.002.patch > > > The signature of the APIs are listed below. This allows SCM to > 1) allocateBlock for client and maintain the key->container mapping in level > DB in addition to the existing container to pipeline mapping in level DB. > 2) return the pipeline of a block based on the key. > 3) remove the block based on the key of the block. > {code} >allocateBlock(long size) > getBlock(key); > void deleteBlock(key); > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
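The allocation flow under review, together with the reviewer's suggested flag telling the client whether it must first create the container, might look roughly like the sketch below. All names are illustrative; the fit check ({{currentUsed + size > containerSize}}) is an assumption sharpening the quoted {{currentContainerUsed >= containerSize}} check, and this is not the patch's actual SCM code:

```java
// Illustrative sketch of allocateBlock with container rollover and a
// createContainer flag so the client knows whether to create the container.
class BlockAllocator {
    static class AllocatedBlock {
        final String containerName;
        final boolean createContainer; // client must create this container first
        AllocatedBlock(String name, boolean create) {
            containerName = name;
            createContainer = create;
        }
    }

    private final long containerSize;
    private String currentContainer;
    private long currentUsed;
    private int containerSeq;

    BlockAllocator(long containerSize) {
        this.containerSize = containerSize;
    }

    AllocatedBlock allocateBlock(long size) {
        // Never hand out a block larger than a container.
        if (size > containerSize) {
            throw new IllegalArgumentException("block size exceeds container size");
        }
        if (currentContainer == null || currentUsed + size > containerSize) {
            // Roll over to a fresh container; the client must create it.
            currentContainer = "container-" + (containerSeq++);
            currentUsed = size;
            return new AllocatedBlock(currentContainer, true);
        }
        currentUsed += size;
        return new AllocatedBlock(currentContainer, false);
    }
}
```

The flag avoids a round trip: the client learns in one call whether a container-create must precede the block write.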
[jira] [Updated] (HDFS-11649) Ozone : add SCM CLI shell code placeholder classes
[ https://issues.apache.org/jira/browse/HDFS-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11649: -- Attachment: HDFS-11649-HDFS-7240.002.patch Post v002 patch to address the warnings. > Ozone : add SCM CLI shell code placeholder classes > -- > > Key: HDFS-11649 > URL: https://issues.apache.org/jira/browse/HDFS-11649 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-11649-HDFS-7240.001.patch, > HDFS-11649-HDFS-7240.002.patch > > > HDFS-11470 has outlined how the SCM CLI would look like. Based on the design, > this JIRA adds the basic placeholder classes for all commands to be filled in. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11402) HDFS Snapshots should capture point-in-time copies of OPEN files
[ https://issues.apache.org/jira/browse/HDFS-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966590#comment-15966590 ] Hadoop QA commented on HDFS-11402: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 4s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 788 unchanged - 4 fixed = 790 total (was 792) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestMissingBlocksAlert | | Timed out junit tests | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:612578f | | JIRA Issue | HDFS-11402 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863103/HDFS-11402.04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 597321cc4deb 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d4c01dd | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/19063/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt | | unit |
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11558: - Fix Version/s: 2.8.1 > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch, > HDFS-11558-branch-2.8.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long.
[jira] [Assigned] (HDFS-11644) DFSStripedOutputStream should not implement Syncable
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy reassigned HDFS-11644: - Assignee: Manoj Govindassamy > DFSStripedOutputStream should not implement Syncable > > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-must-do > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. > However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable.
[jira] [Commented] (HDFS-11644) DFSStripedOutputStream should not implement Syncable
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966566#comment-15966566 ] Manoj Govindassamy commented on HDFS-11644: --- [~andrew.wang], sure, will work on this. > DFSStripedOutputStream should not implement Syncable > > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Manoj Govindassamy > Labels: hdfs-ec-3.0-must-do > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. > However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable.
[jira] [Commented] (HDFS-11644) DFSStripedOutputStream should not implement Syncable
[ https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966557#comment-15966557 ] Andrew Wang commented on HDFS-11644: [~manojg] want to pick this one up? > DFSStripedOutputStream should not implement Syncable > > > Key: HDFS-11644 > URL: https://issues.apache.org/jira/browse/HDFS-11644 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang > Labels: hdfs-ec-3.0-must-do > > FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, > calls hsync. Otherwise, it just calls flush. This is used, for instance, by > YARN's FileSystemTimelineWriter. > DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. > However, DFSStripedOS throws a runtime exception when the Syncable methods > are called. > We should refactor the inheritance structure so DFSStripedOS does not > implement Syncable.
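The dispatch the issue description refers to can be sketched as follows; this is a hedged, self-contained rendition with illustrative names, not the actual {{FSDataOutputStream}} code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative stand-in for org.apache.hadoop.fs.Syncable.
interface Syncable {
    void hsync() throws IOException;
}

class SyncDispatch {
    // Forward hsync() only when the wrapped stream is Syncable;
    // otherwise degrade to a plain flush(), as the issue describes.
    static String hsyncOrFlush(OutputStream out) throws IOException {
        if (out instanceof Syncable) {
            ((Syncable) out).hsync();
            return "hsync";
        }
        out.flush();
        return "flush";
    }

    public static void main(String[] args) throws IOException {
        System.out.println(hsyncOrFlush(new SyncableStream()));          // hsync
        System.out.println(hsyncOrFlush(new ByteArrayOutputStream()));   // flush
    }
}

// A stream that genuinely supports durable sync.
class SyncableStream extends ByteArrayOutputStream implements Syncable {
    public void hsync() { /* would force buffered data to disk here */ }
}
```

This is why a class that implements {{Syncable}} but throws at runtime is worse than not implementing it at all: callers using the {{instanceof}} probe take the {{hsync}} branch and then blow up, instead of safely falling back to {{flush}}.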
[jira] [Commented] (HDFS-11649) Ozone : add SCM CLI shell code placeholder classes
[ https://issues.apache.org/jira/browse/HDFS-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966548#comment-15966548 ] Hadoop QA commented on HDFS-11649: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 25s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}129m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Result of integer multiplication cast to long in org.apache.hadoop.ozone.scm.cli.SCMCli.getScmClient(OzoneConfiguration) At SCMCli.java:to long in org.apache.hadoop.ozone.scm.cli.SCMCli.getScmClient(OzoneConfiguration) At SCMCli.java:[line 104] | | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.cblock.TestCBlockCLI | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.cblock.TestCBlockServer | | | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11649 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863102/HDFS-11649-HDFS-7240.001.patch | | Optional Tests | asflicense compile
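The FindBugs warning above ({{Result of integer multiplication cast to long}}) flags a classic overflow: the multiplication happens entirely in 32-bit {{int}} arithmetic, and only the already-wrapped result is widened to {{long}}. A minimal sketch of the bug and the usual fix (illustrative method names, not the actual {{SCMCli}} code):

```java
class SizeOverflow {
    // Buggy: all three multiplications are evaluated as int, so for
    // gb >= 2 the product wraps around before the widening to long.
    static long containerSizeBuggy(int gb) {
        return gb * 1024 * 1024 * 1024;
    }

    // Fixed: promoting one operand to long makes the whole chain 64-bit.
    static long containerSizeFixed(int gb) {
        return gb * 1024L * 1024 * 1024;
    }

    public static void main(String[] args) {
        System.out.println(containerSizeBuggy(4)); // 0 -- 4 GiB wrapped to zero
        System.out.println(containerSizeFixed(4)); // 4294967296
    }
}
```

A named constant (the review on HDFS-11631 suggests {{OzoneConsts.GB}}, presumably already a {{long}}) avoids the trap entirely.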
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966506#comment-15966506 ] Hudson commented on HDFS-10996: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11580 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11580/]) HDFS-10996. Ability to specify per-file EC policy at create time. (wang: rev a7312715a66dec5173c3a0a78dff4e0333e7f0b1) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java * (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestBlockPlacementPolicyRackFaultTolerant.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, > HDFS-10996-v3.patch, HDFS-10996-v4.patch, HDFS-10996-v5.patch, > HDFS-10996-v6.patch > > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies. 
[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966505#comment-15966505 ] Hudson commented on HDFS-11565: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11580 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11580/]) HDFS-11565. Use compact identifiers for built-in ECPolicies in (wang: rev 966b1b5b44103f3e3952da45da264d76fb3ac384) * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java > Use compact identifiers for built-in ECPolicies in HdfsFileStatus > - > > Key: HDFS-11565 > URL: https://issues.apache.org/jira/browse/HDFS-11565 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11565.001.patch, HDFS-11565.002.patch, > HDFS-11565.003.patch > > > Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo: > {quote} > From looking at the protos, one other question I had is about the overhead of > these protos when using the hardcoded policies. There are a bunch of strings > and ints, which can be kind of heavy since they're added to each > HdfsFileStatus. Should we make the built-in ones identified by purely an ID, > with these fully specified protos used for the pluggable policies? > {quote} > {quote} > Sounds like this could be considered separately because, either built-in > policies or plugged-in polices, the full meta info is maintained either by > the codes or in the fsimage persisted, so identifying them by purely an ID > should works fine. If agree, we could refactor the codes you mentioned above > separately. 
> {quote}
[jira] [Updated] (HDFS-11648) Lazy construct the IIP pathname
[ https://issues.apache.org/jira/browse/HDFS-11648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-11648: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.1 3.0.0-alpha3 Status: Resolved (was: Patch Available) Committed to trunk, branch-2 and branch-2.8. > Lazy construct the IIP pathname > > > Key: HDFS-11648 > URL: https://issues.apache.org/jira/browse/HDFS-11648 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11648.patch > > > The IIP pathname is a string constructed from the byte[][] components. If > the pathname will never be accessed, ex. processing listStatus children, > building the path is unnecessarily expensive.
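The lazy-construction idea described above can be sketched as follows; this is an illustrative rendition with made-up names, not the actual {{INodesInPath}} implementation:

```java
import java.nio.charset.StandardCharsets;

class LazyPath {
    private final byte[][] components;
    private String pathname;   // built on first access only, then cached
    int builds;                // counts constructions, for illustration

    LazyPath(byte[][] components) { this.components = components; }

    // Joining byte[][] into a String is deferred until someone asks for
    // the path, so callers that never need it (e.g. listStatus children)
    // never pay for it.
    String getPath() {
        if (pathname == null) {
            builds++;
            StringBuilder sb = new StringBuilder();
            for (byte[] c : components) {
                if (c.length == 0) continue;   // empty root component
                sb.append('/').append(new String(c, StandardCharsets.UTF_8));
            }
            pathname = (sb.length() == 0) ? "/" : sb.toString();
        }
        return pathname;
    }

    public static void main(String[] args) {
        LazyPath p = new LazyPath(new byte[][] {
            new byte[0],
            "user".getBytes(StandardCharsets.UTF_8),
            "daryn".getBytes(StandardCharsets.UTF_8) });
        System.out.println(p.getPath()); // /user/daryn
        p.getPath();
        System.out.println(p.builds);    // 1 -- the second call hit the cache
    }
}
```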
[jira] [Updated] (HDFS-11530) Use HDFS specific network topology to choose datanode in BlockPlacementPolicyDefault
[ https://issues.apache.org/jira/browse/HDFS-11530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11530: -- Attachment: HDFS-11530.009.patch Realized there is a minor bug in the v008 patch; posted the v009 patch to address it. And thanks [~linyiqun] for sharing your thoughts! I do agree that we should definitely revisit the tests to see whether they should be fixed based on what we found here. > Use HDFS specific network topology to choose datanode in > BlockPlacementPolicyDefault > > > Key: HDFS-11530 > URL: https://issues.apache.org/jira/browse/HDFS-11530 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Affects Versions: 3.0.0-alpha2 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Attachments: HDFS-11530.001.patch, HDFS-11530.002.patch, > HDFS-11530.003.patch, HDFS-11530.004.patch, HDFS-11530.005.patch, > HDFS-11530.006.patch, HDFS-11530.007.patch, HDFS-11530.008.patch, > HDFS-11530.009.patch > > > The work for {{chooseRandomWithStorageType}} has been merged in HDFS-11482. > But this method is contained in the new topology class {{DFSNetworkTopology}}, which is > specific to HDFS. We should update this and let > {{BlockPlacementPolicyDefault}} use the new way, since the original way is > inefficient.
[jira] [Commented] (HDFS-11615) FSNamesystemLock metrics can be inaccurate due to millisecond precision
[ https://issues.apache.org/jira/browse/HDFS-11615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966470#comment-15966470 ] Andrew Wang commented on HDFS-11615: Hi Erik, What's your proposal for new names? I'm guessing that the monitoring tools out there already understand a Hadoop MutableRate, so changing the names (even if it's awkward) will mean more work for them. Chances are these monitoring tools also support displaying a separate human-friendly name, so again it might not be important for the raw JMX output to be very human readable. > FSNamesystemLock metrics can be inaccurate due to millisecond precision > --- > > Key: HDFS-11615 > URL: https://issues.apache.org/jira/browse/HDFS-11615 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-11615.000.patch > > > Currently the {{FSNamesystemLock}} metrics created in HDFS-10872 track the > lock hold time using {{Timer.monotonicNow()}}, which has millisecond-level > precision. However, many of these operations hold the lock for less than a > millisecond, making these metrics inaccurate. We should instead use > {{System.nanoTime()}} for higher accuracy.
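The precision problem is easy to reproduce. In the sketch below, {{System.currentTimeMillis()}} stands in for a millisecond-granularity clock such as {{Timer.monotonicNow()}}; the names are illustrative, not the actual {{FSNamesystemLock}} code:

```java
class LockTiming {
    // Millisecond clock: a sub-millisecond hold almost always measures as 0,
    // so the aggregated hold-time metric systematically under-reports.
    static long heldMillis(Runnable criticalSection) {
        long start = System.currentTimeMillis();
        criticalSection.run();
        return System.currentTimeMillis() - start;
    }

    // Nanosecond clock: the same hold gets a measurable reading.
    static long heldNanos(Runnable criticalSection) {
        long start = System.nanoTime();
        criticalSection.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Runnable fastOp = () -> { /* e.g. a short namesystem read op */ };
        System.out.println("ms: " + heldMillis(fastOp)); // almost always 0
        System.out.println("ns: " + heldNanos(fastOp));  // small but non-trivial
    }
}
```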
[jira] [Commented] (HDFS-11651) Add a public API for specifying an EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-11651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966460#comment-15966460 ] Andrew Wang commented on HDFS-11651: One idea is that we add a new public interface to the builder for the EC APIs. Then, clients can safely use {{instanceof}} to check if the APIs are present. This is similar to {{Syncable}} on {{DFSOutputStream}}. > Add a public API for specifying an EC policy at create time > --- > > Key: HDFS-11651 > URL: https://issues.apache.org/jira/browse/HDFS-11651 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Andrew Wang > Labels: hdfs-ec-3.0-nice-to-have > > Follow-on work from HDFS-10996. We extended the create builder, but it still > requires casting to DistributedFileSystem to use, thus is not a public API.
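The capability-interface idea in the comment can be sketched like this; all names below are illustrative, not HDFS's actual builder API:

```java
// A public capability interface: filesystems that support per-file EC
// policies opt in by implementing it on their create builder.
interface ECPolicyCapable {
    void ecPolicyName(String policy);
}

// The generic, filesystem-agnostic create builder (no EC knowledge).
class CreateBuilder {
}

// A filesystem-specific builder that opts in to the EC capability.
class DfsCreateBuilder extends CreateBuilder implements ECPolicyCapable {
    String ecPolicy;
    public void ecPolicyName(String policy) { this.ecPolicy = policy; }
}

class EcAwareClient {
    // Probe for the capability instead of casting to a concrete
    // filesystem class such as DistributedFileSystem.
    static boolean trySetEcPolicy(CreateBuilder b, String policy) {
        if (b instanceof ECPolicyCapable) {
            ((ECPolicyCapable) b).ecPolicyName(policy);
            return true;
        }
        return false; // underlying filesystem has no EC support
    }

    public static void main(String[] args) {
        System.out.println(trySetEcPolicy(new DfsCreateBuilder(), "RS-6-3-64k")); // true
        System.out.println(trySetEcPolicy(new CreateBuilder(), "RS-6-3-64k"));    // false
    }
}
```

The benefit over a cast to a concrete class is that any filesystem can implement the interface, so the check stays a public, stable contract.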
[jira] [Commented] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966461#comment-15966461 ] Andrew Wang commented on HDFS-10996: FYI that I filed HDFS-11651 to expose this as a public API. I think we can do something similar to {{Syncable}}. > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, > HDFS-10996-v3.patch, HDFS-10996-v4.patch, HDFS-10996-v5.patch, > HDFS-10996-v6.patch > > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies.
[jira] [Updated] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
[ https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11642: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup > > > Key: HDFS-11642 > URL: https://issues.apache.org/jira/browse/HDFS-11642 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-11642-HDFS-7240.001.patch > > > This was found in a recent Jenkins run on HDFS-7240. > The cblock service RPC binding port (9810) was not cleaned up after the test.
[jira] [Created] (HDFS-11651) Add a public API for specifying an EC policy at create time
Andrew Wang created HDFS-11651: -- Summary: Add a public API for specifying an EC policy at create time Key: HDFS-11651 URL: https://issues.apache.org/jira/browse/HDFS-11651 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Affects Versions: 3.0.0-alpha3 Reporter: Andrew Wang Follow-on work from HDFS-10996. We extended the create builder, but it still requires casting to DistributedFileSystem to use, thus is not a public API.
[jira] [Commented] (HDFS-11313) Segmented Block Reports
[ https://issues.apache.org/jira/browse/HDFS-11313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966458#comment-15966458 ] Daryn Sharp commented on HDFS-11313: Very clever design, but as I hinted before, I have strong concerns/objections to the sorted-order requirement on which this design appears to be predicated. Restricting the block data structures, by design, to some form of a tree does not scale well compared to other data structures, e.g. hashed or indexed ones; in fact, it rules them out completely. The answer may be much simpler. Back when the ipc handlers processed the IBRs and FBRs, yielding the fsn lock was not possible, but a while back I offloaded the BRs into a queue for processing by a dedicated thread. This reduced fsn lock contention (1 vs n-many waiters), and increased throughput via batching multiple BRs under the same write lock subject to a time limit. I think this may be extended to yield the lock during FBR processing. The serialized nature of BR processing removes the IBR races. There are probably just a few races to consider with systems like the decom manager, repl monitor, etc. > Segmented Block Reports > --- > > Key: HDFS-11313 > URL: https://issues.apache.org/jira/browse/HDFS-11313 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode, namenode >Affects Versions: 2.6.2 >Reporter: Konstantin Shvachko >Assignee: Vinitha Reddy Gankidi > Attachments: SegmentedBlockReports.pdf > > > Block reports from a single DataNode can currently be split into multiple > RPCs, each reporting a single DataNode storage (disk). The reports are still > large since disks are getting bigger. Splitting blockReport RPCs into > multiple smaller calls would improve NameNode performance and overall HDFS > stability. > This was discussed in multiple jiras. Here the approach is to let NameNode > divide the blockID space into segments and then ask DataNodes to report replicas > in a particular range of IDs.
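The queue-plus-batching scheme Daryn describes can be sketched as follows; this is a hedged illustration with invented names, not the actual BlockManager code:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReportBatcher {
    private final Queue<String> pending = new ConcurrentLinkedQueue<>();
    private final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();

    // IPC handlers enqueue block reports instead of processing them inline.
    void offer(String blockReport) { pending.offer(blockReport); }

    // Dedicated thread: process as many queued reports as possible under a
    // single write-lock acquisition, but release the lock once the time
    // budget expires so other waiters are not starved.
    int processBatch(long budgetMillis) {
        int processed = 0;
        long deadline = System.currentTimeMillis() + budgetMillis;
        fsnLock.writeLock().lock();
        try {
            while (pending.poll() != null) {
                processed++;   // stand-in for applying the report to block state
                if (System.currentTimeMillis() >= deadline) {
                    break;     // yield; remaining reports wait for the next batch
                }
            }
        } finally {
            fsnLock.writeLock().unlock();
        }
        return processed;
    }

    public static void main(String[] args) {
        ReportBatcher b = new ReportBatcher();
        b.offer("dn1"); b.offer("dn2"); b.offer("dn3");
        System.out.println(b.processBatch(100)); // 3 -- one lock hold, three reports
    }
}
```

Because a single thread drains the queue, report processing is serialized, which is what removes the IBR/FBR interleaving races the comment mentions.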
[jira] [Updated] (HDFS-10996) Ability to specify per-file EC policy at create time
[ https://issues.apache.org/jira/browse/HDFS-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10996: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha3 Status: Resolved (was: Patch Available) Thanks for the contribution Sammi, I've committed this to trunk! > Ability to specify per-file EC policy at create time > > > Key: HDFS-10996 > URL: https://issues.apache.org/jira/browse/HDFS-10996 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-10996-v1.patch, HDFS-10996-v2.patch, > HDFS-10996-v3.patch, HDFS-10996-v4.patch, HDFS-10996-v5.patch, > HDFS-10996-v6.patch > > > Based on discussion in HDFS-10971, it would be useful to specify the EC > policy when the file is created. This is useful for situations where app > requirements do not map nicely to the current directory-level policies.
[jira] [Created] (HDFS-11650) Ozone: fix the consistently timeout test testUpgradeFromRel22Image
Chen Liang created HDFS-11650: - Summary: Ozone: fix the consistently timeout test testUpgradeFromRel22Image Key: HDFS-11650 URL: https://issues.apache.org/jira/browse/HDFS-11650 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chen Liang Assignee: Chen Liang Recently, the test TestDFSUpgradeFromImage.testUpgradeFromRel22Image has been consistently failing due to timeout. JIRAs that encountered this include (but are not limited to) HDFS-11642, HDFS-11635, HDFS-11062 and HDFS-11618, even though this test passes in trunk.
[jira] [Commented] (HDFS-11648) Lazy construct the IIP pathname
[ https://issues.apache.org/jira/browse/HDFS-11648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966454#comment-15966454 ] Hudson commented on HDFS-11648: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11579 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11579/]) HDFS-11648. Lazy construct the IIP pathname. Contributed by Daryn Sharp. (kihwal: rev d4c01dde49b3072317093344ca2cd569f0c6de08) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java > Lazy construct the IIP pathname > > > Key: HDFS-11648 > URL: https://issues.apache.org/jira/browse/HDFS-11648 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HDFS-11648.patch > > > The IIP pathname is a string constructed from the byte[][] components. If > the pathname will never be accessed, ex. processing listStatus children, > building the path is unnecessarily expensive.
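The change described above defers building the pathname string until a caller actually asks for it. A minimal sketch of the idea, with illustrative names rather than the real INodesInPath code:

```java
import java.nio.charset.StandardCharsets;

/** Hypothetical sketch of lazily constructing a pathname from byte[][]
 *  components. The string is built (and cached) only on first access,
 *  so callers that never need the path, e.g. when processing
 *  listStatus children, pay nothing. */
public class LazyPathSketch {
  private final byte[][] components;
  private volatile String path; // built on demand, then cached

  public LazyPathSketch(byte[][] components) {
    this.components = components;
  }

  public String getPath() {
    if (path == null) { // cheap check; a race just rebuilds the same value
      path = buildPath();
    }
    return path;
  }

  private String buildPath() {
    StringBuilder sb = new StringBuilder();
    for (byte[] component : components) {
      if (component.length == 0) {
        continue; // the root component is an empty byte[]
      }
      sb.append('/').append(new String(component, StandardCharsets.UTF_8));
    }
    return sb.length() == 0 ? "/" : sb.toString();
  }
}
```

The path build is idempotent, so the benign race on the volatile field is harmless: two threads may both construct the string, but they produce the same value.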
[jira] [Commented] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException
[ https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966455#comment-15966455 ] Hudson commented on HDFS-11645: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11579 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11579/]) HDFS-11645. DataXceiver thread should log the actual error when getting (aengineer: rev abce61335678da753cd0f7965a236370274abee8) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java > DataXceiver thread should log the actual error when getting > InvalidMagicNumberException > --- > > Key: HDFS-11645 > URL: https://issues.apache.org/jira/browse/HDFS-11645 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.0-alpha1, 2.8.1 >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11645.001.patch > > > Currently, {{DataXceiver#run}} method only logs an error message when getting > an {{InvalidMagicNumberException}}. It should also log the actual exception.
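For context, the fix amounts to including the caught exception in the log call rather than a bare message, so the root cause survives in the DataNode log. A tiny illustrative sketch — the helper below is an assumption to show the shape of the message, not the actual DataXceiver code, which would pass the Throwable to its logger:

```java
/** Hypothetical helper showing why the exception itself must reach
 *  the log: appending (or passing) the Throwable preserves its class
 *  and message, which a bare log string loses. */
public class LogCauseSketch {
  static String format(String msg, Throwable t) {
    // A bare "msg" would drop the cause entirely; including t keeps
    // the exception type and its detail message visible.
    return msg + ": " + t;
  }
}
```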
[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966451#comment-15966451 ] Andrew Wang commented on HDFS-11565: Committed to trunk, thanks again Wei-chiu for reviewing. > Use compact identifiers for built-in ECPolicies in HdfsFileStatus > - > > Key: HDFS-11565 > URL: https://issues.apache.org/jira/browse/HDFS-11565 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11565.001.patch, HDFS-11565.002.patch, > HDFS-11565.003.patch > > > Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo: > {quote} > From looking at the protos, one other question I had is about the overhead of > these protos when using the hardcoded policies. There are a bunch of strings > and ints, which can be kind of heavy since they're added to each > HdfsFileStatus. Should we make the built-in ones identified purely by an ID, > with these fully specified protos used for the pluggable policies? > {quote} > {quote} > Sounds like this could be considered separately because, for either built-in > policies or plugged-in policies, the full meta info is maintained either in > the code or persisted in the fsimage, so identifying them purely by an ID > should work fine. If we agree, we could refactor the code you mentioned above > separately. > {quote}
[jira] [Updated] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-11565: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha3 Release Note: Some of the existing fields in ErasureCodingPolicyProto have changed from required to optional. For system EC policies, these fields are populated from hardcoded values. Status: Resolved (was: Patch Available) > Use compact identifiers for built-in ECPolicies in HdfsFileStatus > - > > Key: HDFS-11565 > URL: https://issues.apache.org/jira/browse/HDFS-11565 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11565.001.patch, HDFS-11565.002.patch, > HDFS-11565.003.patch > > > Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo: > {quote} > From looking at the protos, one other question I had is about the overhead of > these protos when using the hardcoded policies. There are a bunch of strings > and ints, which can be kind of heavy since they're added to each > HdfsFileStatus. Should we make the built-in ones identified purely by an ID, > with these fully specified protos used for the pluggable policies? > {quote} > {quote} > Sounds like this could be considered separately because, for either built-in > policies or plugged-in policies, the full meta info is maintained either in > the code or persisted in the fsimage, so identifying them purely by an ID > should work fine. If we agree, we could refactor the code you mentioned above > separately. > {quote}
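The "compact identifier" idea discussed above — ship built-in EC policies as a bare ID in HdfsFileStatus and reconstruct the full policy from hardcoded values on the receiving side, while pluggable policies still travel fully specified — can be sketched as follows. The table contents and method names are illustrative assumptions, not the actual HDFS protobuf translation code:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of resolving an EC policy from a compact ID.
 *  Built-in policies are looked up in a hardcoded table; anything
 *  else falls back to the fully specified name carried alongside. */
public class EcPolicyResolverSketch {
  private static final Map<Byte, String> BUILT_IN = new HashMap<>();
  static {
    // Illustrative entries; the real system policy IDs/names may differ.
    BUILT_IN.put((byte) 1, "RS-6-3-1024k");
    BUILT_IN.put((byte) 2, "RS-3-2-1024k");
  }

  /** Prefer the compact built-in ID; otherwise use the full name,
   *  which pluggable policies must always supply. */
  static String resolve(byte id, String fullName) {
    String builtIn = BUILT_IN.get(id);
    return builtIn != null ? builtIn : fullName;
  }
}
```

This matches the release note above: the formerly required proto fields can be absent for system policies because the receiver can repopulate them from the hardcoded table.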
[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966445#comment-15966445 ] Andrew Wang commented on HDFS-11565: I'll fix the unused import on commit, thanks for reviewing Wei-chiu! > Use compact identifiers for built-in ECPolicies in HdfsFileStatus > - > > Key: HDFS-11565 > URL: https://issues.apache.org/jira/browse/HDFS-11565 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-11565.001.patch, HDFS-11565.002.patch, > HDFS-11565.003.patch > > > Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo: > {quote} > From looking at the protos, one other question I had is about the overhead of > these protos when using the hardcoded policies. There are a bunch of strings > and ints, which can be kind of heavy since they're added to each > HdfsFileStatus. Should we make the built-in ones identified purely by an ID, > with these fully specified protos used for the pluggable policies? > {quote} > {quote} > Sounds like this could be considered separately because, for either built-in > policies or plugged-in policies, the full meta info is maintained either in > the code or persisted in the fsimage, so identifying them purely by an ID > should work fine. If we agree, we could refactor the code you mentioned above > separately. > {quote}
[jira] [Commented] (HDFS-9260) Improve the performance and GC friendliness of NameNode startup and full block reports
[ https://issues.apache.org/jira/browse/HDFS-9260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966423#comment-15966423 ] Andrew Wang commented on HDFS-9260: --- Hi Daryn, I talked to our people who help run this large customer cluster. It's at about 350 million blocks, so a pretty good size, but also a lot denser than the last published stats I saw about the 4500-node Yahoo cluster. We don't have historical GC metrics going back to when we put this into CDH a year ago, but they haven't seen anything abnormal in terms of GC. They were quite interested in your balancer settings though, since we haven't seen it stressing the NN. Could you provide the following? {noformat} dfs.datanode.balance.bandwidthPerSec dfs.datanode.balance.max.concurrent.moves dfs.namenode.replication.work.multiplier.per.iteration dfs.namenode.replication.max-streams-hard-limit {noformat} I believe we're running it with mostly default settings like this: {noformat} hdfs balancer -Ddfs.datanode.balance.max.concurrent.moves=200 -threshold 10 {noformat} > Improve the performance and GC friendliness of NameNode startup and full > block reports > -- > > Key: HDFS-9260 > URL: https://issues.apache.org/jira/browse/HDFS-9260 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode, performance >Affects Versions: 2.7.1 >Reporter: Staffan Friberg >Assignee: Staffan Friberg > Fix For: 3.0.0-alpha1 > > Attachments: FBR processing.png, HDFS-7435.001.patch, > HDFS-7435.002.patch, HDFS-7435.003.patch, HDFS-7435.004.patch, > HDFS-7435.005.patch, HDFS-7435.006.patch, HDFS-7435.007.patch, > HDFS-9260.008.patch, HDFS-9260.009.patch, HDFS-9260.010.patch, > HDFS-9260.011.patch, HDFS-9260.012.patch, HDFS-9260.013.patch, > HDFS-9260.014.patch, HDFS-9260.015.patch, HDFS-9260.016.patch, > HDFS-9260.017.patch, HDFS-9260.018.patch, HDFSBenchmarks2.zip, > HDFSBenchmarks.zip, HDFS Block and Replica Management 20151013.pdf > > > This patch changes the data structures used for BlockInfos and Replicas to > keep them sorted. This allows faster and more GC-friendly handling of full > block reports. > Would like to hear people's feedback on this change.
[jira] [Commented] (HDFS-11642) Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup
[ https://issues.apache.org/jira/browse/HDFS-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966421#comment-15966421 ] Chen Liang commented on HDFS-11642: --- Thanks [~xyao] for working on this! v001 patch LGTM, the failed tests are unrelated. Committed to the feature branch. > Block Storage: fix TestCBlockCLI and TestCBlockServerPersistence cleanup > > > Key: HDFS-11642 > URL: https://issues.apache.org/jira/browse/HDFS-11642 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HDFS-11642-HDFS-7240.001.patch > > > This was found in a recent Jenkins run on HDFS-7240. > The cblock service RPC binding port (9810) was not cleaned up after the test.
[jira] [Updated] (HDFS-11635) Block Storage: Add metrics for Container Flushes.
[ https://issues.apache.org/jira/browse/HDFS-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11635: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Block Storage: Add metrics for Container Flushes. > - > > Key: HDFS-11635 > URL: https://issues.apache.org/jira/browse/HDFS-11635 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Attachments: HDFS-11635-HDFS-7240.001.patch > > > Metrics are needed for flushes of the BlockIdBuffer to the DirtyLog file. > The counter introduced in this patch will keep track of both the number of > flushes and the latency.
[jira] [Updated] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException
[ https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11645: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha3 Target Version/s: 3.0.0-alpha3 Status: Resolved (was: Patch Available) [~vagarychen] Thanks for the contribution. I have committed this to trunk. > DataXceiver thread should log the actual error when getting > InvalidMagicNumberException > --- > > Key: HDFS-11645 > URL: https://issues.apache.org/jira/browse/HDFS-11645 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.0-alpha1, 2.8.1 >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11645.001.patch > > > Currently, {{DataXceiver#run}} method only logs an error message when getting > an {{InvalidMagicNumberException}}. It should also log the actual exception.
[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long
[ https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-11558: - Attachment: HDFS-11558-branch-2.8.006.patch Posted branch-2.8 patch. > BPServiceActor thread name is too long > -- > > Key: HDFS-11558 > URL: https://issues.apache.org/jira/browse/HDFS-11558 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Tsz Wo Nicholas Sze >Assignee: Xiaobing Zhou >Priority: Minor > Fix For: 3.0.0-alpha3 > > Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, > HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch, > HDFS-11558.005.patch, HDFS-11558.006.patch, HDFS-11558-branch-2.006.patch, > HDFS-11558-branch-2.8.006.patch > > > Currently, the thread name looks like > {code} > 2017-03-20 18:32:22,022 [DataNode: > [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0, > > [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]] > heartbeating to localhost/127.0.0.1:51772] INFO ... > {code} > which contains the full path for each storage dir. It is unnecessarily long.
[jira] [Commented] (HDFS-11163) Mover should move the file blocks to default storage once policy is unset
[ https://issues.apache.org/jira/browse/HDFS-11163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966369#comment-15966369 ] Surendra Singh Lilhore commented on HDFS-11163: --- Thanks [~cnauroth] for the review and commit. Thanks [~vinayrpet] for the review. > Mover should move the file blocks to default storage once policy is unset > - > > Key: HDFS-11163 > URL: https://issues.apache.org/jira/browse/HDFS-11163 > Project: Hadoop HDFS > Issue Type: Bug > Components: balancer & mover >Affects Versions: 2.8.0 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Fix For: 3.0.0-alpha3, 2.8.1 > > Attachments: HDFS-11163-001.patch, HDFS-11163-002.patch, > HDFS-11163-003.patch, HDFS-11163-004.patch, HDFS-11163-005.patch, > HDFS-11163-006.patch, HDFS-11163-007.patch, HDFS-11163-branch-2.001.patch, > HDFS-11163-branch-2.002.patch, HDFS-11163-branch-2.003.patch, > temp-YARN-6278.HDFS-11163.patch > > > HDFS-9534 added a new API in FileSystem to unset the storage policy. Once > the policy is unset, blocks should move back to the default storage policy. > Currently the mover does not move file blocks which have an unspecified storage > policy ID: > {code} > // currently we ignore files with unspecified storage policy > if (policyId == HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) { > return; > } > {code}
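The behavior change described in the quoted snippet can be sketched as follows: instead of skipping files whose policy is unspecified, resolve them to the default policy so their blocks migrate back to default storage. The constants and helper below are illustrative assumptions, not the actual Mover code:

```java
/** Hypothetical sketch of the HDFS-11163 fix: an unspecified storage
 *  policy is resolved to the default policy rather than skipped. */
public class MoverPolicySketch {
  // Illustrative values; real IDs live in HdfsConstants / the policy suite.
  static final byte BLOCK_STORAGE_POLICY_ID_UNSPECIFIED = 0;
  static final byte DEFAULT_STORAGE_POLICY_ID = 7; // e.g. HOT

  /** Before the fix, UNSPECIFIED meant "ignore this file"; after it,
   *  the file is processed under the default policy. */
  static byte resolvePolicy(byte policyId) {
    if (policyId == BLOCK_STORAGE_POLICY_ID_UNSPECIFIED) {
      return DEFAULT_STORAGE_POLICY_ID;
    }
    return policyId;
  }
}
```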
[jira] [Commented] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException
[ https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966358#comment-15966358 ] Anu Engineer commented on HDFS-11645: - [~hanishakoneru] Thanks for the review. +1 from me too. I will commit this shortly. > DataXceiver thread should log the actual error when getting > InvalidMagicNumberException > --- > > Key: HDFS-11645 > URL: https://issues.apache.org/jira/browse/HDFS-11645 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.0-alpha1, 2.8.1 >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Minor > Attachments: HDFS-11645.001.patch > > > Currently, {{DataXceiver#run}} method only logs an error message when getting > an {{InvalidMagicNumberException}}. It should also log the actual exception.
[jira] [Commented] (HDFS-11565) Use compact identifiers for built-in ECPolicies in HdfsFileStatus
[ https://issues.apache.org/jira/browse/HDFS-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966353#comment-15966353 ] Wei-Chiu Chuang commented on HDFS-11565: +1 after fixing the checkstyle warning. Thanks Andrew! > Use compact identifiers for built-in ECPolicies in HdfsFileStatus > - > > Key: HDFS-11565 > URL: https://issues.apache.org/jira/browse/HDFS-11565 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-11565.001.patch, HDFS-11565.002.patch, > HDFS-11565.003.patch > > > Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our convo: > {quote} > From looking at the protos, one other question I had is about the overhead of > these protos when using the hardcoded policies. There are a bunch of strings > and ints, which can be kind of heavy since they're added to each > HdfsFileStatus. Should we make the built-in ones identified purely by an ID, > with these fully specified protos used for the pluggable policies? > {quote} > {quote} > Sounds like this could be considered separately because, for either built-in > policies or plugged-in policies, the full meta info is maintained either in > the code or persisted in the fsimage, so identifying them purely by an ID > should work fine. If we agree, we could refactor the code you mentioned above > separately. > {quote}
[jira] [Commented] (HDFS-11648) Lazy construct the IIP pathname
[ https://issues.apache.org/jira/browse/HDFS-11648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966351#comment-15966351 ] Kihwal Lee commented on HDFS-11648: --- +1 simple, yet effective. > Lazy construct the IIP pathname > > > Key: HDFS-11648 > URL: https://issues.apache.org/jira/browse/HDFS-11648 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HDFS-11648.patch > > > The IIP pathname is a string constructed from the byte[][] components. If > the pathname will never be accessed, ex. processing listStatus children, > building the path is unnecessarily expensive.
[jira] [Commented] (HDFS-11645) DataXceiver thread should log the actual error when getting InvalidMagicNumberException
[ https://issues.apache.org/jira/browse/HDFS-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966350#comment-15966350 ] Hanisha Koneru commented on HDFS-11645: --- Thank you [~vagarychen]. The patch LGTM. > DataXceiver thread should log the actual error when getting > InvalidMagicNumberException > --- > > Key: HDFS-11645 > URL: https://issues.apache.org/jira/browse/HDFS-11645 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.0-alpha1, 2.8.1 >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Minor > Attachments: HDFS-11645.001.patch > > > Currently, {{DataXceiver#run}} method only logs an error message when getting > an {{InvalidMagicNumberException}}. It should also log the actual exception.