[jira] [Updated] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDFS-12631:
------------------------------
    Attachment: HDFS-12631-HDFS-7240.002.patch

Fix the findbugs issue.

> Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
> ---------------------------------------------------------------------
>
>                 Key: HDFS-12631
>                 URL: https://issues.apache.org/jira/browse/HDFS-12631
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: scm
>    Affects Versions: HDFS-7240
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>              Labels: ozoneMerge
>         Attachments: HDFS-12631-HDFS-7240.001.patch, HDFS-12631-HDFS-7240.002.patch
>
>
> Currently, ContainerStorageLocation#ContainerStorageLocation does not resolve the path to the actual container prefix before passing it to CachingGetSpaceUsed for the usage calculation. As a result, if the SCM datanode is running alongside an HDFS datanode, the HDFS datanode's usage is included in scmUsage as well, which is incorrect. This ticket is opened to fix that.
> {code}
> this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir)
>     .setConf(conf)
>     .setInitialUsed(loadScmUsed())
>     .build();
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
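The issue above boils down to measuring disk usage for the whole data directory instead of only the SCM container subtree. A minimal, self-contained sketch of the idea behind the fix, using plain java.nio.file rather than Hadoop's CachingGetSpaceUsed API (the directory layout and names here are hypothetical, for illustration only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ScmUsageSketch {
    // Sum the sizes of regular files under dir only, so usage from
    // sibling directories (e.g. an HDFS datanode's data) is excluded.
    static long usedBytes(Path dir) throws IOException {
        try (Stream<Path> files = Files.walk(dir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> p.toFile().length())
                        .sum();
        }
    }

    public static void main(String[] args) throws IOException {
        Path dataDir = Files.createTempDirectory("data");
        // Hypothetical layout: SCM containers under a dedicated prefix,
        // HDFS datanode data in a sibling directory.
        Path scmPrefix = Files.createDirectories(dataDir.resolve("scm/containers"));
        Path hdfsDir = Files.createDirectories(dataDir.resolve("hdfs"));
        Files.write(scmPrefix.resolve("container1"), new byte[100]);
        Files.write(hdfsDir.resolve("block1"), new byte[900]);

        // Measuring the resolved prefix, not the whole dataDir, is the point.
        System.out.println(usedBytes(scmPrefix)); // only SCM data
        System.out.println(usedBytes(dataDir));   // wrongly includes HDFS data
    }
}
```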
[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures
[ https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199826#comment-16199826 ]

Yiqun Lin commented on HDFS-12583:
----------------------------------

Thanks for the review, [~cheersyang]. Will commit this shortly.

> Ozone: Fix swallow exceptions which makes hard to debug failures
> ----------------------------------------------------------------
>
>                 Key: HDFS-12583
>                 URL: https://issues.apache.org/jira/browse/HDFS-12583
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-12583-HDFS-7240.001.patch, HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch, HDFS-12583-HDFS-7240.004.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the client to debug the failure. For example, if getting an xceiver client from the xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting XceiverClient.
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>     at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The original exception stack is missing. We should print the error log as well.
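The usual fix for a swallowed exception like the one above is to keep the original throwable as the cause (and/or log it) when wrapping, so the underlying stack is not lost. A self-contained sketch of the pattern; the WrappedException class is an illustrative stand-in, not the actual OzoneException code:

```java
public class ExceptionWrapping {
    // Illustrative stand-in for an application-level wrapper exception.
    static class WrappedException extends Exception {
        WrappedException(String message, Throwable cause) {
            super(message, cause); // preserving the cause keeps the stack trace
        }
    }

    static void connect() throws Exception {
        throw new IllegalStateException("connection refused");
    }

    public static void main(String[] args) {
        try {
            connect();
        } catch (Exception e) {
            // Bad: constructing the wrapper without `e` drops the original
            // stack entirely, which is the debugging problem HDFS-12583 fixes.
            WrappedException w =
                new WrappedException("Exception getting XceiverClient.", e);
            System.out.println(w.getCause().getMessage());
        }
    }
}
```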
[jira] [Commented] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199819#comment-16199819 ]

Hadoop QA commented on HDFS-12631:
----------------------------------

-1 overall

| Vote | Subsystem | Runtime | Comment |
|  0 | reexec | 0m 15s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| HDFS-7240 Compile Tests ||
| +1 | mvninstall | 18m 52s | HDFS-7240 passed |
| +1 | compile | 1m 13s | HDFS-7240 passed |
| +1 | checkstyle | 0m 52s | HDFS-7240 passed |
| +1 | mvnsite | 1m 19s | HDFS-7240 passed |
| +1 | shadedclient | 13m 50s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 38s | HDFS-7240 passed |
| +1 | javadoc | 1m 16s | HDFS-7240 passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 13s | the patch passed |
| +1 | compile | 1m 1s | the patch passed |
| +1 | javac | 1m 1s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 1m 2s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 43s | patch has no errors when building and testing our client artifacts. |
| -1 | findbugs | 2m 16s | hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | javadoc | 0m 59s | the patch passed |
|| Other Tests ||
| +1 | unit | 94m 58s | hadoop-hdfs in the patch passed. |
| +1 | asflicense | 0m 24s | The patch does not generate ASF License warnings. |
| | | 154m 36s | |

| Reason | Tests |
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Exceptional return value of java.io.File.mkdirs() ignored in new org.apache.hadoop.ozone.container.common.impl.ContainerStorageLocation(StorageLocation, Configuration) At ContainerStorageLocation.java:[line 71] |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12631 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891411/HDFS-12631-HDFS-7240.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d98ce00e8c5f 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / f467565 |
| Default Java | 1.8.0_144 |
| findbugs |
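The FindBugs item above flags an ignored java.io.File.mkdirs() return value. The standard remedy is to check the result and fail loudly when the directory could not be created; a minimal, self-contained sketch (the directory name is arbitrary):

```java
import java.io.File;
import java.io.IOException;

public class MkdirsCheck {
    // mkdirs() returns false when it did not create the directory;
    // silently ignoring that hides setup failures, which is what
    // FindBugs warns about.
    static void ensureDir(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Unable to create directory " + dir);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "mkdirs-check-demo");
        ensureDir(dir); // creates it, or confirms it already exists
        ensureDir(dir); // idempotent: mkdirs() returns false, but it is a directory
        System.out.println(dir.isDirectory());
    }
}
```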
[jira] [Commented] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199812#comment-16199812 ]

Hadoop QA commented on HDFS-12497:
----------------------------------

+1 overall

| Vote | Subsystem | Runtime | Comment |
|  0 | reexec | 5m 45s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 15m 51s | trunk passed |
| +1 | compile | 0m 58s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 1m 8s | trunk passed |
| +1 | shadedclient | 11m 3s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 3s | trunk passed |
| +1 | javadoc | 0m 52s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 1m 0s | the patch passed |
| +1 | javac | 1m 0s | the patch passed |
| +1 | checkstyle | 0m 39s | the patch passed |
| +1 | mvnsite | 1m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 7s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 57s | the patch passed |
| +1 | javadoc | 0m 46s | the patch passed |
|| Other Tests ||
| +1 | unit | 102m 48s | hadoop-hdfs in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 157m 33s | |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:3d04c00 |
| JIRA Issue | HDFS-12497 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891407/HDFS-12497.004.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 0dadabf216ab 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3d04c00 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21634/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21634/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Re-enable TestDFSStripedOutputStreamWithFailure tests
> -----------------------------------------------------
>
>                 Key: HDFS-12497
>                 URL: https://issues.apache.org/jira/browse/HDFS-12497
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components:
[jira] [Updated] (HDFS-12620) Backporting HDFS-10467 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-12620:
-------------------------------
    Attachment: HDFS-10467-branch-2.003.patch

> Backporting HDFS-10467 to branch-2
> ----------------------------------
>
>                 Key: HDFS-12620
>                 URL: https://issues.apache.org/jira/browse/HDFS-12620
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>         Attachments: HDFS-10467-branch-2.001.patch, HDFS-10467-branch-2.002.patch, HDFS-10467-branch-2.003.patch, HDFS-10467-branch-2.patch, HDFS-12620-branch-2.000.patch
>
>
> When backporting HDFS-10467, a few things changed:
> * {{bin\hdfs}}
> * {{ClientProtocol}}
> * Java 7 not supporting method references
> * {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is {{org.mortbay.util.ajax.JSON}}
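One of the backport items above is that Java 7 has no method references (or lambdas), so branch-2 code must fall back to anonymous inner classes. A generic sketch of the mechanical rewrite; the Callable and greet() here are illustrative, not taken from the actual patch:

```java
import java.util.concurrent.Callable;

public class Java7Backport {
    static String greet() { return "hello"; }

    public static void main(String[] args) throws Exception {
        // Java 8+ (trunk): a method reference.
        Callable<String> modern = Java7Backport::greet;

        // Java 7 (branch-2): the same binding as an anonymous class.
        Callable<String> legacy = new Callable<String>() {
            @Override
            public String call() {
                return greet();
            }
        };

        System.out.println(modern.call());
        System.out.println(legacy.call());
    }
}
```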
[jira] [Commented] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199805#comment-16199805 ]

Hadoop QA commented on HDFS-12411:
----------------------------------

-1 overall

| Vote | Subsystem | Runtime | Comment |
|  0 | reexec | 0m 18s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 12 new or modified test files. |
|| HDFS-7240 Compile Tests ||
|  0 | mvndep | 0m 30s | Maven dependency ordering for branch |
| +1 | mvninstall | 15m 19s | HDFS-7240 passed |
| +1 | compile | 1m 39s | HDFS-7240 passed |
| +1 | checkstyle | 0m 47s | HDFS-7240 passed |
| +1 | mvnsite | 1m 47s | HDFS-7240 passed |
| +1 | shadedclient | 12m 35s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 2s | HDFS-7240 passed |
| +1 | javadoc | 1m 56s | HDFS-7240 passed |
|| Patch Compile Tests ||
|  0 | mvndep | 0m 7s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 36s | the patch passed |
| +1 | compile | 1m 36s | the patch passed |
| +1 | cc | 1m 36s | the patch passed |
| +1 | javac | 1m 36s | the patch passed |
| +1 | checkstyle | 0m 44s | the patch passed |
| +1 | mvnsite | 1m 41s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 13s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 4m 13s | the patch passed |
| +1 | javadoc | 1m 51s | the patch passed |
|| Other Tests ||
| +1 | unit | 1m 32s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 94m 10s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 156m 55s | |

| Reason | Tests |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.ozone.scm.node.TestQueryNode |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12411 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891406/HDFS-12411-HDFS-7240.007.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc xml |
| uname | Linux e1bca66bbe7c
[jira] [Updated] (HDFS-12620) Backporting HDFS-10467 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-12620:
-------------------------------
    Attachment: (was: HDFS-10467-branch-2.003.patch)

> Backporting HDFS-10467 to branch-2
> ----------------------------------
>
>                 Key: HDFS-12620
>                 URL: https://issues.apache.org/jira/browse/HDFS-12620
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>         Attachments: HDFS-10467-branch-2.001.patch, HDFS-10467-branch-2.002.patch, HDFS-10467-branch-2.003.patch, HDFS-10467-branch-2.patch, HDFS-12620-branch-2.000.patch
>
>
> When backporting HDFS-10467, a few things changed:
> * {{bin\hdfs}}
> * {{ClientProtocol}}
> * Java 7 not supporting method references
> * {{org.eclipse.jetty.util.ajax.JSON}} in branch-2 is {{org.mortbay.util.ajax.JSON}}
[jira] [Commented] (HDFS-12621) Inconsistency/confusion around ViewFileSystem.getDelagation
[ https://issues.apache.org/jira/browse/HDFS-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199793#comment-16199793 ]

Suresh Srinivas commented on HDFS-12621:
----------------------------------------

Added Mohammad to the contributor list and assigned this jira to him per his request.

> Inconsistency/confusion around ViewFileSystem.getDelagation
> -----------------------------------------------------------
>
>                 Key: HDFS-12621
>                 URL: https://issues.apache.org/jira/browse/HDFS-12621
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.3
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Mohammad Kamrul Islam
>
> *Symptom*:
> When a user invokes ViewFileSystem.getDelegationToken(String renewer), she gets "null", whereas any other file system returns a valid delegation token. For a normal user this is very confusing, and it takes substantial time to debug and find an alternative.
> *Root Cause:*
> ViewFileSystem inherits the basic implementation from FileSystem.getDelegationToken(), which returns "_null_". The comments in the source code indicate not to use it and to use addDelegationTokens() instead. However, it works fine for DistributedFileSystem.
> In short, the same client call works for hdfs:// but not for viewfs://, and there is no way for the end user to identify the root cause. This also creates a lot of confusion for any service that is supposed to work with both viewfs and hdfs.
> *Possible Solution*:
> _Option 1:_ Add a LOG.warn() with the reasons/alternative before returning "null" in the base class.
> _Option 2:_ As done for other file systems, ViewFileSystem can override the method with an implementation that returns the token related to fs.defaultFS. In this case, the defaultFS is something like "viewfs://..". We need to find the actual namenode and use it to retrieve the delegation token.
> _Option 3:_ Open for suggestions?
> *Last note:* My hunch is that there are very few users who may be using viewfs:// with Kerberos; therefore, this was not exposed earlier.
> I'm working on a good solution. Please add your suggestion.
[jira] [Assigned] (HDFS-12621) Inconsistency/confusion around ViewFileSystem.getDelagation
[ https://issues.apache.org/jira/browse/HDFS-12621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas reassigned HDFS-12621:
--------------------------------------
    Assignee: Mohammad Kamrul Islam

> Inconsistency/confusion around ViewFileSystem.getDelagation
> -----------------------------------------------------------
>
>                 Key: HDFS-12621
>                 URL: https://issues.apache.org/jira/browse/HDFS-12621
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.3
>            Reporter: Mohammad Kamrul Islam
>            Assignee: Mohammad Kamrul Islam
>
> *Symptom*:
> When a user invokes ViewFileSystem.getDelegationToken(String renewer), she gets "null", whereas any other file system returns a valid delegation token. For a normal user this is very confusing, and it takes substantial time to debug and find an alternative.
> *Root Cause:*
> ViewFileSystem inherits the basic implementation from FileSystem.getDelegationToken(), which returns "_null_". The comments in the source code indicate not to use it and to use addDelegationTokens() instead. However, it works fine for DistributedFileSystem.
> In short, the same client call works for hdfs:// but not for viewfs://, and there is no way for the end user to identify the root cause. This also creates a lot of confusion for any service that is supposed to work with both viewfs and hdfs.
> *Possible Solution*:
> _Option 1:_ Add a LOG.warn() with the reasons/alternative before returning "null" in the base class.
> _Option 2:_ As done for other file systems, ViewFileSystem can override the method with an implementation that returns the token related to fs.defaultFS. In this case, the defaultFS is something like "viewfs://..". We need to find the actual namenode and use it to retrieve the delegation token.
> _Option 3:_ Open for suggestions?
> *Last note:* My hunch is that there are very few users who may be using viewfs:// with Kerberos; therefore, this was not exposed earlier.
> I'm working on a good solution. Please add your suggestion.
[jira] [Updated] (HDFS-12632) Ozone: OzoneFileSystem: Add contract tests to OzoneFileSystem
[ https://issues.apache.org/jira/browse/HDFS-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jitendra Nath Pandey updated HDFS-12632:
----------------------------------------
    Labels: ozoneMerge  (was: )

> Ozone: OzoneFileSystem: Add contract tests to OzoneFileSystem
> -------------------------------------------------------------
>
>                 Key: HDFS-12632
>                 URL: https://issues.apache.org/jira/browse/HDFS-12632
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>              Labels: ozoneMerge
>             Fix For: HDFS-7240
>
>
> HDFS-11704 adds OzoneFileSystem (aka o3) to Ozone. This jira will be used to add contract tests for the filesystem to Ozone.
[jira] [Commented] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured
[ https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199783#comment-16199783 ]

Hadoop QA commented on HDFS-12614:
----------------------------------

-1 overall

| Vote | Subsystem | Runtime | Comment |
|  0 | reexec | 3m 50s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 15m 31s | trunk passed |
| +1 | compile | 0m 57s | trunk passed |
| +1 | checkstyle | 0m 42s | trunk passed |
| +1 | mvnsite | 1m 7s | trunk passed |
| +1 | shadedclient | 10m 27s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 1s | trunk passed |
| +1 | javadoc | 0m 52s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 0m 58s | the patch passed |
| +1 | javac | 0m 58s | the patch passed |
| +1 | checkstyle | 0m 40s | the patch passed |
| +1 | mvnsite | 1m 3s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 57s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 6s | the patch passed |
| +1 | javadoc | 0m 50s | the patch passed |
|| Other Tests ||
| -1 | unit | 89m 18s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 141m 5s | |

| Reason | Tests |
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |

| Subsystem | Report/Notes |
| Docker | Image:yetus/hadoop:3d04c00 |
| JIRA Issue | HDFS-12614 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891402/HDFS-12614.02.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b4828850e295 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3d04c00 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21632/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21632/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21632/console |
| Powered
[jira] [Commented] (HDFS-12635) Unnecessary exception declaration of the CellBuffers constructor
[ https://issues.apache.org/jira/browse/HDFS-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199759#comment-16199759 ]

Hadoop QA commented on HDFS-12635:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 3m 50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 11s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:3d04c00 |
| JIRA Issue | HDFS-12635 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891409/HDFS-12635.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8cb25eb5f1d9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3d04c00 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21636/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21636/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Unnecessary exception declaration of the CellBuffers constructor
> 
>
> Key:
[jira] [Commented] (HDFS-12632) Ozone: OzoneFileSystem: Add contract tests to OzoneFileSystem
[ https://issues.apache.org/jira/browse/HDFS-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199750#comment-16199750 ]

Anu Engineer commented on HDFS-12632:
-------------------------------------

[~msingh] Thanks for filing this. This is likely the next JIRA for us to tackle for the o3 file system.

> Ozone: OzoneFileSystem: Add contract tests to OzoneFileSystem
> -
>
> Key: HDFS-12632
> URL: https://issues.apache.org/jira/browse/HDFS-12632
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> HDFS-11704 adds OzoneFileSystem aka (o3) to Ozone. This jira will be used to
> add ContractTest for the filesystem to Ozone.
[jira] [Commented] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199746#comment-16199746 ]

Anu Engineer commented on HDFS-12631:
-------------------------------------

+1, pending jenkins.

> Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
> -
>
> Key: HDFS-12631
> URL: https://issues.apache.org/jira/browse/HDFS-12631
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: scm
> Affects Versions: HDFS-7240
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Labels: ozoneMerge
> Attachments: HDFS-12631-HDFS-7240.001.patch
>
>
> Currently the ContainerStorageLocation#ContainerStorageLocation does not
> resolve to the actual container prefix before passing it to
> CachingGetSpaceUsed for calculation. As a result of that, if SCM datanode is
> running along with HDFS datanode, the HDFS datanode usage will be included in
> scmUsage too, which is incorrect. This ticket is opened to fix it.
> {code}
> this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir)
>     .setConf(conf)
>     .setInitialUsed(loadScmUsed())
>     .build();
> {code}
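[Editor's note] The idea behind the HDFS-12631 description quoted above can be sketched in a few lines: narrow the volume path down to the SCM-specific subtree before handing it to a du-style scanner, so sibling HDFS block directories on the same disk are not counted. This is only an illustration; the directory name `scm/containers` and the class/method names below are hypothetical, not the actual Ozone layout or patch.

```java
import java.io.File;

// Sketch only: resolve a shared volume root to the SCM-specific subtree
// before measuring usage. "scm/containers" is a hypothetical layout.
public class ScmUsagePathSketch {

    // Narrow the path the space-used scanner walks. Without this, a scan
    // rooted at the volume would also count HDFS datanode data.
    static File resolveScmDir(File volumeRoot, String containerPrefix) {
        return new File(volumeRoot, containerPrefix);
    }

    public static void main(String[] args) {
        File dataDir = new File("/data/disk1");
        File scmDir = resolveScmDir(dataDir, "scm/containers");
        // In the real code, CachingGetSpaceUsed would be built with
        // setPath(scmDir) rather than setPath(dataDir).
        System.out.println(scmDir.getPath());
    }
}
```

With this resolution in place, the scmUsage figure reflects only the container directory, which is what the node report needs when HDFS and SCM datanodes share a disk.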
[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures
[ https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199745#comment-16199745 ]

Weiwei Yang commented on HDFS-12583:
------------------------------------

Looks good to me, +1 to v4 patch, thanks [~linyiqun].

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch, HDFS-12583-HDFS-7240.004.patch
>
>
> There are some places that swallow exceptions that makes client hard to debug
> the failure. For example, if we get xceiver client from xceiver client
> manager error, client only gets the error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting XceiverClient.
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
> at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The error exception stack is missing. We should print the error log as well.
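[Editor's note] The HDFS-12583 description above is about exactly this pattern: when a low-level failure is wrapped without preserving (or logging) the original exception, callers see only the wrapper's stack, as in the {noformat} trace. A minimal, self-contained sketch of the difference follows; the class and method names are hypothetical, not the actual Ozone code.

```java
// Sketch only (hypothetical names): preserve a low-level failure as the
// cause of the wrapping exception so the root failure stays debuggable.
public class ExceptionWrapSketch {

    static class ClientException extends RuntimeException {
        ClientException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Stand-in for the failing low-level call (e.g. acquiring a client).
    static void acquireClient() {
        throw new IllegalStateException("no datanodes available");
    }

    static ClientException wrapWithCause() {
        try {
            acquireClient();
            throw new AssertionError("unreachable");
        } catch (IllegalStateException e) {
            // Bad: new ClientException("Exception getting XceiverClient.", null)
            // discards the stack of 'e'. Good: pass 'e' as the cause (and log
            // it before rethrowing in real code) so callers see the root error.
            return new ClientException("Exception getting XceiverClient.", e);
        }
    }

    public static void main(String[] args) {
        ClientException wrapped = wrapWithCause();
        // The original failure is recoverable from the cause chain.
        System.out.println(wrapped.getCause().getMessage());
    }
}
```

The same effect in logging terms: calling `LOG.error(msg, e)` before wrapping keeps the full trace in the server log even if the wrapper sent to the client stays terse.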
[jira] [Commented] (HDFS-12620) Backporting HDFS-10467 to branch-2
[ https://issues.apache.org/jira/browse/HDFS-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199734#comment-16199734 ]

Hadoop QA commented on HDFS-12620:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 25 new or modified test files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 48s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 372 unchanged - 0 fixed = 374 total (was 372) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 11 new + 624 unchanged - 0 fixed = 635 total (was 624) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}523m 2s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 5m 36s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}566m 13s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestCheckpoint |
| | hadoop.hdfs.server.namenode.TestStreamFile |
| | hadoop.hdfs.TestReplaceDatanodeOnFailure |
| | hadoop.hdfs.server.namenode.TestNameNodeHttpServer |
| | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
| | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens |
| | hadoop.hdfs.server.namenode.TestNameNodeReconfigure |
| | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
| | hadoop.hdfs.server.namenode.TestAddBlock |
| Timed out junit tests | org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend |
| | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
| | org.apache.hadoop.hdfs.server.namenode.TestStartup |
| | org.apache.hadoop.hdfs.TestEncryptionZonesWithHA |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement |
| |
[jira] [Commented] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199733#comment-16199733 ]

Hadoop QA commented on HDFS-12585:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 25s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 45s{color} | {color:green} root generated 0 new + 1274 unchanged - 1 fixed = 1274 total (was 1275) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 21s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 3 fixed = 2 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 7s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}226m 55s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
| | The field org.apache.hadoop.conf.ConfServlet.loadDescriptionFromXml is transient but isn't set by deserialization In ConfServlet.java:but isn't set by deserialization In ConfServlet.java |
| Failed junit tests | hadoop.ozone.container.common.impl.TestContainerPersistence |
| | hadoop.cblock.TestCBlockReadWrite |
| | hadoop.ozone.web.client.TestBuckets |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\ \\
|| Subsystem || Report/Notes ||
[jira] [Commented] (HDFS-12623) Add UT for the Test Command
[ https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199730#comment-16199730 ]

Hadoop QA commented on HDFS-12623:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 75 new + 8 unchanged - 0 fixed = 83 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 14s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 6s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
| | hadoop.net.TestDNS |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12623 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891398/HDFS-12623.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9f88ed871f7b 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a297fb0 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21631/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21631/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21631/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console
[jira] [Resolved] (HDFS-11704) OzoneFileSystem: A Hadoop file system implementation for Ozone
[ https://issues.apache.org/jira/browse/HDFS-11704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mukul Kumar Singh resolved HDFS-11704.
--------------------------------------
Resolution: Fixed
Fix Version/s: HDFS-7240

The functionality for OzoneFileSystem has been added through JIRAs HDFS-12199, HDFS-12425, and HDFS-12572. There are certain minor work items which are needed to improve the functionality and metrics for OzoneFileSystem. The following JIRAs will track the remaining issues:
HDFS-12636 - both rest/rpc backend should be supported using unified OzoneClient client
HDFS-12634 - Add StorageStatistics to OzoneFileSystem
HDFS-12632 - Add contract tests to OzoneFileSystem
I am going to mark this JIRA as resolved for now.

> OzoneFileSystem: A Hadoop file system implementation for Ozone
> --
>
> Key: HDFS-11704
> URL: https://issues.apache.org/jira/browse/HDFS-11704
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: fs/ozone, ozone
> Reporter: Mingliang Liu
> Assignee: Mukul Kumar Singh
> Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-11704-HDFS-7240.wip.patch, Ozone FileSystem Design.pdf
>
[jira] [Commented] (HDFS-12636) Ozone: OzoneFileSystem: both rest/rpc backend should be supported using unified OzoneClient client
[ https://issues.apache.org/jira/browse/HDFS-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199722#comment-16199722 ]

Mukul Kumar Singh commented on HDFS-12636:
------------------------------------------

This jira will require HDFS-12490 to display filesystem modification time for status operations.

> Ozone: OzoneFileSystem: both rest/rpc backend should be supported using
> unified OzoneClient client
> --
>
> Key: HDFS-12636
> URL: https://issues.apache.org/jira/browse/HDFS-12636
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> The OzoneClient library provides a method to invoke both RPC as well as REST
> based methods to Ozone. This API will help in improving both the
> performance as well as the interface management in OzoneFileSystem.
> This jira will be used to convert the REST based calls to use this new
> unified client.
[jira] [Comment Edited] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199717#comment-16199717 ]

Xiaoyu Yao edited comment on HDFS-12631 at 10/11/17 2:45 AM:
-------------------------------------------------------------

cc: [~anu], the patch will ensure the node report has the accurate DF/DU of the container location when the HDFS datanode and SCM datanode are running side-by-side.

was (Author: xyao):
cc: [~anu]

> Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
> -
>
> Key: HDFS-12631
> URL: https://issues.apache.org/jira/browse/HDFS-12631
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: scm
> Affects Versions: HDFS-7240
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Labels: ozoneMerge
> Attachments: HDFS-12631-HDFS-7240.001.patch
>
>
> Currently the ContainerStorageLocation#ContainerStorageLocation does not
> resolve to the actual container prefix before passing it to
> CachingGetSpaceUsed for calculation. As a result of that, if SCM datanode is
> running along with HDFS datanode, the HDFS datanode usage will be included in
> scmUsage too, which is incorrect. This ticket is opened to fix it.
> {code}
> this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir)
>     .setConf(conf)
>     .setInitialUsed(loadScmUsed())
>     .build();
> {code}
[jira] [Updated] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDFS-12631:
------------------------------
Status: Patch Available (was: Open)

> Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
> -
>
> Key: HDFS-12631
> URL: https://issues.apache.org/jira/browse/HDFS-12631
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: scm
> Affects Versions: HDFS-7240
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Labels: ozoneMerge
> Attachments: HDFS-12631-HDFS-7240.001.patch
>
>
> Currently the ContainerStorageLocation#ContainerStorageLocation does not
> resolve to the actual container prefix before passing it to
> CachingGetSpaceUsed for calculation. As a result of that, if SCM datanode is
> running along with HDFS datanode, the HDFS datanode usage will be included in
> scmUsage too, which is incorrect. This ticket is opened to fix it.
> {code}
> this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir)
>     .setConf(conf)
>     .setInitialUsed(loadScmUsed())
>     .build();
> {code}
[jira] [Commented] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199717#comment-16199717 ]

Xiaoyu Yao commented on HDFS-12631:
-----------------------------------

cc: [~anu]

> Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
> -
>
> Key: HDFS-12631
> URL: https://issues.apache.org/jira/browse/HDFS-12631
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: scm
> Affects Versions: HDFS-7240
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Labels: ozoneMerge
> Attachments: HDFS-12631-HDFS-7240.001.patch
>
>
> Currently the ContainerStorageLocation#ContainerStorageLocation does not
> resolve to the actual container prefix before passing it to
> CachingGetSpaceUsed for calculation. As a result of that, if SCM datanode is
> running along with HDFS datanode, the HDFS datanode usage will be included in
> scmUsage too, which is incorrect. This ticket is opened to fix it.
> {code}
> this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir)
>     .setConf(conf)
>     .setInitialUsed(loadScmUsed())
>     .build();
> {code}
[jira] [Created] (HDFS-12636) Ozone: OzoneFileSystem: both rest/rpc backend should be supported using unified OzoneClient client
Mukul Kumar Singh created HDFS-12636:
----------------------------------------

Summary: Ozone: OzoneFileSystem: both rest/rpc backend should be supported using unified OzoneClient client
Key: HDFS-12636
URL: https://issues.apache.org/jira/browse/HDFS-12636
Project: Hadoop HDFS
Issue Type: Sub-task
Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
Fix For: HDFS-7240


The OzoneClient library provides a method to invoke both RPC as well as REST based methods to Ozone. This API will help in improving both the performance as well as the interface management in OzoneFileSystem.

This jira will be used to convert the REST based calls to use this new unified client.
[jira] [Updated] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDFS-12631:
------------------------------
Attachment: (was: HDFS-12631-HDFS-7240.001.patch)

> Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
> -
>
> Key: HDFS-12631
> URL: https://issues.apache.org/jira/browse/HDFS-12631
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: scm
> Affects Versions: HDFS-7240
> Reporter: Xiaoyu Yao
> Assignee: Xiaoyu Yao
> Labels: ozoneMerge
> Attachments: HDFS-12631-HDFS-7240.001.patch
>
>
> Currently the ContainerStorageLocation#ContainerStorageLocation does not
> resolve to the actual container prefix before passing it to
> CachingGetSpaceUsed for calculation. As a result of that, if SCM datanode is
> running along with HDFS datanode, the HDFS datanode usage will be included in
> scmUsage too, which is incorrect. This ticket is opened to fix it.
> {code}
> this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir)
>     .setConf(conf)
>     .setInitialUsed(loadScmUsed())
>     .build();
> {code}
[jira] [Resolved] (HDFS-12633) Unnecessary exception declaration of the CellBuffers constructor
[ https://issues.apache.org/jira/browse/HDFS-12633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Huafeng Wang resolved HDFS-12633.
---------------------------------
Resolution: Duplicate

> Unnecessary exception declaration of the CellBuffers constructor
> 
>
> Key: HDFS-12633
> URL: https://issues.apache.org/jira/browse/HDFS-12633
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Huafeng Wang
> Assignee: Huafeng Wang
> Priority: Minor
>
[jira] [Updated] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
[ https://issues.apache.org/jira/browse/HDFS-12631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12631: -- Attachment: HDFS-12631-HDFS-7240.001.patch > Ozone: ContainerStorageLocation#scmUsage should count only SCM usage > - > > Key: HDFS-12631 > URL: https://issues.apache.org/jira/browse/HDFS-12631 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: scm >Affects Versions: HDFS-7240 >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12631-HDFS-7240.001.patch > > > Currently the ContainerStorageLocation#ContainerStorageLocation constructor does not > resolve the data directory to the actual container prefix before passing it to > CachingGetSpaceUsed for calculation. As a result, if the SCM datanode is > running alongside an HDFS datanode, the HDFS datanode usage will be included in > scmUsage too, which is incorrect. This ticket is opened to fix it. > {code} > this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir) > .setConf(conf) > .setInitialUsed(loadScmUsed()) > .build(); > {code}
[jira] [Updated] (HDFS-12635) Unnecessary exception declaration of the CellBuffers constructor
[ https://issues.apache.org/jira/browse/HDFS-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-12635: Status: Patch Available (was: Open) > Unnecessary exception declaration of the CellBuffers constructor > > > Key: HDFS-12635 > URL: https://issues.apache.org/jira/browse/HDFS-12635 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Huafeng Wang >Assignee: Huafeng Wang >Priority: Minor > Attachments: HDFS-12635.001.patch > >
[jira] [Updated] (HDFS-12635) Unnecessary exception declaration of the CellBuffers constructor
[ https://issues.apache.org/jira/browse/HDFS-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-12635: Attachment: HDFS-12635.001.patch > Unnecessary exception declaration of the CellBuffers constructor > > > Key: HDFS-12635 > URL: https://issues.apache.org/jira/browse/HDFS-12635 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Huafeng Wang >Assignee: Huafeng Wang >Priority: Minor > Attachments: HDFS-12635.001.patch > >
[jira] [Created] (HDFS-12635) Unnecessary exception declaration of the CellBuffers constructor
Huafeng Wang created HDFS-12635: --- Summary: Unnecessary exception declaration of the CellBuffers constructor Key: HDFS-12635 URL: https://issues.apache.org/jira/browse/HDFS-12635 Project: Hadoop HDFS Issue Type: Bug Reporter: Huafeng Wang Assignee: Huafeng Wang Priority: Minor
[jira] [Created] (HDFS-12633) Unnecessary exception declaration of the CellBuffers constructor
Huafeng Wang created HDFS-12633: --- Summary: Unnecessary exception declaration of the CellBuffers constructor Key: HDFS-12633 URL: https://issues.apache.org/jira/browse/HDFS-12633 Project: Hadoop HDFS Issue Type: Bug Reporter: Huafeng Wang Assignee: Huafeng Wang Priority: Minor
[jira] [Created] (HDFS-12634) Ozone: OzoneFileSystem: Add StorageStatistics to OzoneFileSystem
Mukul Kumar Singh created HDFS-12634: Summary: Ozone: OzoneFileSystem: Add StorageStatistics to OzoneFileSystem Key: HDFS-12634 URL: https://issues.apache.org/jira/browse/HDFS-12634 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh Fix For: HDFS-7240 HADOOP-13065 added StorageStatistics to Hadoop; StorageStatistics provides a granular way to track metrics for each filesystem operation. This jira will be used to add StorageStatistics to OzoneFileSystem.
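The kind of per-operation tracking described above can be sketched with a plain map of counters — one per filesystem operation name, incremented on each call. This is a simplified, self-contained stand-in, not the actual Hadoop StorageStatistics API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Toy per-operation statistics, in the spirit of StorageStatistics:
 *  each filesystem operation name gets its own counter. */
public class OpStatsSketch {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    /** Record one invocation of the named operation. */
    void record(String op) {
        counters.computeIfAbsent(op, k -> new AtomicLong()).incrementAndGet();
    }

    /** Read the counter for an operation; 0 if it was never recorded. */
    long get(String op) {
        AtomicLong c = counters.get(op);
        return c == null ? 0L : c.get();
    }
}
```

A filesystem wrapper would call `record("open")`, `record("rename")`, etc. at the top of each operation, so the counters can later be dumped per scheme or per instance.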
[jira] [Created] (HDFS-12632) Ozone: OzoneFileSystem: Add contract tests to OzoneFileSystem
Mukul Kumar Singh created HDFS-12632: Summary: Ozone: OzoneFileSystem: Add contract tests to OzoneFileSystem Key: HDFS-12632 URL: https://issues.apache.org/jira/browse/HDFS-12632 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh Fix For: HDFS-7240 HDFS-11704 adds OzoneFileSystem (aka o3) to Ozone. This jira will be used to add contract tests for the filesystem to Ozone.
[jira] [Commented] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199714#comment-16199714 ] Hadoop QA commented on HDFS-12544: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 4 new + 390 unchanged - 4 fixed = 394 total (was 394) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 466 unchanged - 5 fixed = 467 total (was 471) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestLeaseRecoveryStriped | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12544 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891390/HDFS-12544.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 6f6425d51649 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 78af6cd | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21630/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | checkstyle |
[jira] [Updated] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-12497: Attachment: HDFS-12497.004.patch > Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Labels: flaky-test, hdfs-ec-3.0-must-do > Attachments: HDFS-12497.001.patch, HDFS-12497.002.patch, > HDFS-12497.003.patch, HDFS-12497.004.patch > > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Created] (HDFS-12631) Ozone: ContainerStorageLocation#scmUsage should count only SCM usage
Xiaoyu Yao created HDFS-12631: - Summary: Ozone: ContainerStorageLocation#scmUsage should count only SCM usage Key: HDFS-12631 URL: https://issues.apache.org/jira/browse/HDFS-12631 Project: Hadoop HDFS Issue Type: Sub-task Components: scm Affects Versions: HDFS-7240 Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Currently the ContainerStorageLocation#ContainerStorageLocation constructor does not resolve the data directory to the actual container prefix before passing it to CachingGetSpaceUsed for calculation. As a result, if the SCM datanode is running alongside an HDFS datanode, the HDFS datanode usage will be included in scmUsage too, which is incorrect. This ticket is opened to fix it. {code} this.scmUsage = new CachingGetSpaceUsed.Builder().setPath(dataDir) .setConf(conf) .setInitialUsed(loadScmUsed()) .build(); {code}
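To make the over-counting concrete, here is a minimal, self-contained sketch. The directory names and sizes are illustrative only (the real fix changes the path handed to the CachingGetSpaceUsed builder, not this toy code): measuring at the shared data directory counts the co-located HDFS datanode's files, while measuring at the SCM container prefix counts only SCM usage.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy model of the bug: usage measured over a shared data dir
 *  includes HDFS datanode blocks; usage measured over the SCM
 *  container prefix does not. Paths/sizes are made up. */
public class ScmUsageSketch {

    /** A fake on-disk tree: path -> file size in bytes. */
    static Map<String, Long> demoTree() {
        Map<String, Long> files = new LinkedHashMap<>();
        files.put("/data/hdfs/blk_1001", 100L);    // HDFS datanode block
        files.put("/data/scm/containers/c1", 40L); // SCM container data
        return files;
    }

    /** Sum the sizes of entries under the given path prefix,
     *  mimicking what a space-usage calculator measures for its path. */
    static long usedBytes(Map<String, Long> files, String prefix) {
        long total = 0;
        for (Map.Entry<String, Long> e : files.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                total += e.getValue();
            }
        }
        return total;
    }
}
```

With this tree, `usedBytes(demoTree(), "/data/")` returns 140 (wrong scmUsage) while `usedBytes(demoTree(), "/data/scm/")` returns 40 (only SCM).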
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199700#comment-16199700 ] Nandakumar commented on HDFS-12519: --- Thanks [~vagarychen] for the thorough review. bq. {{LeaseManager}}, how about adding a heap in addition to activeLeases? The problem with using {{PriorityQueue}} is that re-ordering doesn't happen if an element inside the queue is modified. We have {{public void renew(long timeout)}} in {{Lease}}, which modifies the timeout value on which the ordering is based. I agree on the performance gain we would get by using a {{PriorityQueue}}; in that case we might have to remove support for lease renewal. bq. {{LeaseManager#LeaseMonitor#run}} About the interrupt in acquire, will the following case happen? Good point. >>will the lease monitor thread get stopped? When interrupt is called on a running thread, only the interrupt flag is set; the thread won't be stopped. We can check the interrupt flag and perform the required operations. >>it seems it is possible that whenever leaseMonitor is checking timeout, an >>acquire call may come in and interrupt leaseMonitor... This can be handled; I will update the patch to handle this situation. > Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch, HDFS-12519-HDFS-7240.002.patch > > > Many objects, including containers and pipelines, can time out during the creation > process. We need a way to track these timeouts. This Lease Manager allows SCM > to hold a lease on these objects and helps SCM time out while waiting for the creation > of these objects. 
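The point about {{PriorityQueue}} can be demonstrated with a minimal, self-contained sketch: a `java.util.PriorityQueue` orders elements only at insertion time, so mutating a queued element's ordering key (as a lease renewal would) does not re-heapify. The `Lease` class here is a tiny stand-in, not the actual Ozone class:

```java
import java.util.PriorityQueue;

/** Shows that PriorityQueue does not re-order when a queued
 *  element's comparison key is mutated in place. */
public class LeaseQueueSketch {
    static class Lease implements Comparable<Lease> {
        final String holder;
        long expiry;
        Lease(String holder, long expiry) { this.holder = holder; this.expiry = expiry; }
        /** Mutates the ordering key, like a lease renewal would. */
        void renew(long extra) { expiry += extra; }
        @Override
        public int compareTo(Lease o) { return Long.compare(expiry, o.expiry); }
    }

    /** Returns the holder the queue *thinks* expires first. */
    static String headAfterRenewal() {
        PriorityQueue<Lease> q = new PriorityQueue<>();
        Lease a = new Lease("a", 10);
        Lease b = new Lease("b", 20);
        q.add(a);
        q.add(b);
        a.renew(100);           // a now expires at 110, logically after b
        return q.peek().holder; // still "a": the heap was never re-ordered
    }
}
```

So a heap buys O(log n) expiry lookups only if leases are immutable once queued (or are removed and re-inserted on renewal), which is exactly the trade-off discussed above.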
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: HDFS-12411-HDFS-7240.007.patch Fix the checkstyle issue. > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch, > HDFS-12411-HDFS-7240.006.patch, HDFS-12411-HDFS-7240.007.patch > > > Currently the DN ReportState for containers only has a counter; we will need to > include individual container usage information so that SCM can > * close containers when they are full > * assign containers for the block service with different policies > * etc.
[jira] [Updated] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured
[ https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12614: -- Attachment: HDFS-12614.02.patch Thanks for the review, [~daryn]. I had the same dilemma on whether to change the semantics for the root path component. I didn't see any functionality failing because of this change, though. But I do concur that the semantic change was riskier. Attached the v02 patch to work around the issue in {{FSPermissionChecker#getINodeAttrs()}} for the null root path component. Please take a look. I will track the other enhancement you talked about in a new jira. > FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider > configured > -- > > Key: HDFS-12614 > URL: https://issues.apache.org/jira/browse/HDFS-12614 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12614.01.patch, HDFS-12614.02.patch, > HDFS-12614.test.01.patch > > > When an INodeAttributesProvider is configured, and when resolving a path (like > "/") and checking permissions, the following code throws a NullPointerException when > working on {{pathByNameArr}}. > {noformat} > private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx, > INode inode, int snapshotId) { > INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId); > if (getAttributesProvider() != null) { > String[] elements = new String[pathIdx + 1]; > for (int i = 0; i < elements.length; i++) { > elements[i] = DFSUtil.bytes2String(pathByNameArr[i]); <=== > } > inodeAttrs = getAttributesProvider().getAttributes(elements, > inodeAttrs); > } > return inodeAttrs; > } > {noformat} > Looks like for paths like "/", where the components split on the delimiter "/" > can be null, the pathByNameArr array can have null elements and can throw an > NPE. 
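The null-guard workaround discussed above can be sketched in a self-contained way. The `bytes2String` below is a simplified stand-in for `DFSUtil.bytes2String` (the real patch may differ): when a path like "/" is split into components, the root component can be a null byte[], so the conversion must tolerate it instead of dereferencing it.

```java
import java.nio.charset.StandardCharsets;

/** Sketch: converting path-by-name components to strings while
 *  tolerating a null root component, avoiding the NPE described
 *  in HDFS-12614. Not the actual NameNode code. */
public class PathComponentsSketch {

    /** Simplified stand-in for DFSUtil.bytes2String, with a null guard. */
    static String bytes2String(byte[] bytes) {
        // Treat the null root component as the empty string
        // instead of throwing NullPointerException.
        return bytes == null ? "" : new String(bytes, StandardCharsets.UTF_8);
    }

    /** Mirrors the element-building loop from getINodeAttrs(). */
    static String[] toElements(byte[][] pathByNameArr, int pathIdx) {
        String[] elements = new String[pathIdx + 1];
        for (int i = 0; i < elements.length; i++) {
            elements[i] = bytes2String(pathByNameArr[i]);
        }
        return elements;
    }
}
```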
[jira] [Updated] (HDFS-12623) Add UT for the Test Command
[ https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] legend updated HDFS-12623: -- Status: Open (was: Patch Available) > Add UT for the Test Command > --- > > Key: HDFS-12623 > URL: https://issues.apache.org/jira/browse/HDFS-12623 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.1.0 >Reporter: legend > Attachments: HDFS-12623.001.patch, HDFS-12623.patch > >
[jira] [Updated] (HDFS-12623) Add UT for the Test Command
[ https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] legend updated HDFS-12623: -- Status: Patch Available (was: Open) > Add UT for the Test Command > --- > > Key: HDFS-12623 > URL: https://issues.apache.org/jira/browse/HDFS-12623 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.1.0 >Reporter: legend > Attachments: HDFS-12623.001.patch, HDFS-12623.patch > >
[jira] [Updated] (HDFS-12623) Add UT for the Test Command
[ https://issues.apache.org/jira/browse/HDFS-12623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] legend updated HDFS-12623: -- Attachment: HDFS-12623.001.patch > Add UT for the Test Command > --- > > Key: HDFS-12623 > URL: https://issues.apache.org/jira/browse/HDFS-12623 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 3.1.0 >Reporter: legend > Attachments: HDFS-12623.001.patch, HDFS-12623.patch > >
[jira] [Commented] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199670#comment-16199670 ] Hadoop QA commented on HDFS-12411: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 38s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 3 unchanged - 0 fixed = 7 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 5s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}178m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12411 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891366/HDFS-12411-HDFS-7240.006.patch | | Optional Tests |
[jira] [Commented] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199650#comment-16199650 ] Takanobu Asanuma commented on HDFS-12547: - Thanks for your help, [~andrew.wang]! > Extend TestQuotaWithStripedBlocks with a random EC policy > - > > Key: HDFS-12547 > URL: https://issues.apache.org/jira/browse/HDFS-12547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Fix For: 3.0.0 > > Attachments: HDFS-12547.1.patch, HDFS-12547.2.patch > >
[jira] [Commented] (HDFS-12573) Divide the total block metrics into replica and ec
[ https://issues.apache.org/jira/browse/HDFS-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199644#comment-16199644 ] Takanobu Asanuma commented on HDFS-12573: - Thanks for reviewing and committing it, [~manojg]! > Divide the total block metrics into replica and ec > -- > > Key: HDFS-12573 > URL: https://issues.apache.org/jira/browse/HDFS-12573 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, metrics, namenode >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Fix For: 3.0.0 > > Attachments: HDFS-12573.1.patch, HDFS-12573.2.patch, > HDFS-12573.3.patch > > > Following HDFS-10999, let's separate the total block metrics. It would be useful > for administrators.
[jira] [Created] (HDFS-12630) Rolling restart can create inconsistency between blockMap and corrupt replicas map
Andre Araujo created HDFS-12630: --- Summary: Rolling restart can create inconsistency between blockMap and corrupt replicas map Key: HDFS-12630 URL: https://issues.apache.org/jira/browse/HDFS-12630 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.6.0 Reporter: Andre Araujo After an NN rolling restart, several HDFS files started showing block problems. Running FSCK for one of the files, or for the directory that contained it, would complete with a FAILED message but without any details of the failure. The NameNode log showed the following: {code} 2017-10-10 16:58:32,147 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: FSCK started by hdfs (auth:KERBEROS_SSL) from /10.92.128.4 for path /user/prod/data/file_20171010092201.csv at Tue Oct 10 16:58:32 PDT 2017 2017-10-10 16:58:32,147 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent number of corrupt replicas for blk_1941920008_1133195379 blockMap has 1 but corrupt replicas map has 2 2017-10-10 16:58:32,147 WARN org.apache.hadoop.hdfs.server.namenode.NameNode: Fsck on path '/user/prod/data/file_20171010092201.csv' FAILED java.lang.ArrayIndexOutOfBoundsException {code} After triggering a full block report for all the DNs, the problem went away.
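How the logged inconsistency can surface as an ArrayIndexOutOfBoundsException can be illustrated with a minimal, self-contained sketch: if one structure says a block has N corrupt replicas but another enumerates more than N, code that sizes an array from the first count and fills it from the second list walks off the end. The names below are illustrative, not the actual NameNode code:

```java
import java.util.Arrays;
import java.util.List;

/** Toy reproduction of "blockMap has 1 but corrupt replicas map has 2"
 *  leading to ArrayIndexOutOfBoundsException in a report builder. */
public class CorruptReplicaMismatchSketch {

    /** Sizes the output from one structure's count and fills it from
     *  the other's enumeration; throws AIOOBE when they disagree. */
    static String[] collect(int countFromBlockMap, List<String> nodesFromCorruptMap) {
        String[] out = new String[countFromBlockMap];
        for (int i = 0; i < nodesFromCorruptMap.size(); i++) {
            out[i] = nodesFromCorruptMap.get(i); // AIOOBE when the maps disagree
        }
        return out;
    }

    /** Returns true iff the mismatched counts from the log above throw. */
    static boolean mismatchThrows() {
        try {
            collect(1, Arrays.asList("dn1", "dn2"));
            return false;
        } catch (ArrayIndexOutOfBoundsException e) {
            return true;
        }
    }
}
```

A full block report repairs the problem because it rebuilds both structures from the datanodes' ground truth, restoring the invariant that the two counts agree.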
[jira] [Commented] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199636#comment-16199636 ] Hudson commented on HDFS-12547: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13065 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13065/]) HDFS-12547. Extend TestQuotaWithStripedBlocks with a random EC policy. (wang: rev a297fb08866305860dc17813c3db5701e9515101) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaWithStripedBlocks.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaWithStripedBlocksWithRandomECPolicy.java > Extend TestQuotaWithStripedBlocks with a random EC policy > - > > Key: HDFS-12547 > URL: https://issues.apache.org/jira/browse/HDFS-12547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Fix For: 3.0.0 > > Attachments: HDFS-12547.1.patch, HDFS-12547.2.patch > >
[jira] [Updated] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12547: --- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to trunk, thanks for the contribution [~tasanuma0829]! > Extend TestQuotaWithStripedBlocks with a random EC policy > - > > Key: HDFS-12547 > URL: https://issues.apache.org/jira/browse/HDFS-12547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Fix For: 3.0.0 > > Attachments: HDFS-12547.1.patch, HDFS-12547.2.patch > >
[jira] [Commented] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199616#comment-16199616 ] Andrew Wang commented on HDFS-12547: Checkstyle looks good and javac warnings are unrelated, will commit shortly. > Extend TestQuotaWithStripedBlocks with a random EC policy > - > > Key: HDFS-12547 > URL: https://issues.apache.org/jira/browse/HDFS-12547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12547.1.patch, HDFS-12547.2.patch > >
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199611#comment-16199611 ] Hadoop QA commented on HDFS-12553: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 294 unchanged - 16 fixed = 297 total (was 310) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}128m 10s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}176m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12553 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891348/HDFS-12553.10.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux c0d735e33a0c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1123f8f | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21626/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit |
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199598#comment-16199598 ] Hadoop QA commented on HDFS-12627: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 197 unchanged - 4 fixed = 199 total (was 201) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 7s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.federation.router.TestRouterRpc | | | hadoop.hdfs.TestMaintenanceState | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12627 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891345/HDFS-12627.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux e4552b7ec835 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1123f8f | | Default
[jira] [Updated] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
[ https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12544: -- Attachment: HDFS-12544.03.patch Thanks for the review [~yzhangal]. Attached v03 patch to address the following comments. Can you please review the latest patch? bq. It seems to make sense to include a new field snapshotDiffScopeDir in the SnapshotDiffInfo class, and initialize it as the constructor. Done. bq. suggest to move the checking from SnapshotManager%getSnapshottableAncestorDir to its caller, .. Done. bq. suggest to remove the method SnapshotManager%setSnapshotDiffAllowSnapRootDescendant, and use the config property to pass on the value to the cluster.. Done. bq. Nit. In SnapshotManager.java, change "directories" to "directory" in the following text... Done. > SnapshotDiff - support diff generation on any snapshot root descendant > directory > > > Key: HDFS-12544 > URL: https://issues.apache.org/jira/browse/HDFS-12544 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch, > HDFS-12544.03.patch > > > {noformat} > # hdfs snapshotDiff > > {noformat} > Using snapshot diff command, we can generate a diff report between any two > given snapshots under a snapshot root directory. The command today only > accepts the path that is a snapshot root. There are many deployments where > the snapshot root is configured at the higher level directory but the diff > report needed is only for a specific directory under the snapshot root. In > these cases, the diff report can be filtered for changes pertaining to the > directory we are interested in. But when the snapshot root directory is very > huge, the snapshot diff report generation can take minutes even if we are > interested to know the changes only in a small directory. 
So, it would be > highly performant if the diff report calculation can be limited to only the > interesting sub-directory of the snapshot root instead of the whole snapshot > root.
[jira] [Commented] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
[ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199592#comment-16199592 ] Hadoop QA commented on HDFS-12612: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 11s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 5s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12612 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891355/HDFS-12612.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6880ad3a2704 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 78af6cd | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Commented] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199571#comment-16199571 ] Andrew Wang commented on HDFS-12497: Thanks for the explanations. Could you split the InterruptedException change into a new JIRA, and also add the log back to this one? This will make the changes more self-contained. > Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Labels: flaky-test, hdfs-ec-3.0-must-do > Attachments: HDFS-12497.001.patch, HDFS-12497.002.patch, > HDFS-12497.003.patch > > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Commented] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199561#comment-16199561 ] Hadoop QA commented on HDFS-12585: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 49s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 39s{color} | {color:green} root generated 0 new + 1274 unchanged - 1 fixed = 1274 total (was 1275) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 11s{color} | {color:orange} root: The patch generated 1 new + 1 unchanged - 3 fixed = 2 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 40s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}201m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-common-project/hadoop-common | | | The field org.apache.hadoop.conf.ConfServlet.loadDescriptionFromXml is transient but isn't set by deserialization In ConfServlet.java:but isn't set by deserialization In ConfServlet.java | | Failed junit tests | hadoop.conf.TestCommonConfigurationFields | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem ||
[jira] [Updated] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12585: -- Attachment: HDFS-12585-HDFS-7240.03.patch [~anu], thanks for review. Removed log statement in patch v3. > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch, HDFS-12585-HDFS-7240.03.patch > > > Add description for each config in Ozone config UI
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199536#comment-16199536 ] Hadoop QA commented on HDFS-12627: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 48s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}153m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestSnapshotCommands | | | hadoop.cli.TestHDFSCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12627 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891339/HDFS-12627.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b80334507c32 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1123f8f | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21624/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21624/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21624/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT
[jira] [Updated] (HDFS-12629) NameNode UI should report total blocks count by type - replicated and erasure coded
[ https://issues.apache.org/jira/browse/HDFS-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12629: -- Attachment: NN_UI_Summary_BlockCount_BeforeFix.png > NameNode UI should report total blocks count by type - replicated and erasure > coded > --- > > Key: HDFS-12629 > URL: https://issues.apache.org/jira/browse/HDFS-12629 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: NN_UI_Summary_BlockCount_BeforeFix.png > > > Currently NameNode UI displays total files and directories and total blocks > in the cluster under the Summary tab. But, the total blocks count split by > type is missing. It would be good if we can display total blocks counts by > type (provided by HDFS-12573) along with the total block count. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12629) NameNode UI should report total blocks count by type - replicated and erasure coded
[ https://issues.apache.org/jira/browse/HDFS-12629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12629: -- Description: Currently NameNode UI displays total files and directories and total blocks in the cluster under the Summary tab. But, the total blocks count split by type is missing. It would be good if we can display total blocks counts by type (provided by HDFS-12573) along with the total block count. was:Currently NameNode UI displays total files and directories and total blocks in the cluster under the Summary tab. But, the total blocks count split by type is missing. It would be good if we can have these total blocks counts also displayed along with the total block count. > NameNode UI should report total blocks count by type - replicated and erasure > coded > --- > > Key: HDFS-12629 > URL: https://issues.apache.org/jira/browse/HDFS-12629 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: NN_UI_Summary_BlockCount_BeforeFix.png > > > Currently NameNode UI displays total files and directories and total blocks > in the cluster under the Summary tab. But, the total blocks count split by > type is missing. It would be good if we can display total blocks counts by > type (provided by HDFS-12573) along with the total block count. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
[ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199530#comment-16199530 ] Andrew Wang commented on HDFS-12612: Change looks pretty simple :) Did you dig up any info on the history of this code? I find it strange that it goes to all this work to explicitly re-throw the error like this. Even logging again is unnecessary, as long as it logs the first time. > DFSStripedOutputStream#close will throw if called a second time with a failed > streamer > -- > > Key: HDFS-12612 > URL: https://issues.apache.org/jira/browse/HDFS-12612 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12612.00.patch > > > Found while testing with Hive. We have a cluster with 2 DNs and the XOR-2-1 > policy. If you write a file and call close() twice, it throws this exception: > {noformat} > 17/10/04 16:02:14 WARN hdfs.DFSOutputStream: Cannot allocate parity > block(index=2, policy=XOR-2-1-1024k). Not enough datanodes? Exclude nodes=[] > ... > Caused by: java.io.IOException: Failed to get parity block, index=2 > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:500) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:524) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] 
> {noformat} > This is because in DFSStripedOutputStream#closeImpl, if the stream is closed, > we throw an exception if any of the striped streamers had an exception: > {code} > protected synchronized void closeImpl() throws IOException { > if (isClosed()) { > final MultipleIOException.Builder b = new MultipleIOException.Builder(); > for(int i = 0; i < streamers.size(); i++) { > final StripedDataStreamer si = getStripedDataStreamer(i); > try { > si.getLastException().check(true); > } catch (IOException e) { > b.add(e); > } > } > final IOException ioe = b.build(); > if (ioe != null) { > throw ioe; > } > return; > } > {code} > I think this is incorrect, since we only need to throw in this situation if > we have too many failed streamers. close should also be idempotent, so it > should throw the first time we call close if it's going to throw at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
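An editor's aside on the idempotent-close behavior described above: one way to make a second close() replay the outcome of the first call, rather than re-inspecting streamer state, is to cache the first call's exception. This is a minimal, purely illustrative sketch; the class and method names are hypothetical, not the actual DFSStripedOutputStream code:

```java
import java.io.IOException;

// Illustrative sketch of an idempotent close(): the first call decides
// success or failure, and every later call replays that same outcome
// instead of re-checking internal streamer state.
public class IdempotentCloseSketch {
    private boolean closed = false;
    private IOException firstCloseError = null;

    public synchronized void close() throws IOException {
        if (closed) {
            // Replay the original outcome; do not re-derive it.
            if (firstCloseError != null) {
                throw firstCloseError;
            }
            return;
        }
        closed = true;
        try {
            flushInternal();
        } catch (IOException e) {
            firstCloseError = e; // remember for subsequent calls
            throw e;
        }
    }

    // Stand-in for the real flush/streamer-check logic.
    protected void flushInternal() throws IOException {
    }
}
```

With this shape, close() throws the first time if it is going to throw at all, and repeated calls give a consistent result, which is the idempotency property the comment asks for.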
[jira] [Commented] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199529#comment-16199529 ] Anu Engineer commented on HDFS-12585: - LGTM, only one minor comment. Did you intend to do a debug log? {{LOG.info(gson.toJsonTree(filteredProperties).toString());}} > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch > > > Add description for each config in Ozone config UI -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
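For context on the info-vs-debug question above, the usual pattern is to guard message construction behind an isDebugEnabled() check so the serialization cost is only paid when debug logging is on. A toy sketch with a stub logger interface (the interface and method names are illustrative, standing in for a real logger such as slf4j, not the actual patch):

```java
// Toy sketch of demoting a noisy log line from info to debug; the
// MiniLog interface is a stand-in for a real logger such as slf4j.
public class LogLevelSketch {

    public interface MiniLog {
        boolean isDebugEnabled();
        void debug(String msg);
    }

    // Serialize the properties only when debug logging is enabled, so
    // the (potentially expensive) string construction is skipped otherwise.
    public static void logFilteredProperties(MiniLog log, Object filteredProperties) {
        if (log.isDebugEnabled()) {
            log.debug(String.valueOf(filteredProperties));
        }
    }
}
```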
[jira] [Created] (HDFS-12629) NameNode UI should report total blocks count by type - replicated and erasure coded
Manoj Govindassamy created HDFS-12629: - Summary: NameNode UI should report total blocks count by type - replicated and erasure coded Key: HDFS-12629 URL: https://issues.apache.org/jira/browse/HDFS-12629 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs Affects Versions: 3.0.0-beta1 Reporter: Manoj Govindassamy Assignee: Manoj Govindassamy Currently NameNode UI displays total files and directories and total blocks in the cluster under the Summary tab. But, the total blocks count split by type is missing. It would be good if we can have these total blocks counts also displayed along with the total block count. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: HDFS-12411-HDFS-7240.006.patch > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch, > HDFS-12411-HDFS-7240.006.patch > > > Current DN ReportState for container only has a counter, we will need to > include individual container usage information so that SCM can > * close container when they are full > * assign container for block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: (was: HDFS-12572-HDFS-7240.006.patch) > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch, > HDFS-12411-HDFS-7240.006.patch > > > Current DN ReportState for container only has a counter, we will need to > include individual container usage information so that SCM can > * close container when they are full > * assign container for block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: HDFS-12572-HDFS-7240.006.patch > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch, > HDFS-12411-HDFS-7240.006.patch > > > Current DN ReportState for container only has a counter, we will need to > include individual container usage information so that SCM can > * close container when they are full > * assign container for block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: (was: HDFS-12411-HDFS-7240.006.patch) > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch > > > Current DN ReportState for container only has a counter, we will need to > include individual container usage information so that SCM can > * close container when they are full > * assign container for block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: HDFS-12411-HDFS-7240.006.patch Thanks [~yuanbo] for the review. I've attached a rebased new patch to address the feedback. bq. 1. ContainerData.java line 49 Can we rename the size as maxSize, it's a bit confusing. Done. bq. 2. ContainerReport.java I've seen a lot of representing classes which only contain getters and setters, I'm wondering whether these classes are really useful since we could use protobuf classes instead. I'd prefer to abandon this kind of classes but I'm open on this point. This was introduced for different purposes. ContainerData was used by ContainerManagerImpl for persistence. ContainerStatus was used by ContainerManagerImpl, BlockDeletionService and container report. We can open a separate ticket to see if we could refactor to use protobuf msg directly. bq. 3. OzoneConfigKeys.java line 159 Can we add this parameter into ozone-default.xml? Done. bq. 4. I saw some refactor changes in your patch, can we file another JIRA to address it. But I'm good if you want to keep those changes. Chose to keep the changes, as most of the refactors are needed for the container report, such as the change in MetadataKeyFilters, plus a few unit-test cleanups. 
> Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch > > > Current DN ReportState for container only has a counter, we will need to > include individual container usage information so that SCM can > * close container when they are full > * assign container for block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin
[ https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199469#comment-16199469 ] Arun Suresh commented on HDFS-12257: I am ok with porting this into the next 2.9.x release or 2.10 > Expose getSnapshottableDirListing as a public API in HdfsAdmin > -- > > Key: HDFS-12257 > URL: https://issues.apache.org/jira/browse/HDFS-12257 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch, > HDFS-12257.003.patch > > > Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no > programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we > should expose listing there as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12572) Ozone: OzoneFileSystem: delete/list status/rename/mkdir APIs
[ https://issues.apache.org/jira/browse/HDFS-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12572: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks [~msingh] for the contribution and all for the reviews. I've committed the patch to the feature branch. > Ozone: OzoneFileSystem: delete/list status/rename/mkdir APIs > > > Key: HDFS-12572 > URL: https://issues.apache.org/jira/browse/HDFS-12572 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12572-HDFS-7240.001.patch, > HDFS-12572-HDFS-7240.002.patch, HDFS-12572-HDFS-7240.003.patch, > HDFS-12572-HDFS-7240.004.patch, HDFS-12572-HDFS-7240.005.patch, > HDFS-12572-HDFS-7240.006.patch > > > This jira will add the delete/list status/rename/mkdir APIs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12628) libhdfs crashes on thread exit for JNI+libhdfs applications
[ https://issues.apache.org/jira/browse/HDFS-12628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe McDonnell updated HDFS-12628: - Attachment: jni-util-test2.cc > libhdfs crashes on thread exit for JNI+libhdfs applications > --- > > Key: HDFS-12628 > URL: https://issues.apache.org/jira/browse/HDFS-12628 > Project: Hadoop HDFS > Issue Type: Bug > Components: native >Affects Versions: 3.0.0-alpha3 >Reporter: Joe McDonnell >Priority: Critical > Attachments: jni-util-test2.cc > > > Impala uses libhdfs to access HDFS while also using JNI to run other Java > code. Impala currently relies on HDFS's getJNIEnv to get a JNIEnv to interact > with the process JVM (which is created by HDFS code). It uses this JNIEnv > even for code that is not related to HDFS. > In recent versions of HDFS, getJNIEnv is no longer visible in libhdfs due to > HDFS-7879. In HDFS-8474, the proposed solution was for Impala to write its > own equivalent (tracked by IMPALA-2029). After implementing an equivalent of > getJNIEnv (heavily based on HDFS code, but with distinct names), we are > seeing crashes in hdfsThreadDestructor() in threads that use both HDFS and > JNI codepaths. The crash shows up under concurrency and does not reproduce in > serial execution. > I have distilled it down to a simple testcase that reproduces the issue. It > creates a JVM in the main thread (which Impala does at startup), then spawns > multiple threads that do basic HDFS and JNI work. I have removed all but the > essential steps. > This blocks running Impala on any hadoop version past 2.7 (when HDFS-7879 was > merged). Note that exposing getJNIEnv should unblock Impala development if a > fix is not forthcoming. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12628) libhdfs crashes on thread exit for JNI+libhdfs applications
Joe McDonnell created HDFS-12628: Summary: libhdfs crashes on thread exit for JNI+libhdfs applications Key: HDFS-12628 URL: https://issues.apache.org/jira/browse/HDFS-12628 Project: Hadoop HDFS Issue Type: Bug Components: native Affects Versions: 3.0.0-alpha3 Reporter: Joe McDonnell Priority: Critical Impala uses libhdfs to access HDFS while also using JNI to run other Java code. Impala currently relies on HDFS's getJNIEnv to get a JNIEnv to interact with the process JVM (which is created by HDFS code). It uses this JNIEnv even for code that is not related to HDFS. In recent versions of HDFS, getJNIEnv is no longer visible in libhdfs due to HDFS-7879. In HDFS-8474, the proposed solution was for Impala to write its own equivalent (tracked by IMPALA-2029). After implementing an equivalent of getJNIEnv (heavily based on HDFS code, but with distinct names), we are seeing crashes in hdfsThreadDestructor() in threads that use both HDFS and JNI codepaths. The crash shows up under concurrency and does not reproduce in serial execution. I have distilled it down to a simple testcase that reproduces the issue. It creates a JVM in the main thread (which Impala does at startup), then spawns multiple threads that do basic HDFS and JNI work. I have removed all but the essential steps. This blocks running Impala on any hadoop version past 2.7 (when HDFS-7879 was merged). Note that exposing getJNIEnv should unblock Impala development if a fix is not forthcoming. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12573) Divide the total block metrics into replica and ec
[ https://issues.apache.org/jira/browse/HDFS-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199419#comment-16199419 ] Hudson commented on HDFS-12573: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13064 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13064/]) HDFS-12573. Divide the total blocks metrics into replicated and erasure (manojpec: rev 78af6cdc5359404139665d81447f28d26b7bb43b) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/ReplicatedBlocksMBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/ECBlockGroupsMBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java > Divide the total block metrics into replica and ec > -- > > Key: HDFS-12573 > URL: https://issues.apache.org/jira/browse/HDFS-12573 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, metrics, namenode >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Fix For: 3.0.0 > > Attachments: HDFS-12573.1.patch, HDFS-12573.2.patch, > HDFS-12573.3.patch > > > Following HDFS-10999, let's separate total blocks metrics. It would be useful > for administrators. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
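As a rough picture of what the committed split does, separate counters for replicated blocks and EC block groups can be kept side by side, with the headline total derived from their sum. This is a purely illustrative sketch; the field and method names are hypothetical, not the actual FSNamesystem/BlocksMap code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of block-count metrics split by type; the names here are
// illustrative, not the actual FSNamesystem or BlocksMap members.
public class BlockMetricsSketch {
    private final AtomicLong replicatedBlocks = new AtomicLong();
    private final AtomicLong ecBlockGroups = new AtomicLong();

    public void addReplicated(long n) { replicatedBlocks.addAndGet(n); }
    public void addEcGroups(long n) { ecBlockGroups.addAndGet(n); }

    public long getReplicatedBlocks() { return replicatedBlocks.get(); }
    public long getEcBlockGroups() { return ecBlockGroups.get(); }

    // The headline "total blocks" figure stays available as the sum,
    // so existing dashboards keep working while the split is exposed.
    public long getTotalBlocks() {
        return replicatedBlocks.get() + ecBlockGroups.get();
    }
}
```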
[jira] [Commented] (HDFS-12572) Ozone: OzoneFileSystem: delete/list status/rename/mkdir APIs
[ https://issues.apache.org/jira/browse/HDFS-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199413#comment-16199413 ] Xiaoyu Yao commented on HDFS-12572: --- Thanks [~msingh] for the update. Patch v6 looks good to me, +1. I will commit it shortly. > Ozone: OzoneFileSystem: delete/list status/rename/mkdir APIs > > > Key: HDFS-12572 > URL: https://issues.apache.org/jira/browse/HDFS-12572 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12572-HDFS-7240.001.patch, > HDFS-12572-HDFS-7240.002.patch, HDFS-12572-HDFS-7240.003.patch, > HDFS-12572-HDFS-7240.004.patch, HDFS-12572-HDFS-7240.005.patch, > HDFS-12572-HDFS-7240.006.patch > > > This jira will add the delete/list status/rename/mkdir APIs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12573) Divide the total block metrics into replica and ec
[ https://issues.apache.org/jira/browse/HDFS-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12573: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Thanks for the patch contribution [~tasanuma0829]. Committed to trunk. > Divide the total block metrics into replica and ec > -- > > Key: HDFS-12573 > URL: https://issues.apache.org/jira/browse/HDFS-12573 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, metrics, namenode >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Fix For: 3.0.0 > > Attachments: HDFS-12573.1.patch, HDFS-12573.2.patch, > HDFS-12573.3.patch > > > Following HDFS-10999, let's separate total blocks metrics. It would be useful > for administrators. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
[ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12612: - Status: Patch Available (was: Open) > DFSStripedOutputStream#close will throw if called a second time with a failed > streamer > -- > > Key: HDFS-12612 > URL: https://issues.apache.org/jira/browse/HDFS-12612 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12612.00.patch > > > Found while testing with Hive. We have a cluster with 2 DNs and the XOR-2-1 > policy. If you write a file and call close() twice, it throws this exception: > {noformat} > 17/10/04 16:02:14 WARN hdfs.DFSOutputStream: Cannot allocate parity > block(index=2, policy=XOR-2-1-1024k). Not enough datanodes? Exclude nodes=[] > ... > Caused by: java.io.IOException: Failed to get parity block, index=2 > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:500) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:524) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] 
> {noformat} > This is because in DFSStripedOutputStream#closeImpl, if the stream is closed, > we throw an exception if any of the striped streamers had an exception: > {code} > protected synchronized void closeImpl() throws IOException { > if (isClosed()) { > final MultipleIOException.Builder b = new MultipleIOException.Builder(); > for(int i = 0; i < streamers.size(); i++) { > final StripedDataStreamer si = getStripedDataStreamer(i); > try { > si.getLastException().check(true); > } catch (IOException e) { > b.add(e); > } > } > final IOException ioe = b.build(); > if (ioe != null) { > throw ioe; > } > return; > } > {code} > I think this is incorrect, since we only need to throw in this situation if > we have too many failed streamers. close should also be idempotent, so it > should throw the first time we call close if it's going to throw at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
[ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12612: - Attachment: HDFS-12612.00.patch Adds a test to verify the bug, and only logs remaining IOEs from {{streams}} after close() has been called. > DFSStripedOutputStream#close will throw if called a second time with a failed > streamer > -- > > Key: HDFS-12612 > URL: https://issues.apache.org/jira/browse/HDFS-12612 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12612.00.patch > > > Found while testing with Hive. We have a cluster with 2 DNs and the XOR-2-1 > policy. If you write a file and call close() twice, it throws this exception: > {noformat} > 17/10/04 16:02:14 WARN hdfs.DFSOutputStream: Cannot allocate parity > block(index=2, policy=XOR-2-1-1024k). Not enough datanodes? Exclude nodes=[] > ... > Caused by: java.io.IOException: Failed to get parity block, index=2 > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:500) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:524) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] 
> {noformat} > This is because in DFSStripedOutputStream#closeImpl, if the stream is closed, > we throw an exception if any of the striped streamers had an exception: > {code} > protected synchronized void closeImpl() throws IOException { > if (isClosed()) { > final MultipleIOException.Builder b = new MultipleIOException.Builder(); > for(int i = 0; i < streamers.size(); i++) { > final StripedDataStreamer si = getStripedDataStreamer(i); > try { > si.getLastException().check(true); > } catch (IOException e) { > b.add(e); > } > } > final IOException ioe = b.build(); > if (ioe != null) { > throw ioe; > } > return; > } > {code} > I think this is incorrect, since we only need to throw in this situation if > we have too many failed streamers. close should also be idempotent, so it > should throw the first time we call close if it's going to throw at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12553: -- Attachment: HDFS-12553.10.patch > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199354#comment-16199354 ] Ajay Kumar commented on HDFS-12627: --- [~arpitagarwal], thanks for having a look. Added fix for both classes in patch v2. > Typo in DFSAdmin > > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199353#comment-16199353 ] Arpit Agarwal commented on HDFS-12627: -- +1 pending Jenkins. > Typo in DFSAdmin > > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12627: -- Attachment: HDFS-12627.02.patch > Typo in DFSAdmin > > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Attachments: HDFS-12627.01.patch, HDFS-12627.02.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-206) Support for head in FSShell
[ https://issues.apache.org/jira/browse/HDFS-206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199333#comment-16199333 ] Hadoop QA commented on HDFS-206: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 21s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 33s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}203m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-206 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891300/HDFS-206.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 16152ca0ddce 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ec8bf9e | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Commented] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199323#comment-16199323 ] Arpit Agarwal commented on HDFS-12627: -- Thanks [~ajayydv]. The fix looks good. You'll also need to fix the following test files: # hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java # hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml > Typo in DFSAdmin > > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Attachments: HDFS-12627.01.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12627: -- Status: Patch Available (was: Open) > Typo in DFSAdmin > > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Attachments: HDFS-12627.01.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12627) Typo in DFSAdmin
[ https://issues.apache.org/jira/browse/HDFS-12627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12627: -- Attachment: HDFS-12627.01.patch > Typo in DFSAdmin > > > Key: HDFS-12627 > URL: https://issues.apache.org/jira/browse/HDFS-12627 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Trivial > Attachments: HDFS-12627.01.patch > > > Typo in DFSAdmin: > System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199313#comment-16199313 ] Arpit Agarwal commented on HDFS-12542: -- Thanks for the contribution [~ajayydv]. The patch looks good, minor comments: # HttpFSFileSystem.java:1026 - duplicated line. # HttpFSFileSystem.java - Can you also address this pre-existing typo in the same javadoc you updated (patch should be path)? {code} @return the statuses of the files/directories in the given patch {code} # {{@throws IOException thrown if ...}} - thrown is unnecessary. # WebHDFSFileSystem.java:1505 - typo. patch should be path. > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12542.01.patch, HDFS-12542.02.patch > > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-7174) Support for more efficient large directories
[ https://issues.apache.org/jira/browse/HDFS-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199310#comment-16199310 ] Hadoop QA commented on HDFS-7174: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s{color} | {color:red} HDFS-7174 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-7174 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12672576/HDFS-7174.new.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21623/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support for more efficient large directories > > > Key: HDFS-7174 > URL: https://issues.apache.org/jira/browse/HDFS-7174 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Critical > Labels: BB2015-05-TBR > Attachments: HDFS-7174.new.patch, HDFS-7174.patch, HDFS-7174.patch > > > When the number of children under a directory grows very large, insertion > becomes very costly. E.g. creating 1M entries takes 10s of minutes. This is > because the complexity of an insertion is O\(n\). As the size of a list > grows, the overhead grows n^2. (integral of linear function). It also causes > allocations and copies of big arrays. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12619) Do not catch and throw unchecked exceptions if IBRs fail to process
[ https://issues.apache.org/jira/browse/HDFS-12619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199308#comment-16199308 ] Xiao Chen commented on HDFS-12619: -- +1, thanks [~jojochuang]! > Do not catch and throw unchecked exceptions if IBRs fail to process > --- > > Key: HDFS-12619 > URL: https://issues.apache.org/jira/browse/HDFS-12619 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Minor > Attachments: HDFS-12619.001.patch > > > HDFS-9198 added the following code > {code:title=BlockManager#processIncrementalBlockReport} > public void processIncrementalBlockReport(final DatanodeID nodeID, > final StorageReceivedDeletedBlocks srdb) throws IOException { > ... > try { > processIncrementalBlockReport(node, srdb); > } catch (Exception ex) { > node.setForceRegistration(true); > throw ex; > } > } > {code} > In Apache Hadoop 2.7.x ~ 3.0, the code snippet is accepted by Java compiler. > However, when I attempted to backport it to a CDH5.3 release (based on Apache > Hadoop 2.5.0), the compiler complains the exception is unhandled, because the > method defines it throws IOException instead of Exception. > While the code compiles for Apache Hadoop 2.7.x ~ 3.0, I feel it is not a > good practice to catch an unchecked exception and then rethrow it. How about > rewriting it with a finally block and a conditional variable? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
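The finally-block rewrite suggested in the description above can be sketched as follows. This is a hedged, standalone illustration: the Node type and the processReport body are stand-ins for the real BlockManager internals, not the actual API. A success flag plus a finally block marks the node for re-registration on any failure, without catching and rethrowing an exception the method's throws clause cannot name.

```java
import java.io.IOException;

// Illustrative sketch of the success-flag + finally pattern: the node is
// flagged for forced re-registration whenever processing fails, whether the
// failure is a checked IOException or an unchecked exception.
public class IbrRewrite {

  // Stand-in for the real DatanodeDescriptor.
  static class Node {
    boolean forceRegistration = false;
    void setForceRegistration(boolean b) { forceRegistration = b; }
  }

  // Stand-in for the real IBR processing; fails on demand for the example.
  static void processReport(boolean fail) throws IOException {
    if (fail) {
      throw new IOException("IBR processing failed");
    }
  }

  public static void processIncrementalBlockReport(Node node, boolean fail)
      throws IOException {
    boolean success = false;
    try {
      processReport(fail);
      success = true;
    } finally {
      // Runs on any exit path; no catch/rethrow of an unchecked exception.
      if (!success) {
        node.setForceRegistration(true);
      }
    }
  }
}
```

Because nothing is caught, the original exception propagates unchanged, and the method compiles against an older signature that declares only IOException.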
[jira] [Created] (HDFS-12627) Typo in DFSAdmin
Ajay Kumar created HDFS-12627: - Summary: Typo in DFSAdmin Key: HDFS-12627 URL: https://issues.apache.org/jira/browse/HDFS-12627 Project: Hadoop HDFS Issue Type: Bug Reporter: Ajay Kumar Assignee: Ajay Kumar Priority: Trivial Typo in DFSAdmin: System.out.println("Allowing *snaphot* on " + argv[1] + " succeeded"); -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12626) Ozone : delete open key entries that will no longer be closed
[ https://issues.apache.org/jira/browse/HDFS-12626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199303#comment-16199303 ] Chen Liang commented on HDFS-12626: --- There are definitely different ways to handle this, but the simpler solution I'm thinking of is to have a thread that periodically checks all the open key entries; if an entry has been there for a very long time, say more than X hours, we treat it as a dead entry and remove it. The tricky thing is what X should be: if X is too small, the client might still be writing after X hours, so X should be longer than the time any single key write would take. Here I'm thinking of something like X = 24 hours, because I don't see a use case where a single key write would take that long (maybe I'm wrong). Also, since client crashes are presumably relatively rare, the number of dead entries shouldn't be too large, so it should be okay to reclaim them only once a day. Any thoughts? [~xyao] [~anu] > Ozone : delete open key entries that will no longer be closed > - > > Key: HDFS-12626 > URL: https://issues.apache.org/jira/browse/HDFS-12626 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > > HDFS-12543 introduced the notion of "open key" where when a key is opened, an > open key entry gets persisted, only after client calls a close will this > entry be made visible. One issue is that if the client does not call close > (e.g. failed), then that open key entry will never be deleted from meta data. > This JIRA tracks this issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
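The periodic-scan idea in the comment above can be sketched as a small reaper. This is a hedged illustration under stated assumptions: the map-of-timestamps shape, the class name, and the injected clock are all invented for the example; the real KSM metadata store and the eventual patch may look quite different. One pass of the background thread removes entries older than a configurable expiry (e.g. 24 hours).

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: open-key entries older than expiryMillis are
// treated as dead and removed on each periodic pass. Timestamps are passed in
// explicitly so the logic is testable without a real clock.
public class OpenKeyReaper {
  private final Map<String, Long> openKeys = new ConcurrentHashMap<>(); // key -> open-time millis
  private final long expiryMillis;

  public OpenKeyReaper(long expiryMillis) {
    this.expiryMillis = expiryMillis;
  }

  public void openKey(String name, long nowMillis) {
    openKeys.put(name, nowMillis);
  }

  public void closeKey(String name) {
    openKeys.remove(name);
  }

  public boolean isOpen(String name) {
    return openKeys.containsKey(name);
  }

  /** One pass of the background thread: drop entries older than the expiry. */
  public int reap(long nowMillis) {
    int removed = 0;
    for (Iterator<Map.Entry<String, Long>> it = openKeys.entrySet().iterator();
         it.hasNext();) {
      if (nowMillis - it.next().getValue() > expiryMillis) {
        it.remove();   // ConcurrentHashMap iterators support safe removal
        removed++;
      }
    }
    return removed;
  }
}
```

A scheduled executor could call reap(System.currentTimeMillis()) once a day, matching the "reclaim only once every day" suggestion.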
[jira] [Created] (HDFS-12626) Ozone : delete open key entries that will no longer be closed
Chen Liang created HDFS-12626: - Summary: Ozone : delete open key entries that will no longer be closed Key: HDFS-12626 URL: https://issues.apache.org/jira/browse/HDFS-12626 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Chen Liang Assignee: Chen Liang HDFS-12543 introduced the notion of "open key" where when a key is opened, an open key entry gets persisted, only after client calls a close will this entry be made visible. One issue is that if the client does not call close (e.g. failed), then that open key entry will never be deleted from meta data. This JIRA tracks this issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12625) Reduce expense of deleting large directories
[ https://issues.apache.org/jira/browse/HDFS-12625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated HDFS-12625: -- Release Note: (was: Deletion of ~5M files on a large cluster jammed the NN for 52 seconds. The call queue overflowed and began rejecting clients. 14k calls were queued for which most clients timed out while the NN was hung. Tasks issuing the calls likely failed.) > Reduce expense of deleting large directories > > > Key: HDFS-12625 > URL: https://issues.apache.org/jira/browse/HDFS-12625 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.9.0, 2.8.1, 3.1.0 >Reporter: Eric Payne > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12625) Reduce expense of deleting large directories
[ https://issues.apache.org/jira/browse/HDFS-12625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated HDFS-12625: -- Description: Deletion of ~5M files on a large cluster jammed the NN for 52 seconds. The call queue overflowed and began rejecting clients. 14k calls were queued for which most clients timed out while the NN was hung. Tasks issuing the calls likely failed. > Reduce expense of deleting large directories > > > Key: HDFS-12625 > URL: https://issues.apache.org/jira/browse/HDFS-12625 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.9.0, 2.8.1, 3.1.0 >Reporter: Eric Payne > > Deletion of ~5M files on a large cluster jammed the NN for 52 seconds. The > call queue overflowed and began rejecting clients. 14k calls were queued for > which most clients timed out while the NN was hung. Tasks issuing the calls > likely failed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12625) Reduce expense of deleting large directories
[ https://issues.apache.org/jira/browse/HDFS-12625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199264#comment-16199264 ] Eric Payne commented on HDFS-12625: --- Features like snapshots have increased the difficulty to redesign deletes to be more asynchronous. A large delete should be profiled to target areas for optimization. Perhaps a max limit on files that can be deleted at once may mitigate issues. > Reduce expense of deleting large directories > > > Key: HDFS-12625 > URL: https://issues.apache.org/jira/browse/HDFS-12625 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.9.0, 2.8.1, 3.1.0 >Reporter: Eric Payne > > Deletion of ~5M files on a large cluster jammed the NN for 52 seconds. The > call queue overflowed and began rejecting clients. 14k calls were queued for > which most clients timed out while the NN was hung. Tasks issuing the calls > likely failed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
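The "max limit on files deleted at once" idea floated above could take the shape of batched deletion, sketched below. Everything here is hypothetical: the real NameNode would batch block removal under FSNamesystem's write lock, while this standalone toy just counts how many lock acquisitions a batch size implies.

```java
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: drain a large delete in bounded batches, releasing
// the (simulated) lock between batches so queued callers can make progress.
public class BatchedDelete {

  /** Deletes all blocks, at most batchSize per lock hold; returns lock holds. */
  public static int deleteInBatches(Deque<String> blocks, int batchSize,
      List<String> deleted) {
    int lockHolds = 0;
    while (!blocks.isEmpty()) {
      lockHolds++;                              // writeLock() in the real NameNode
      for (int i = 0; i < batchSize && !blocks.isEmpty(); i++) {
        deleted.add(blocks.poll());             // remove one block under the lock
      }
      // writeUnlock() here would let queued RPCs run before the next batch,
      // bounding how long any single lock hold can jam the call queue.
    }
    return lockHolds;
  }
}
```

The trade-off is more lock acquisitions in exchange for a bounded worst-case hold time, which is exactly what a 52-second single hold lacks.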
[jira] [Created] (HDFS-12625) Reduce expense of deleting large directories
Eric Payne created HDFS-12625: - Summary: Reduce expense of deleting large directories Key: HDFS-12625 URL: https://issues.apache.org/jira/browse/HDFS-12625 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.8.1, 2.9.0, 3.1.0 Reporter: Eric Payne -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12585: -- Attachment: HDFS-12585-HDFS-7240.02.patch patch v2 to address checkstyle and findbug warnings. > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch > > > Add description for each config in Ozone config UI -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199221#comment-16199221 ] Chen Liang commented on HDFS-12519: --- This is actually quite complex work, and it seems a lot of thought has been put into it; thanks [~nandakumar131] for working on this! Some comments: 1. {{LeaseManager}}: how about adding a heap in addition to activeLeases (e.g. a {{PriorityQueue}} with the timeout as the comparison key)? This way, acquire and release change from O(1) to O(log n) operations, but {{LeaseMonitor}} no longer needs to go through the entire activeLeases map all the time; it only needs to pop all the expired entries from the top of the heap and then look at the new top to determine how long to sleep, going from O(n) to O(log n) (for the expired entries) and O(1) (for determining the sleep time). What do you think? 2. {{LeaseManager#LeaseMonitor#run}}: about the interrupt in acquire, can the following case happen? a. the lease monitor thread wakes up and is doing the timeout check (the for loop); b. at the same time another acquire call comes in, which calls leaseMonitorThread.interrupt(). If this can happen, will the lease monitor thread get stopped? It seems the catch of InterruptedException covers only an interrupt during sleep, not an interrupt during the lease-checking loop. Also, if {{acquire}} can interrupt leaseMonitor while it is doing the check, then whenever leaseMonitor is checking timeouts an acquire call may come in and interrupt it, in which case LeaseMonitor may never be able to walk through all leases and remove all the expired ones. But since this seems to be an extreme corner case that only happens with a large number of lease acquire calls, I'm fine with not handling it for now... 
> Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch, HDFS-12519-HDFS-7240.002.patch > > > Many objects, including Containers and pipelines can time out during creating > process. We need a way to track these timeouts. This lease Manager allows SCM > to hold a lease on these objects and helps SCM timeout waiting for creating > of these objects. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
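The heap idea in comment 1 above can be sketched as follows. This is a hedged illustration, not the actual SCM LeaseManager: the Lease holder and method names are invented, and expiry times are passed in explicitly for testability. With a min-heap ordered by expiry, the monitor pops only the expired entries from the top and then reads the new top to know how long to sleep.

```java
import java.util.List;
import java.util.PriorityQueue;

// Illustrative sketch only: leases kept in a min-heap keyed by expiry time,
// so a monitor pass is O(k log n) for k expired leases plus O(1) to compute
// the next sleep interval, instead of O(n) over the whole lease map.
public class LeaseHeap {

  public static class Lease {
    final String holder;
    final long expiryMillis;
    Lease(String holder, long expiryMillis) {
      this.holder = holder;
      this.expiryMillis = expiryMillis;
    }
  }

  private final PriorityQueue<Lease> heap =
      new PriorityQueue<>((a, b) -> Long.compare(a.expiryMillis, b.expiryMillis));

  /** O(log n) insert, versus O(1) for a plain map entry. */
  public void acquire(String holder, long expiryMillis) {
    heap.add(new Lease(holder, expiryMillis));
  }

  /**
   * One monitor pass: pop every expired lease into expiredOut, then return
   * how long the monitor may sleep until the next expiry.
   */
  public long expireAndSleepTime(long nowMillis, List<String> expiredOut) {
    while (!heap.isEmpty() && heap.peek().expiryMillis <= nowMillis) {
      expiredOut.add(heap.poll().holder);   // only expired entries are touched
    }
    return heap.isEmpty() ? Long.MAX_VALUE
                          : heap.peek().expiryMillis - nowMillis;
  }
}
```

A release() would need a removal from the heap (O(n) in java.util.PriorityQueue, or lazy deletion with tombstones), which is one of the trade-offs the comment alludes to.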
[jira] [Commented] (HDFS-12593) Ozone: update Ratis to the latest snapshot
[ https://issues.apache.org/jira/browse/HDFS-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16199189#comment-16199189 ] Mukul Kumar Singh commented on HDFS-12593: -- Thanks for the updated patch [~szetszwo]. The latest patch looks good to me. +1, pending jenkins. > Ozone: update Ratis to the latest snapshot > -- > > Key: HDFS-12593 > URL: https://issues.apache.org/jira/browse/HDFS-12593 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-12593-HDFS-7240.20171005.patch, > HDFS-12593-HDFS-7240.20171006.patch, HDFS-12593-HDFS-7240.20171008.patch, > HDFS-12593-HDFS-7240.20171008b.patch, HDFS-12593-HDFS-7240.20171009.patch, > HDFS-12593-HDFS-7240.20171011.patch > > > Apache Ratis has quite a few bug fixes in the latest snapshot (7a5c3ea). Let > update to it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org