[jira] [Updated] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12547: Status: Patch Available (was: Open) > Extend TestQuotaWithStripedBlocks with a random EC policy > - > > Key: HDFS-12547 > URL: https://issues.apache.org/jira/browse/HDFS-12547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12547.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12547: Attachment: HDFS-12547.1.patch Uploaded the initial patch. I confirmed that the unit tests pass for all EC policies on my local machine. > Extend TestQuotaWithStripedBlocks with a random EC policy > - > > Key: HDFS-12547 > URL: https://issues.apache.org/jira/browse/HDFS-12547 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12547.1.patch > >
[jira] [Commented] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180268#comment-16180268 ] Hadoop QA commented on HDFS-12495: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}132m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.server.datanode.TestDataNodeMetrics | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12495 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888989/HDFS-12495.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 71a7f48da142 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a2b31e3 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21353/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21353/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21353/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently > -- > > Key: HDFS-12495 > URL: https://issues.apache.org/jira/browse/HDFS-12495 >
[jira] [Commented] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180266#comment-16180266 ] Hadoop QA commented on HDFS-11613: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-client generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-11613 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889001/HDFS-11613-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1fff6146fcf9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 087c69b | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/21354/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21354/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21354/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop
[jira] [Created] (HDFS-12547) Extend TestQuotaWithStripedBlocks with a random EC policy
Takanobu Asanuma created HDFS-12547: --- Summary: Extend TestQuotaWithStripedBlocks with a random EC policy Key: HDFS-12547 URL: https://issues.apache.org/jira/browse/HDFS-12547 Project: Hadoop HDFS Issue Type: Sub-task Components: erasure-coding, test Reporter: Takanobu Asanuma Assignee: Takanobu Asanuma
[jira] [Updated] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-11613: - Attachment: HDFS-11613-HDFS-7240.001.patch > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > Attachments: HDFS-11613-HDFS-7240.001.patch > > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This jira > tracks cleaning up all Findbugs issues under ozone.
[jira] [Updated] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-11613: - Status: Patch Available (was: Reopened) > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > Attachments: HDFS-11613-HDFS-7240.001.patch > > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This jira > tracks cleaning up all Findbugs issues under ozone.
[jira] [Reopened] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh reopened HDFS-11613: -- Found a findbugs issue; reopening. > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This jira > tracks cleaning up all Findbugs issues under ozone.
[jira] [Assigned] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh reassigned HDFS-11613: Assignee: Mukul Kumar Singh (was: Shashikant Banerjee) > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Mukul Kumar Singh >Priority: Blocker > Labels: ozoneMerge > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This jira > tracks cleaning up all Findbugs issues under ozone.
[jira] [Commented] (HDFS-11613) Ozone: Cleanup findbugs issues
[ https://issues.apache.org/jira/browse/HDFS-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180243#comment-16180243 ] Mukul Kumar Singh commented on HDFS-11613: -- Found the following findbugs warning in {{RpcClient.java}} {code} Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.ozone.client.rpc.RpcClient.createBucket(String, String, BucketArgs) Bug type BX_UNBOXING_IMMEDIATELY_REBOXED In class org.apache.hadoop.ozone.client.rpc.RpcClient In method org.apache.hadoop.ozone.client.rpc.RpcClient.createBucket(String, String, BucketArgs) Called method Boolean.valueOf(boolean) At RpcClient.java:[line 266] {code} > Ozone: Cleanup findbugs issues > -- > > Key: HDFS-11613 > URL: https://issues.apache.org/jira/browse/HDFS-11613 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Shashikant Banerjee >Priority: Blocker > Labels: ozoneMerge > > Some of the ozone checkins happened before Findbugs started running on test > files. This will cause issues when we attempt to merge with trunk. This jira > tracks cleaning up all Findbugs issues under ozone.
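The BX_UNBOXING_IMMEDIATELY_REBOXED warning quoted above is easy to reproduce in isolation. The sketch below is a hypothetical reduction — the class, field, and method names are illustrative, not the actual {{RpcClient}} code: mixing a boxed {{Boolean}} with a primitive {{boolean}} in a ternary forces an auto-unbox, and wrapping the result in {{Boolean.valueOf}} immediately re-boxes it.

```java
// Hypothetical reduction of the flagged pattern; names are illustrative,
// not taken from the actual org.apache.hadoop.ozone.client.rpc.RpcClient.
public class UnboxReboxDemo {
    private static Boolean versioning = Boolean.TRUE;

    // Trigger: the ternary mixes Boolean and boolean, so the compiler
    // auto-unboxes 'versioning', and Boolean.valueOf(boolean) immediately
    // re-boxes the result -- findbugs reports BX_UNBOXING_IMMEDIATELY_REBOXED.
    static Boolean flagged() {
        return Boolean.valueOf(versioning == null ? false : versioning);
    }

    // Fix: keep the value boxed throughout; no unbox/rebox round trip.
    static Boolean fixed() {
        return versioning == null ? Boolean.FALSE : versioning;
    }

    public static void main(String[] args) {
        System.out.println(flagged() + " " + fixed());
    }
}
```

Both variants return the same value; the fix only removes the needless boxing round trip.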
[jira] [Commented] (HDFS-12545) Autotune NameNode RPC handler threads according to number of datanodes in cluster
[ https://issues.apache.org/jira/browse/HDFS-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180230#comment-16180230 ] Mukul Kumar Singh commented on HDFS-12545: -- Thanks for raising this issue, [~ajayydv]. In case the namenode service RPC is enabled, would this fix try to tune the service-rpc port as well? > Autotune NameNode RPC handler threads according to number of datanodes in > cluster > - > > Key: HDFS-12545 > URL: https://issues.apache.org/jira/browse/HDFS-12545 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > > Autotune NameNode RPC handler threads according to number of datanodes in > cluster. Currently rpc handlers are controlled by > {{dfs.namenode.handler.count}} on cluster start. Jira is to discuss best way > to auto tune it according to no of datanodes. Updating this to > {{max(dfs.namenode.handler.count, min(200,20 * log2(no of datanodes)))}} on > NameNode start is one possible way.
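For reference, the heuristic quoted in the issue description can be sketched as follows. This is only the formula under discussion in the JIRA, not shipped NameNode behavior, and the guard for clusters smaller than two datanodes is an assumption of this sketch.

```java
// Sketch of the proposed handler-count heuristic:
// max(dfs.namenode.handler.count, min(200, 20 * log2(datanodes))).
public class HandlerCountHeuristic {
    static int autotune(int configuredHandlerCount, int datanodes) {
        if (datanodes < 2) {
            // Assumption: fall back to the configured value for tiny clusters,
            // where log2 would be zero or undefined.
            return configuredHandlerCount;
        }
        double log2 = Math.log(datanodes) / Math.log(2);
        // Round before capping so floating-point noise in log2 cannot
        // drop the derived value below the intended integer.
        long derived = Math.min(200L, Math.round(20 * log2));
        return (int) Math.max(configuredHandlerCount, derived);
    }

    public static void main(String[] args) {
        // 1024 datanodes: log2 = 10, so min(200, 200) = 200 handlers.
        System.out.println(autotune(10, 1024));
    }
}
```

Note the formula never lowers an explicitly configured handler count; it only raises it as the cluster grows, capped at 200.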
[jira] [Commented] (HDFS-12524) Ozone: Record number of keys scanned and hinted for getRangeKVs call
[ https://issues.apache.org/jira/browse/HDFS-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180228#comment-16180228 ] Anu Engineer commented on HDFS-12524: - [~cheersyang] Thanks for the explanation, makes sense. +1. Please feel free to commit. > Ozone: Record number of keys scanned and hinted for getRangeKVs call > > > Key: HDFS-12524 > URL: https://issues.apache.org/jira/browse/HDFS-12524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: logging, ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Labels: logging, ozoneMerge, performance > Attachments: HDFS-12524-HDFS-7240.001.patch > > > Add debug logging to record number of keys scanned and hinted for > {{getRangeKVs}} calls, this will be helpful to debug performance issues since > {{getRangeKVs}} is often the place where we get the lag.
[jira] [Commented] (HDFS-12525) Ozone: OzoneClient: Verify bucket/volume name in create calls
[ https://issues.apache.org/jira/browse/HDFS-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180158#comment-16180158 ] Yiqun Lin commented on HDFS-12525: -- Thanks for the comment, [~nandakumar131]. The comment makes sense to me. One thing I am wondering: is removing the old {{OzoneRestClient}} planned as part of the ozone merge work? If not, I think it would be better to reuse the function and make a minor change as [~anu] commented. That will keep the code clearer. {noformat} public static void verifyResourceName(String resName) throws IllegalArgumentException { OzoneClientUtils.verifyResourceName(resName); } {noformat} > Ozone: OzoneClient: Verify bucket/volume name in create calls > - > > Key: HDFS-12525 > URL: https://issues.apache.org/jira/browse/HDFS-12525 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12525-HDFS-7240.000.patch, > HDFS-12525-HDFS-7240.000.patch > > > The new OzoneClient API has to verify bucket/volume name during creation > call. Volume/Bucket name shouldn't support any special characters other than {{.}} > and {{-}}.
[jira] [Comment Edited] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180147#comment-16180147 ] Ajay Kumar edited comment on HDFS-12455 at 9/26/17 3:09 AM: test failures are not related. was (Author: ajayydv): test cases are not related. > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well.
[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180147#comment-16180147 ] Ajay Kumar commented on HDFS-12455: --- test cases are not related. > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well.
[jira] [Updated] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12495: - Attachment: HDFS-12495.002.patch It seems Jenkins can run now; attaching the same patch to trigger a new run. > TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently > -- > > Key: HDFS-12495 > URL: https://issues.apache.org/jira/browse/HDFS-12495 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2 >Reporter: Eric Badger >Assignee: Eric Badger > Labels: flaky-test > Attachments: HDFS-12495.001.patch, HDFS-12495.002.patch > > > {noformat} > java.net.BindException: Problem binding to [localhost:36701] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at org.apache.hadoop.ipc.Server.bind(Server.java:546) > at org.apache.hadoop.ipc.Server$Listener.(Server.java:955) > at org.apache.hadoop.ipc.Server.(Server.java:2655) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499) > at >
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152) > at > org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175) > {noformat}
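The {{BindException}} in the quoted stack trace is a fixed-port collision during datanode restart. A common way to sidestep this class of test flakiness — sketched below as a general technique, not the HDFS-12495 patch itself — is to bind to port 0 and let the OS assign a free port:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    // Ask the OS for any free port (port 0) instead of hard-coding one,
    // which avoids "Address already in use" races between test runs.
    static int bindToFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort(); // the port the OS actually assigned
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("bound to port " + bindToFreePort());
    }
}
```

Note that a restart scenario like MiniDFSCluster's is harder: the restarted daemon must reuse its previous port, so even an ephemeral-port strategy leaves a window where the OS has not yet released the socket, which is why such tests often also need a retry.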
[jira] [Assigned] (HDFS-12540) Ozone: node status text reported by SCM is a bit confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang reassigned HDFS-12540: -- Assignee: Weiwei Yang > Ozone: node status text reported by SCM is a bit confusing > -- > > Key: HDFS-12540 > URL: https://issues.apache.org/jira/browse/HDFS-12540 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Trivial > > At present SCM UI displays node status like following > {noformat} > Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 > nodes have reported in. > {noformat} > this text is a bit confusing. UI retrieves status from > {{SCMNodeManager#getNodeStatus}}, related call is {{#getChillModeStatus}}.
[jira] [Updated] (HDFS-12540) Ozone: node status text reported by SCM is a bit confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12540: --- Summary: Ozone: node status text reported by SCM is a bit confusing (was: Ozone: node status text reported by SCM is a confusing) > Ozone: node status text reported by SCM is a bit confusing > -- > > Key: HDFS-12540 > URL: https://issues.apache.org/jira/browse/HDFS-12540 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Priority: Trivial > > At present SCM UI displays node status like following > {noformat} > Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 > nodes have reported in. > {noformat} > this text is a bit confusing. UI retrieves status from > {{SCMNodeManager#getNodeStatus}}, related call is {{#getChillModeStatus}}.
[jira] [Commented] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180112#comment-16180112 ] Ajay Kumar commented on HDFS-12407: --- [~arpitagarwal] thanks for the review and commit. > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 2.9.0, 3.0.0-beta1 > > Attachments: HDFS-12407.01.patch, HDFS-12407.02.patch, > HDFS-12407.03.patch, HDFS-12407.04.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServer to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run.
[jira] [Created] (HDFS-12546) Ozone: DB listing operation performance improvement
Weiwei Yang created HDFS-12546: -- Summary: Ozone: DB listing operation performance improvement Key: HDFS-12546 URL: https://issues.apache.org/jira/browse/HDFS-12546 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Weiwei Yang Assignee: Weiwei Yang While investigating HDFS-12506, I found there are several {{getRangeKVs}} calls that can be replaced by {{getSequentialRangeKVs}} to improve the performance. This JIRA is to track these improvements with sufficient tests.
[jira] [Updated] (HDFS-12524) Ozone: Record number of keys scanned and hinted for getRangeKVs call
[ https://issues.apache.org/jira/browse/HDFS-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12524: --- Labels: logging ozoneMerge performance (was: ozoneMerge performance) > Ozone: Record number of keys scanned and hinted for getRangeKVs call > > > Key: HDFS-12524 > URL: https://issues.apache.org/jira/browse/HDFS-12524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: logging, ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Labels: logging, ozoneMerge, performance > Attachments: HDFS-12524-HDFS-7240.001.patch > > > Add debug logging to record number of keys scanned and hinted for > {{getRangeKVs}} calls, this will be helpful to debug performance issues since > {{getRangeKVs}} is often the place where we get the lag.
[jira] [Commented] (HDFS-12353) Modify Dfsuse percent of dfsadmin report inconsistent with Dfsuse percent of datanode reports.
[ https://issues.apache.org/jira/browse/HDFS-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180084#comment-16180084 ] steven-wugang commented on HDFS-12353: -- [~eddyxu] Hi, I have added a test for this patch. Can you help me review it? Thank you very much. > Modify Dfsuse percent of dfsadmin report inconsistent with Dfsuse percent of > datanode reports. > -- > > Key: HDFS-12353 > URL: https://issues.apache.org/jira/browse/HDFS-12353 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: steven-wugang >Assignee: steven-wugang > Attachments: HDFS-12353-1.patch, HDFS-12353-2.patch, > HDFS-12353-3.patch, HDFS-12353-4.patch, HDFS-12353.patch > > > use command "hdfs dfsadmin -report",as follows: > [hdfs@zhd2-3 sbin]$ hdfs dfsadmin -report > Configured Capacity: 157497375621120 (143.24 TB) > Present Capacity: 148541284228197 (135.10 TB) > DFS Remaining: 56467228499968 (51.36 TB) > DFS Used: 92074055728229 (83.74 TB) > DFS Used%: 61.99% > Under replicated blocks: 1 > Blocks with corrupt replicas: 3 > Missing blocks: 0 > Missing blocks (with replication factor 1): 0 > - > Live datanodes (4): > Name: 172.168.129.1:50010 (zhd2-1) > Hostname: zhd2-1 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 23560170107046 (21.43 TB) > Non DFS Used: 609684660058 (567.81 GB) > DFS Remaining: 15204489138176 (13.83 TB) > DFS Used%: 59.84% > DFS Remaining%: 38.62% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 36 > Last contact: Fri Aug 25 10:06:50 CST 2017 > Name: 172.168.129.3:50010 (zhd2-3) > Hostname: zhd2-3 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 23463410242057 (21.34 TB) > Non DFS Used: 620079140343 (577.49 GB) > DFS Remaining: 15290854522880 (13.91 TB) > DFS Used%: 59.59% > DFS Remaining%: 38.83% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0
B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 30 > Last contact: Fri Aug 25 10:06:50 CST 2017 > Name: 172.168.129.4:50010 (zhd2-4) > Hostname: zhd2-4 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 23908322375185 (21.74 TB) > Non DFS Used: 618808670703 (576.31 GB) > DFS Remaining: 14847212859392 (13.50 TB) > DFS Used%: 60.72% > DFS Remaining%: 37.71% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 38 > Last contact: Fri Aug 25 10:06:50 CST 2017 > Name: 172.168.129.2:50010 (zhd2-2) > Hostname: zhd2-2 > Decommission Status : Normal > Configured Capacity: 39374343905280 (35.81 TB) > DFS Used: 21142153003941 (19.23 TB) > Non DFS Used: 7107518921819 (6.46 TB) > DFS Remaining: 11124671979520 (10.12 TB) > DFS Used%: 53.70% > DFS Remaining%: 28.25% > Configured Cache Capacity: 60 (5.59 GB) > Cache Used: 0 (0 B) > Cache Remaining: 60 (5.59 GB) > Cache Used%: 0.00% > Cache Remaining%: 100.00% > Xceivers: 22 > Last contact: Fri Aug 25 10:06:50 CST 2017 > The first "DFS Used%" value at the top is DFS Used/Present Capacity, but the "DFS > Used%" value in each of the live datanode reports is DFS Used/Configured Capacity. > The two calculation methods are inconsistent, which may lead to misunderstanding.
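The mismatch described in this report can be checked with simple arithmetic on the cluster-wide figures quoted above (an illustrative sketch, not Hadoop code; the class and method names are made up):

```java
// Illustrates the two inconsistent "DFS Used%" formulas using the
// cluster-wide numbers quoted from the dfsadmin report above.
class DfsUsedPercentDemo {
    static double usedPercent(double dfsUsed, double capacity) {
        return 100.0 * dfsUsed / capacity;
    }

    public static void main(String[] args) {
        double dfsUsed = 92074055728229.0;      // DFS Used (bytes)
        double present = 148541284228197.0;     // Present Capacity (bytes)
        double configured = 157497375621120.0;  // Configured Capacity (bytes)

        // Summary line formula: DFS Used / Present Capacity -> 61.99%
        System.out.printf("vs present:    %.2f%%%n", usedPercent(dfsUsed, present));
        // Per-datanode formula: DFS Used / Configured Capacity -> 58.46%
        System.out.printf("vs configured: %.2f%%%n", usedPercent(dfsUsed, configured));
    }
}
```

With the same DFS Used bytes, the two denominators differ by roughly 3.5 percentage points on this cluster, which is exactly the inconsistency the issue complains about.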
[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180081#comment-16180081 ] Hadoop QA commented on HDFS-12454: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 21s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}168m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine | | | hadoop.hdfs.server.datanode.TestFsDatasetCache | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12454 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888968/HDFS-12454-HDFS-7240.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 35489ae97097 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | |
[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags
[ https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180080#comment-16180080 ] Weiwei Yang commented on HDFS-12513: Thanks for uploading the mockup [~ajayydv]. This is very useful, just like what the Ambari configuration tab offers. Is this an Ozone-specific task? {{ConfServlet}} is in {{hadoop-common}}; are you planning to get this done in {{hadoop-common}} on trunk and then let the ozone branch simply benefit from that? I noticed HDFS-12350 is done on trunk. > Ozone: Create UI page to show Ozone configs by tags > --- > > Key: HDFS-12513 > URL: https://issues.apache.org/jira/browse/HDFS-12513 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: OzoneSettings.png > > > Create UI page to show Ozone configs by tags
[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180069#comment-16180069 ] Hadoop QA commented on HDFS-12386: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 5s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 15s{color} | {color:red} hadoop-hdfs-project generated 1 new + 404 unchanged - 0 fixed = 405 total (was 404) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 31s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 435 unchanged - 0 fixed = 438 total (was 435) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}401m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 9m 38s{color} | {color:red} The patch generated 240 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}454m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement | | | hadoop.hdfs.server.namenode.TestFSEditLogLoader | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing | | | hadoop.hdfs.server.namenode.TestAllowFormat | | | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager | | Timed out junit tests | org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend | | | org.apache.hadoop.hdfs.server.blockmanagement.TestSlowDiskTracker | | | org.apache.hadoop.hdfs.TestSmallBlock | | | org.apache.hadoop.hdfs.TestDFSStartupVersions | | | org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling | | | org.apache.hadoop.hdfs.TestDatanodeRegistration | | | org.apache.hadoop.hdfs.server.namenode.snapshot.TestGetContentSummaryWithSnapshot | | | org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager | | | org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport | |
[jira] [Commented] (HDFS-12524) Ozone: Record number of keys scanned and hinted for getRangeKVs call
[ https://issues.apache.org/jira/browse/HDFS-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180058#comment-16180058 ] Weiwei Yang commented on HDFS-12524: Hi [~anu] That's correct, {{keysScanned}} is the count of how many keys have been scanned in this getRangeKVs call, and {{keysHinted}} is the count of how many keys matched the given prefix. This can help to debug issues when a getRangeKVs call is slow; for example, if the scanned number is much larger than the hinted number, that means there is a perf issue that needs to be fixed. Please let me know if this makes sense to you. Thanks > Ozone: Record number of keys scanned and hinted for getRangeKVs call > > > Key: HDFS-12524 > URL: https://issues.apache.org/jira/browse/HDFS-12524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: logging, ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Labels: ozoneMerge, performance > Attachments: HDFS-12524-HDFS-7240.001.patch > > > Add debug logging to record the number of keys scanned and hinted for > {{getRangeKVs}} calls; this will be helpful to debug performance issues since > {{getRangeKVs}} is often the place where we get the lag.
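The scanned-versus-hinted comparison described in this comment can be sketched as follows (a hypothetical helper for illustration only; the class, method name, and the 10x threshold are assumptions, not part of the actual Ozone patch):

```java
// Hypothetical sketch: flag a getRangeKVs call where far more keys were
// scanned than actually matched the requested prefix (a perf smell).
class RangeKvScanStats {
    // Returns true when the scan walked many more keys than it returned.
    // The 10x threshold is arbitrary and purely illustrative.
    static boolean looksInefficient(long keysScanned, long keysHinted) {
        return keysScanned > 10 * Math.max(1, keysHinted);
    }

    public static void main(String[] args) {
        // Scanning 5000 keys to match only 10 suggests the iterator is
        // walking far too much of the key space for this prefix.
        System.out.println(looksInefficient(5000, 10)); // true
        // Scanning 12 keys to match 10 is close to optimal.
        System.out.println(looksInefficient(12, 10));   // false
    }
}
```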
[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16180057#comment-16180057 ] Hadoop QA commented on HDFS-12543: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 12s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 7s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 4s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 38s{color} | {color:red} hadoop-hdfs-project generated 1 new + 470 unchanged - 1 fixed = 471 total (was 471) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 38 new + 47 unchanged - 9 fixed = 85 total (was 56) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 39s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}148m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption | | | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.ozone.web.TestOzoneRestWithMiniCluster | | | hadoop.hdfs.TestPread | | | hadoop.ozone.ksm.TestChunkStreams | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12543 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888967/HDFS-12543-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux f08d9de871ac 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Created] (HDFS-12545) Autotune NameNode RPC handler threads according to number of datanodes in cluster
Ajay Kumar created HDFS-12545: - Summary: Autotune NameNode RPC handler threads according to number of datanodes in cluster Key: HDFS-12545 URL: https://issues.apache.org/jira/browse/HDFS-12545 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ajay Kumar Assignee: Ajay Kumar Autotune NameNode RPC handler threads according to the number of datanodes in the cluster. Currently RPC handler threads are controlled by {{dfs.namenode.handler.count}} at cluster start. This JIRA is to discuss the best way to auto-tune it according to the number of datanodes. Updating this to {{max(dfs.namenode.handler.count, min(200, 20 * log2(number of datanodes)))}} on NameNode start is one possible way.
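The proposed heuristic can be sketched as follows (illustrative only; the class and method names, the rounding, and the minimum-one-datanode guard are assumptions — the formula itself is quoted from the issue description above):

```java
// A minimal sketch of the proposed NameNode handler-count heuristic:
// max(dfs.namenode.handler.count, min(200, 20 * log2(numDatanodes)))
class HandlerCountHeuristic {
    static int autotunedHandlerCount(int configuredCount, int numDatanodes) {
        int n = Math.max(1, numDatanodes);           // guard against log2(0)
        double log2 = Math.log(n) / Math.log(2.0);
        long scaled = Math.min(200L, Math.round(20.0 * log2));
        return Math.max(configuredCount, (int) scaled);
    }

    public static void main(String[] args) {
        // 1024 datanodes: 20 * log2(1024) = 200, so the cap of 200 applies
        // even when only 10 handlers were configured.
        System.out.println(autotunedHandlerCount(10, 1024)); // 200
        // 8 datanodes: 20 * log2(8) = 60, but a larger configured value wins.
        System.out.println(autotunedHandlerCount(100, 8));   // 100
    }
}
```

Taking the max with the configured value means the heuristic only ever raises the thread count above an operator's explicit setting, never lowers it, which matches the {{max(...)}} shape of the proposal.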
[jira] [Created] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory
Manoj Govindassamy created HDFS-12544: - Summary: SnapshotDiff - support diff generation on any snapshot root descendant directory Key: HDFS-12544 URL: https://issues.apache.org/jira/browse/HDFS-12544 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs Affects Versions: 3.0.0-beta1 Reporter: Manoj Govindassamy Assignee: Manoj Govindassamy {noformat} # hdfs snapshotDiff {noformat} Using the snapshot diff command, we can generate a diff report between any two given snapshots under a snapshot root directory. The command today only accepts a path that is a snapshot root. There are many deployments where the snapshot root is configured at a higher-level directory but the diff report is needed only for a specific directory under the snapshot root. In these cases, the diff report can be filtered for changes pertaining to the directory we are interested in. But when the snapshot root directory is very large, snapshot diff report generation can take minutes even if we only want to know the changes in a small directory. So it would perform much better if the diff report calculation could be limited to the directory of interest instead of the whole snapshot root.
[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1618#comment-1618 ] Hadoop QA commented on HDFS-12455: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 32s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 43s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}193m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestDNS | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12455 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888951/HDFS-12455.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 4fe1cf58dba2 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
[jira] [Commented] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync
[ https://issues.apache.org/jira/browse/HDFS-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179997#comment-16179997 ] Ajay Kumar commented on HDFS-12238: --- [~anu], [~xyao], [~vagarychen], [~msingh] thanks for review. > Ozone: Add valid trace ID check in sendCommandAsync > --- > > Key: HDFS-12238 > URL: https://issues.apache.org/jira/browse/HDFS-12238 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Ajay Kumar > Labels: newbie > Fix For: HDFS-7240 > > Attachments: HDFS-12238-HDFS-7240.01.patch, > HDFS-12238-HDFS-7240.02.patch, HDFS-12238-HDFS-7240.03.patch > > > In the function {{XceiverClientHandler#sendCommandAsync}} we should add a > check > {code} > if(StringUtils.isEmpty(request.getTraceID())) { > throw new IllegalArgumentException("Invalid trace ID"); > } > {code} > To ensure that ozone clients always send a valid trace ID. However, when you > do that a set of current tests that do add a valid trace ID will fail. So we > need to fix these tests too. > {code} > TestContainerMetrics.testContainerMetrics > TestOzoneContainer.testBothGetandPutSmallFile > TestOzoneContainer.testCloseContainer > TestOzoneContainer.testOzoneContainerViaDataNode > TestOzoneContainer.testXcieverClientAsync > TestOzoneContainer.testCreateOzoneContainer > TestOzoneContainer.testDeleteContainer > TestContainerServer.testClientServer > TestContainerServer.testClientServerWithContainerDispatcher > TestKeys.testPutAndGetKeyWithDnRestart > {code} > This is based on a comment from [~vagarychen] in HDFS-11580.
[jira] [Commented] (HDFS-12262) Ozone: KSM: Reduce default handler thread count from 200
[ https://issues.apache.org/jira/browse/HDFS-12262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179994#comment-16179994 ] Ajay Kumar commented on HDFS-12262: --- [~xyao],[~anu], thanks for review and commit. > Ozone: KSM: Reduce default handler thread count from 200 > > > Key: HDFS-12262 > URL: https://issues.apache.org/jira/browse/HDFS-12262 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-12262-HDFS-7240.01.patch > > > KSMConfigKeys#OZONE_KSM_HANDLER_COUNT_DEFAULT is currently 200. It should be > a much smaller value like (20) by default and customized according to the > size of cluster such as the 20Log(N) where N is the size of the cluster.
[jira] [Commented] (HDFS-12221) Replace xerces in XmlEditsVisitor
[ https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179993#comment-16179993 ] Ajay Kumar commented on HDFS-12221: --- [~andrew.wang],[~eddyxu] thanks for review and commit. > Replace xerces in XmlEditsVisitor > -- > > Key: HDFS-12221 > URL: https://issues.apache.org/jira/browse/HDFS-12221 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Ajay Kumar > Fix For: 3.0.0-beta1 > > Attachments: editsStored, fsimage_hdfs-12221.xml, > HDFS-12221.01.patch, HDFS-12221.02.patch, HDFS-12221.03.patch, > HDFS-12221.04.patch, HDFS-12221.05.patch > > > XmlEditsVisitor should use new XML capability in the newer JDK, to make JAR > shading easier (HADOOP-14672)
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Status: Patch Available (was: Open) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.002.patch, HDFS-12386-branch-2.8.001.patch, > HDFS-12386-branch-2.8.002.patch, HDFS-12386.patch > >
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: HDFS-12386-branch-2.002.patch HDFS-12386-branch-2.8.002.patch > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.002.patch, HDFS-12386-branch-2.8.001.patch, > HDFS-12386-branch-2.8.002.patch, HDFS-12386.patch > >
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Status: Open (was: Patch Available) Precommit failed for some unknown reason while processing the branch-2 version of the patch. Will attach the branch-2 and branch-2.8 versions of the patch and resubmit. > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.8.001.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: (was: HDFS-12386-branch-2.002.patch) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.8.001.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: (was: HDFS-12386-branch-2.8.002.patch) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.8.001.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179978#comment-16179978 ] Hadoop QA commented on HDFS-12455: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 34s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 36s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}179m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12455 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888947/HDFS-12455.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 4b0a258809b5 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDFS-12210) Block Storage: volume creation times out while creating 3TB volume because of too many containers
[ https://issues.apache.org/jira/browse/HDFS-12210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179972#comment-16179972 ] Chen Liang commented on HDFS-12210: --- Thanks [~msingh] for taking care of this and thanks [~anu] for the reminder! +1 on the v002 patch, I've committed it to the feature branch, thanks Mukul for the contribution! > Block Storage: volume creation times out while creating 3TB volume because of > too many containers > - > > Key: HDFS-12210 > URL: https://issues.apache.org/jira/browse/HDFS-12210 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12210-HDFS-7240.001.patch, > HDFS-12210-HDFS-7240.002.patch > > > Volume creation times out while creating 3TB volume because of too many > containers > {code} > [hdfs@ctr-e134-1499953498516-64773-01-03 ~]$ > /opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/bin/hdfs cblock -c bilbo disk1 3TB 4 > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > 17/07/28 09:32:40 WARN util.NativeCodeLoader: Unable to load native-hadoop > library for your platform... 
using builtin-java classes where applicable > 17/07/28 09:32:40 INFO cli.CBlockCli: create volume:[bilbo, disk1, 3TB, 4] > 17/07/28 09:33:10 ERROR cli.CBlockCli: java.net.SocketTimeoutException: Call > From ctr-e134-1499953498516-64773-01-03.hwx.site/172.27.51.64 to > 0.0.0.0:9810 failed on socket timeout exception: > java.net.SocketTimeoutException: 3 millis timeout while waiting for > channel to be ready for read. ch : java.nio.channels.SocketChannel[connected > local=/172.27.51.64:59317 remote=/0.0.0.0:9810]; For more details see: > http://wiki.apache.org/hadoop/SocketTimeout > {code} > Looking into the logs it can be seen that 614 containers were > created for the volume before the timeout. > {code} > 2017-07-28 09:32:40,853 INFO org.apache.hadoop.cblock.CBlockManager: Create > volume received: userName: bilbo volumeName: disk1 volumeSize: 3298534883328 > blockSize: 4096 > 2017-07-28 09:32:42,545 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#0 leader:172.27.50.192:9866 machines:[172.27.50.192:9866] > replication factor:1 > 2017-07-28 09:32:43,213 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#1 leader:172.27.51.65:9866 machines:[172.27.51.65:9866] > replication factor:1 > 2017-07-28 09:32:43,484 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#2 leader:172.27.50.192:9866 machines:[172.27.50.192:9866] > replication factor:1 > . > . > . > . 
> 2017-07-28 09:35:01,712 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#612 leader:172.27.50.128:9866 machines:[172.27.50.128:9866] > replication factor:1 > 2017-07-28 09:35:01,963 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#613 leader:172.27.50.128:9866 machines:[172.27.50.128:9866] > replication factor:1 > 2017-07-28 09:35:02,256 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#614 leader:172.27.50.192:9866 machines:[172.27.50.192:9866] > replication factor:1 > 2017-07-28 09:35:02,358 INFO org.apache.hadoop.cblock.CBlockManager: Create > volume received: userName: bilbo volumeName: disk2 volumeSize: 1099511627776 > blockSize: 4096 > 2017-07-28 09:35:02,368 WARN org.apache.hadoop.ipc.Server: IPC Server handler > 0 on 9810, call Call#0 Retry#0 > org.apache.hadoop.cblock.protocolPB.CBlockServiceProtocol.createVolume from > 172.27.51.64:59 > 317: output error > 2017-07-28 09:35:02,369 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 0 on 9810 caught an exception > java.nio.channels.ClosedChannelException > at > sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270) > at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461) > at org.apache.hadoop.ipc.Server.channelWrite(Server.java:3242) > at org.apache.hadoop.ipc.Server.access$1700(Server.java:137) > at >
[jira] [Updated] (HDFS-12210) Block Storage: volume creation times out while creating 3TB volume because of too many containers
[ https://issues.apache.org/jira/browse/HDFS-12210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12210: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Block Storage: volume creation times out while creating 3TB volume because of > too many containers > - > > Key: HDFS-12210 > URL: https://issues.apache.org/jira/browse/HDFS-12210 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12210-HDFS-7240.001.patch, > HDFS-12210-HDFS-7240.002.patch > > > Volume creation times out while creating 3TB volume because of too many > containers > {code} > [hdfs@ctr-e134-1499953498516-64773-01-03 ~]$ > /opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/bin/hdfs cblock -c bilbo disk1 3TB 4 > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] > 17/07/28 09:32:40 WARN util.NativeCodeLoader: Unable to load native-hadoop > library for your platform... using builtin-java classes where applicable > 17/07/28 09:32:40 INFO cli.CBlockCli: create volume:[bilbo, disk1, 3TB, 4] > 17/07/28 09:33:10 ERROR cli.CBlockCli: java.net.SocketTimeoutException: Call > From ctr-e134-1499953498516-64773-01-03.hwx.site/172.27.51.64 to > 0.0.0.0:9810 failed on socket timeout exception: > java.net.SocketTimeoutException: 3 millis timeout while waiting for > channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected > local=/172.27.51.64:59317 remote=/0.0.0.0:9810]; For more details see: > http://wiki.apache.org/hadoop/SocketTimeout > {code} > Looking into the logs it can be seen that 614 containers were > created for the volume before the timeout. > {code} > 2017-07-28 09:32:40,853 INFO org.apache.hadoop.cblock.CBlockManager: Create > volume received: userName: bilbo volumeName: disk1 volumeSize: 3298534883328 > blockSize: 4096 > 2017-07-28 09:32:42,545 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#0 leader:172.27.50.192:9866 machines:[172.27.50.192:9866] > replication factor:1 > 2017-07-28 09:32:43,213 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#1 leader:172.27.51.65:9866 machines:[172.27.51.65:9866] > replication factor:1 > 2017-07-28 09:32:43,484 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#2 leader:172.27.50.192:9866 machines:[172.27.50.192:9866] > replication factor:1 > . > . > . > . 
> 2017-07-28 09:35:01,712 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#612 leader:172.27.50.128:9866 machines:[172.27.50.128:9866] > replication factor:1 > 2017-07-28 09:35:01,963 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#613 leader:172.27.50.128:9866 machines:[172.27.50.128:9866] > replication factor:1 > 2017-07-28 09:35:02,256 INFO > org.apache.hadoop.scm.client.ContainerOperationClient: Created container > bilbo:disk1#614 leader:172.27.50.192:9866 machines:[172.27.50.192:9866] > replication factor:1 > 2017-07-28 09:35:02,358 INFO org.apache.hadoop.cblock.CBlockManager: Create > volume received: userName: bilbo volumeName: disk2 volumeSize: 1099511627776 > blockSize: 4096 > 2017-07-28 09:35:02,368 WARN org.apache.hadoop.ipc.Server: IPC Server handler > 0 on 9810, call Call#0 Retry#0 > org.apache.hadoop.cblock.protocolPB.CBlockServiceProtocol.createVolume from > 172.27.51.64:59 > 317: output error > 2017-07-28 09:35:02,369 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 0 on 9810 caught an exception > java.nio.channels.ClosedChannelException > at > sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270) > at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461) > at org.apache.hadoop.ipc.Server.channelWrite(Server.java:3242) > at org.apache.hadoop.ipc.Server.access$1700(Server.java:137) > at > org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1466) > at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1536) >
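The container count in the log above is easy to sanity-check. Assuming cblock carves a volume into fixed-size 5 GiB containers (an assumption for illustration; the actual default container size may differ), the logged 3,298,534,883,328-byte volume needs 615 containers, matching container IDs #0 through #614 in the log:

```python
# Sanity check of the container count seen in the log above.
# Assumption: cblock allocates fixed-size 5 GiB containers per volume;
# the real default container size may differ.
volume_size = 3298534883328          # volumeSize logged by CBlockManager (3 TiB)
container_size = 5 * 1024 ** 3       # assumed container size: 5 GiB
containers = -(-volume_size // container_size)  # ceiling division
print(containers)  # 615 containers, i.e. IDs #0 .. #614 as in the log
```

At the logged rate of roughly four containers per second (09:32:42 to 09:35:02 for 615 containers), serial creation takes over two minutes, consistent with the client RPC timing out.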
[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery
[ https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179956#comment-16179956 ] Andrew Wang commented on HDFS-12482: Hi Eddy, thanks for working on this, LGTM overall. A few little comments: * We should add documentation in hdfs-default.xml and ErasureCoding.md. * Do you have any indication (based on testing) for a better default than 1.0f? > Provide a configuration to adjust the weight of EC recovery tasks to adjust > the speed of recovery > - > > Key: HDFS-12482 > URL: https://issues.apache.org/jira/browse/HDFS-12482 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12482.00.patch > > > The relative speed of EC recovery compared to 3x replica recovery is a > function of (EC codec, number of sources, NIC speed, CPU speed, etc.). > Currently the EC recovery has a fixed {{xmitsInProgress}} of {{max(# of > sources, # of targets)}} compared to {{1}} for 3x replica recovery, and the NN > uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to the > DataNode. Thus we can add a coefficient for users to tune the weight of EC > recovery tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
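To make the proposal concrete, here is a minimal sketch of the weighting described in the issue. The function name and the rounding behavior are illustrative assumptions, not the actual patch; the real change adjusts how a DataNode's {{xmitsInProgress}} is charged for an EC recovery task.

```python
import math

def ec_recovery_xmits(num_sources, num_targets, weight=1.0):
    """Hypothetical helper: xmits charged for one EC recovery task.

    Today the charge is max(#sources, #targets), versus 1 for a
    3x-replica recovery; a tunable coefficient would scale that charge.
    """
    return max(1, math.ceil(weight * max(num_sources, num_targets)))

print(ec_recovery_xmits(6, 3))       # default weight: charged 6 xmits
print(ec_recovery_xmits(6, 3, 0.5))  # halved weight: charged 3, so the NN
                                     # can schedule more concurrent tasks
```

Since the NN caps each DataNode by its total xmits, a smaller coefficient makes EC recovery tasks cheaper and therefore speeds up recovery, at the cost of more load per node.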
[jira] [Commented] (HDFS-12038) Ozone: Non-admin user is unable to run InfoVolume to the volume owned by itself
[ https://issues.apache.org/jira/browse/HDFS-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179913#comment-16179913 ] Anu Engineer commented on HDFS-12038: - [~nandakumar131] When you get a chance, can you please take a look at this? Thanks in advance. > Ozone: Non-admin user is unable to run InfoVolume to the volume owned by > itself > --- > > Key: HDFS-12038 > URL: https://issues.apache.org/jira/browse/HDFS-12038 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Lokesh Jain > Labels: ozoneMerge > Attachments: HDFS-12038-HDFS-7240.001.patch > > > Reproduce steps > 1. Create a volume with a non-admin user > {code} > hdfs oz -createVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user > wwei -root -quota 2TB > {code} > 2. Run infoVolume command to get this volume info > {noformat} > hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei > Command Failed : > {"httpCode":400,"shortMessage":"badAuthorization","resource":null,"message":"Missing > authorization or authorization has to be > unique.","requestID":"221efb47-72b9-498d-ac19-907257428573","hostName":"ozone1.fyre.ibm.com"} > {noformat} > add {{-root}} to run as admin user could bypass this issue > {noformat} > hdfs oz -infoVolume http://ozone1.fyre.ibm.com:9864/volume-wwei-0 -user wwei > -root > { > "owner" : { > "name" : "wwei" > }, > "quota" : { > "unit" : "TB", > "size" : 2 > }, > "volumeName" : "volume-wwei-0", > "createdOn" : null, > "createdBy" : "hdfs" > } > {noformat} > expecting: both volume owner and admin should be able to run infoVolume > command. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12543: Labels: ozoneMerge (was: ) > Ozone : allow create key without specifying size > > > Key: HDFS-12543 > URL: https://issues.apache.org/jira/browse/HDFS-12543 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Chen Liang >Assignee: Chen Liang > Labels: ozoneMerge > Attachments: HDFS-12543-HDFS-7240.001.patch > > > Currently when creating a key, it is required to specify the total size of > the key. This makes it inconvenient for the case where a key is created and > data keeps coming and being appended. This JIRA is to remove the requirement of > specifying the size on key creation, and allow appending to the key > indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12454: -- Attachment: HDFS-12454-HDFS-7240.005.patch Re-submitting as the v005 patch to trigger Jenkins. > Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work > -- > > Key: HDFS-12454 > URL: https://issues.apache.org/jira/browse/HDFS-12454 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Blocker > Labels: ozoneMerge > Attachments: HDFS-12454-HDFS-7240.001.patch, > HDFS-12454-HDFS-7240.002.patch, HDFS-12454-HDFS-7240.003.patch, > HDFS-12454-HDFS-7240.004.patch, HDFS-12454-HDFS-7240.005.patch > > > In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there > are a few issues with it. > 1. > {code} > > ozone.scm.block.client.address > scm.hadoop.apache.org > > > ozone.ksm.address > ksm.hadoop.apache.org > > {code} > The value should be an address instead. > 2. > {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires > {{ozone.scm.client.address}} to be set, which is missing from this sample > file. Missing this config seems to cause a failure on starting the datanode. > 3. > {code} > > ozone.scm.names > scm.hadoop.apache.org > > {code} > This value did not make much sense to me, but I found the comment in > {{ScmConfigKeys}} that says > {code} > // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT. > // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7: > {code} > So maybe we should write something like scm1 as the value here. > 4. I'm not entirely sure about this, but > [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says > {code} > > ozone.handler.type > local > > {code} > is also part of the minimum settings, do we need to add this [~anu]? 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12543: -- Attachment: HDFS-12543-HDFS-7240.001.patch Posted the v001 patch. In this patch, size and its set/get are still present, so as not to break unit tests. But with this patch, the size is no longer a hard requirement, just an attribute of the key; it is info rather than a constraint. It might be better to remove size completely from {{KsmKeyArgs}}; I would like to leave that as a separate JIRA after this is done. Some clarification notes for reviewers: 1. if the client sets a size and then closes the key, this size will still be passed to the server and visible when getting the key, while if the client does not set the size, the size will be 0. But in either case, as long as some data is written and committed, this value will always be the number of bytes that have been written by the client. 2. regardless of whether the size is set or not on key create, the client can always keep writing (so one of the existing unit tests is removed). The client starts with one block when the key is opened; whenever the client writes beyond the current block, it requests another. 3. for now, there is no append or random re-write on the key, so when a key is written, it gets overwritten completely from the beginning. 4. currently, if the client fails before commit, the open key entry and allocated blocks are NOT cleaned up. 5. multiple opens can happen and all commits will succeed, but a later one always overwrites earlier ones. Thanks [~anu] and [~jnp] for the offline discussion! > Ozone : allow create key without specifying size > > > Key: HDFS-12543 > URL: https://issues.apache.org/jira/browse/HDFS-12543 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12543-HDFS-7240.001.patch > > > Currently when creating a key, it is required to specify the total size of > the key. 
This makes it inconvenient for the case where a key is created and > data keeps coming and being appended. This JIRA is to remove the requirement of > specifying the size on key creation, and allow appending to the key > indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
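The block-by-block growth described in the reviewer notes above can be sketched as follows. The class and method names here are hypothetical, for illustration only; they are not the Ozone/KSM API.

```python
class OpenKey:
    """Toy model of a key created without a declared size."""
    BLOCK_SIZE = 4  # tiny block size, for illustration only

    def __init__(self):
        self.blocks = [bytearray()]  # one block is allocated on open

    def write(self, data: bytes) -> None:
        for b in data:
            if len(self.blocks[-1]) == self.BLOCK_SIZE:
                self.blocks.append(bytearray())  # request another block
            self.blocks[-1].append(b)

    def commit(self) -> int:
        # The committed size is whatever was actually written,
        # not a value fixed at create time.
        return sum(len(blk) for blk in self.blocks)

key = OpenKey()
key.write(b"hello world")
print(key.commit(), len(key.blocks))  # 11 bytes spread over 3 blocks
```

This mirrors notes 1 and 2: the client keeps writing past each block boundary, and the final size is derived from the committed data rather than declared up front.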
[jira] [Commented] (HDFS-12524) Ozone: Record number of keys scanned and hinted for getRangeKVs call
[ https://issues.apache.org/jira/browse/HDFS-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179903#comment-16179903 ] Anu Engineer commented on HDFS-12524: - One quick question: I am not sure what "hinted" stands for. Is it keys where the prefix matched? > Ozone: Record number of keys scanned and hinted for getRangeKVs call > > > Key: HDFS-12524 > URL: https://issues.apache.org/jira/browse/HDFS-12524 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: logging, ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Minor > Labels: ozoneMerge, performance > Attachments: HDFS-12524-HDFS-7240.001.patch > > > Add debug logging to record the number of keys scanned and hinted for > {{getRangeKVs}} calls; this will be helpful for debugging performance issues since > {{getRangeKVs}} is often the place where we get the lag. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12543: -- Status: Patch Available (was: Open) > Ozone : allow create key without specifying size > > > Key: HDFS-12543 > URL: https://issues.apache.org/jira/browse/HDFS-12543 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Chen Liang >Assignee: Chen Liang > Attachments: HDFS-12543-HDFS-7240.001.patch > > > Currently when creating a key, it is required to specify the total size of > the key. This makes it inconvenient for the case where a key is created and > data keeps coming and being appended. This JIRA is to remove the requirement of > specifying the size on key creation, and allow appending to the key > indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12543) Ozone : allow create key without specifying size
Chen Liang created HDFS-12543: - Summary: Ozone : allow create key without specifying size Key: HDFS-12543 URL: https://issues.apache.org/jira/browse/HDFS-12543 Project: Hadoop HDFS Issue Type: Improvement Reporter: Chen Liang Assignee: Chen Liang Currently when creating a key, it is required to specify the total size of the key. This makes it inconvenient for the case where a key is created and data keeps coming and being appended. This JIRA is to remove the requirement of specifying the size on key creation, and allow appending to the key indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file
[ https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179859#comment-16179859 ] Ajay Kumar commented on HDFS-12162: --- [~yzhangal],[~anu],[~elek] thanks for review and commit. Create follow up jira ([HDFS-12542]) to fix javadoc. Seems documentation for directory listing is little outdated as well. Will address both in new jira. > Update listStatus document to describe the behavior when the argument is a > file > --- > > Key: HDFS-12162 > URL: https://issues.apache.org/jira/browse/HDFS-12162 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, httpfs >Reporter: Yongjun Zhang >Assignee: Ajay Kumar > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12162.01.patch, Screen Shot 2017-08-03 at 11.01.46 > AM.png, Screen Shot 2017-08-03 at 11.02.19 AM.png > > > The listStatus method can take in either directory path or file path as > input, however, currently both the javadoc and external document describe it > as only taking directory as input. This jira is to update the document about > the behavior when the argument is a file path. > Thanks [~xiaochen] for the review and discussion in HDFS-12139, creating this > jira is the result of our discussion there. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file
[ https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179859#comment-16179859 ] Ajay Kumar edited comment on HDFS-12162 at 9/25/17 10:03 PM: - [~yzhangal],[~anu],[~elek] thanks for review and commit. Created follow up jira ([HDFS-12542]) to fix javadoc. Seems documentation for directory listing is little outdated as well. Will address both in new jira. was (Author: ajayydv): [~yzhangal],[~anu],[~elek] thanks for review and commit. Create follow up jira ([HDFS-12542]) to fix javadoc. Seems documentation for directory listing is little outdated as well. Will address both in new jira. > Update listStatus document to describe the behavior when the argument is a > file > --- > > Key: HDFS-12162 > URL: https://issues.apache.org/jira/browse/HDFS-12162 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs, httpfs >Reporter: Yongjun Zhang >Assignee: Ajay Kumar > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12162.01.patch, Screen Shot 2017-08-03 at 11.01.46 > AM.png, Screen Shot 2017-08-03 at 11.02.19 AM.png > > > The listStatus method can take in either directory path or file path as > input, however, currently both the javadoc and external document describe it > as only taking directory as input. This jira is to update the document about > the behavior when the argument is a file path. > Thanks [~xiaochen] for the review and discussion in HDFS-12139, creating this > jira is the result of our discussion there. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179856#comment-16179856 ] Wei-Chiu Chuang commented on HDFS-12458: Hi [~xiaochen], this looks like a good approach to address the problem. Two minor questions: 1. Since waitActive() waits until all datanodes send block reports, and the NN waits a few seconds before leaving safemode, it sounds to me like waitClusterUp() should be invoked after waitActive(). 2. In testReencryptionWithoutProvider, why did you replace waitActive with waitClusterUp, rather than keeping waitActive? Thanks > TestReencryptionWithKMS fails regularly > --- > > Key: HDFS-12458 > URL: https://issues.apache.org/jira/browse/HDFS-12458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, test >Affects Versions: 3.0.0-beta1 >Reporter: Konstantin Shvachko >Assignee: Xiao Chen > Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch > > > {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12542: -- Description: Follow up jira to update javadoc and documentation for listStatus. [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] (was: Follow up jira to update javadoc and documentation for listStatus. [HDFS-12162,https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910]) > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12542: -- Description: Follow up jira to update javadoc and documentation for listStatus. [HDFS-12162,https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] (was: Follow up jira to update javadoc and documentation for listStatus. ([#HDFS-12162])) > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162,https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12542) Update javadoc and documentation for listStatus
Ajay Kumar created HDFS-12542: - Summary: Update javadoc and documentation for listStatus Key: HDFS-12542 URL: https://issues.apache.org/jira/browse/HDFS-12542 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ajay Kumar Assignee: Ajay Kumar Follow up jira to update javadoc and documentation for listStatus. ([#HDFS-12162]) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
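The description churn in the HDFS-12542 updates above comes down to the link delimiter: one revision used a comma, the fix swapped it for a pipe. Assuming standard JIRA wiki markup (an assumption about the renderer, not stated in the messages), only the pipe form renders as a hyperlink:

```
[HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162]   pipe: renders as a link titled "HDFS-12162"
[HDFS-12162,https://issues.apache.org/jira/browse/HDFS-12162]   comma: not link syntax, shown literally
[#HDFS-12162]                                                   "#" form: not a valid issue-key link either
```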
[jira] [Commented] (HDFS-12529) Get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179811#comment-16179811 ] Ajay Kumar commented on HDFS-12529: --- [~anu], Thanks for review and commit. > Get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch, HDFS-12529.04.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179808#comment-16179808 ] Ajay Kumar commented on HDFS-12455: --- [~anu], thanks for review. > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12529) Get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179804#comment-16179804 ] Hudson commented on HDFS-12529: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12970 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12970/]) HDFS-12529. Get source for config tags from file name. Contributed by (aengineer: rev 0889e5a8b7102ca1b64af6806537ad99c2018dfd) * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java > Get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch, HDFS-12529.04.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179799#comment-16179799 ] Rushabh S Shah commented on HDFS-12386: --- I think patch#7 is the final good patch. Test failures: {noformat} Running org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.382 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.32 sec - in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting Running org.apache.hadoop.hdfs.server.namenode.TestReencryptionWithKMS Tests run: 32, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 182.293 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestReencryptionWithKMS testCancelFuture(org.apache.hadoop.hdfs.server.namenode.TestReencryptionWithKMS) Time elapsed: 1.192 sec <<< FAILURE! 
java.lang.AssertionError: expected:<0> but was:<5> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hdfs.server.namenode.TestReencryption.testCancelFuture(TestReencryption.java:1548) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {noformat} {{TestReencryption}} is a flaky test and is being tracked by {{HDFS-12458}}. [~daryn] mind reviewing for the last time. Hopefully we don't need more revisions. > Add fsserver defaults call to WebhdfsFileSystem. 
> > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.002.patch, HDFS-12386-branch-2.8.001.patch, > HDFS-12386-branch-2.8.002.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179778#comment-16179778 ] Anu Engineer commented on HDFS-12455: - Thanks for updating the patch. Looks good to me, +1, pending Jenkins. Just FYI: There are two space-only changes -- I will fix them while committing; you don't need to provide a new patch. {{FileStatus.java: 512, ClientNamenodeProtocolServerSideTranslatorPB.java}} > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12455: -- Attachment: HDFS-12455.03.patch > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12455: -- Attachment: (was: HDFS-12455.03.patch) > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12529) Get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12529: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) [~ajayydv] Thank you for the contribution. I have committed this to the trunk. > Get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch, HDFS-12529.04.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179756#comment-16179756 ] Hadoop QA commented on HDFS-12291: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 7s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 18s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}135m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes | | | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestAppendDifferentChecksum | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | |
[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179755#comment-16179755 ] Hadoop QA commented on HDFS-12386: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 49s{color} | {color:red} hadoop-hdfs-project generated 1 new + 450 unchanged - 0 fixed = 451 total (was 450) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 418 unchanged - 0 fixed = 419 total (was 418) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12386 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888921/HDFS-12386-7.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3254c5f04d5c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e928ee5 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21346/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project.txt | | checkstyle |
[jira] [Updated] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12455: -- Attachment: HDFS-12455.03.patch > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, > HDFS-12455.03.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12529) Get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12529: Summary: Get source for config tags from file name (was: get source for config tags from file name) > Get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch, HDFS-12529.04.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status
[ https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179751#comment-16179751 ] Ajay Kumar commented on HDFS-12455: --- [~anu], Thanks for the review. bq. nit : final boolean seBit --> rename as snBit ? I was not able to understand what se stood for. seBit stands for snapshotEnabledBit. Changed the wrapper name to {{snapshotEnabledBit}} to avoid confusion. bq. Just wondering do we have to modify this if statement ? {{if (aBit || eBit || ecBit) }} Yes, an HDFS directory might be snapshot-enabled irrespective of the other 3 flags. To simplify the change I have updated the if condition to {{if (aBit || eBit || ecBit || seBit)}} bq. I see that we have 14 params already to FileStatus constructor, but since these are bits, would you consider adding a new param, or adding a new ctor that takes a set or add a FileStatus Builder? Any of those params will be consistent with the current code. But please don't touch the current constructor. Yes, the current constructor already has too many parameters. To avoid modifying the existing constructor or adding a new one, I have added a setter, i.e. {{setSnapShotEnabledFlag(boolean)}}. I wanted to minimize the change for this jira, so I didn't add a builder initially. Let me know if I should add the builder as part of this jira itself. bq. can you also re-write the Updated the segment with the else condition in patch v3. > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status > -- > > Key: HDFS-12455 > URL: https://issues.apache.org/jira/browse/HDFS-12455 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch > > > WebHDFS - ListStatus query does not provide any information about a folder's > "snapshot enabled" status. Since "ListStatus" lists other attributes it will > be good to include this attribute as well. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
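The flag discussion in the comment above (aBit, eBit, ecBit, seBit and the extended {{if}} condition) amounts to packing optional attribute bits and asking whether any of them is set. A minimal sketch of that idea follows; the class name, constants, and methods here are illustrative stand-ins, not the actual HDFS {{FileStatus}} API:

```java
/**
 * Hedged sketch of the optional-attribute bits discussed above:
 * aBit (ACL), eBit (encryption), ecBit (erasure coding), seBit
 * (snapshot enabled). Names and layout are hypothetical.
 */
public class FileStatusFlags {
    public static final int HAS_ACL = 1;               // aBit
    public static final int IS_ENCRYPTED = 1 << 1;     // eBit
    public static final int IS_ERASURE_CODED = 1 << 2; // ecBit
    public static final int SNAPSHOT_ENABLED = 1 << 3; // seBit

    /** Pack the four booleans into one int, one bit each. */
    public static int encode(boolean aBit, boolean eBit, boolean ecBit, boolean seBit) {
        int flags = 0;
        if (aBit)  { flags |= HAS_ACL; }
        if (eBit)  { flags |= IS_ENCRYPTED; }
        if (ecBit) { flags |= IS_ERASURE_CODED; }
        if (seBit) { flags |= SNAPSHOT_ENABLED; }
        return flags;
    }

    /** Mirrors the updated condition: if (aBit || eBit || ecBit || seBit). */
    public static boolean hasAnyOptionalAttr(int flags) {
        return flags != 0;
    }

    public static void main(String[] args) {
        // A snapshot-enabled directory with none of the other flags set still
        // needs the optional-attribute handling -- which is why the original
        // three-flag condition had to be extended with seBit.
        int flags = encode(false, false, false, true);
        System.out.println(hasAnyOptionalAttr(flags)); // prints "true"
    }
}
```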
[jira] [Commented] (HDFS-12529) get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179750#comment-16179750 ] Anu Engineer commented on HDFS-12529: - [~ajayydv] Thank you for updating the patch. +1, I will commit this shortly. > get source for config tags from file name > - > > Key: HDFS-12529 > URL: https://issues.apache.org/jira/browse/HDFS-12529 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, > HDFS-12529.03.patch, HDFS-12529.04.patch > > > For tagging related properties together use resource name as source. > Currently it assumes source is configured in xml itself. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179700#comment-16179700 ] Hadoop QA commented on HDFS-12498: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 59s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
49s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 85 unchanged - 0 fixed = 89 total (was 85) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12498 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888912/HDFS-12498.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2ec8c29afc27 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e928ee5 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21344/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21344/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21344/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21344/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was
[jira] [Commented] (HDFS-12518) Re-encryption should handle task cancellation and progress better
[ https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179691#comment-16179691 ] Hadoop QA commented on HDFS-12518: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}136m 58s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestReencryption | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12518 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888193/HDFS-12518.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c26ed63b523f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e928ee5 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21343/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21343/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21343/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Re-encryption should handle task cancellation and progress better > - > > Key:
[jira] [Comment Edited] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags
[ https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179665#comment-16179665 ] Ajay Kumar edited comment on HDFS-12513 at 9/25/17 8:02 PM: [~anu],[~xyao],[~nandakumar131] Thanks for the discussion and input on the UI mockup. [~cheersyang], attached the mock UI. The intention is to help users find the relevant configs quickly. I am planning to add an HTML and a JS file that will interact with the conf servlet to fetch the relevant configs. Suggestions and feedback are welcome. was (Author: ajayydv): mock ui > Ozone: Create UI page to show Ozone configs by tags > --- > > Key: HDFS-12513 > URL: https://issues.apache.org/jira/browse/HDFS-12513 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: OzoneSettings.png > > > Create UI page to show Ozone configs by tags -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
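The tag-based filtering Ajay describes could be sketched as follows. Everything here is hypothetical: the class, the sample config keys, and their tags are invented for illustration, and a real implementation would obtain keys and tags from the conf servlet rather than a hard-coded map.

```java
import java.util.*;
import java.util.stream.*;

class ConfigsByTag {
    // Hypothetical data: each config key mapped to the tags it carries.
    static final Map<String, Set<String>> TAGGED_CONFIGS = Map.of(
            "ozone.scm.names", Set.of("OZONE", "SCM"),
            "ozone.enabled", Set.of("OZONE", "REQUIRED"),
            "dfs.replication", Set.of("HDFS"));

    /** Returns the config keys carrying the given tag, sorted for stable display. */
    static List<String> configsForTag(String tag) {
        return TAGGED_CONFIGS.entrySet().stream()
                .filter(e -> e.getValue().contains(tag))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The UI page would render the result of a lookup like this one.
        System.out.println(configsForTag("OZONE"));
    }
}
```

The same lookup could equally live in the JS file, with the servlet returning the tagged keys as JSON.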
[jira] [Commented] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.
[ https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179666#comment-16179666 ] Daryn Sharp commented on HDFS-12532: bq. Floating IP is configured for HA purposes, such that active and standby can use the same IP to access the web UI. bq. Here the datanode registered with X.Y.Y.1, and X.Y.Y.100 is the floating IP which is used for further communication, which will fail. OK. You're using IP failover, as we also do. The stack trace shows that: # The NN is "X.Y.Z.1". # The NN sees the DN connection from "X.Y.Y.100" # The DN's registration is self-identifying as "X.Y.Y.1" # The DN's BP id contains "X.Y.Y.1" which should be the NN's IP... This tells me the DN is running on the NN. "X.Y.Z.1" doesn't exist. The floating IP "X.Y.Y.100" is the interface IP, and "X.Y.Y.1" is IP-aliased to it, which is why non-explicitly-bound connections from the DN originate from "X.Y.Y.100". (In practice you'll probably want the opposite, i.e. your floating IP to be the aliased one, which would incidentally "fix" your issue.) Binding to loopback per the proposal only works when the DN is running on the NN. It won't work if the hosts are different. {quote} bq. I am also not keen on having more confs w/o a clearly stated use case. Ok, then we can make it configurable..? {quote} No. This appears to be a self-inflicted injury, perhaps in a test environment. I recommend either switching your interface and aliased IPs, or setting {{dfs.namenode.datanode.registration.ip-hostname-check=false}}. > DN Reg can Fail when principal doesn't contain hostname and floatingIP is > configured. > - > > Key: HDFS-12532 > URL: https://issues.apache.org/jira/browse/HDFS-12532 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > > Configure principal without hostname (i.e hdfs/had...@hadoop.com) > Configure floatingIP > Start Cluster. > Here DN will fail to register as it may pick an IP which is not in "/etc/hosts".
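For readers following the workaround above: the check Daryn refers to is an hdfs-site.xml property. A minimal snippet (the property name is from the discussion above; the surrounding file layout is the standard Hadoop configuration format):

```xml
<!-- hdfs-site.xml: relax DN registration so the NN does not require the
     DN's self-reported address to resolve and match (use with care). -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```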
[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags
[ https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12513: -- Attachment: OzoneSettings.png mock ui > Ozone: Create UI page to show Ozone configs by tags > --- > > Key: HDFS-12513 > URL: https://issues.apache.org/jira/browse/HDFS-12513 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: OzoneSettings.png > > > Create UI page to show Ozone configs by tags
[jira] [Commented] (HDFS-12420) Add an option to disallow 'namenode format -force'
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179643#comment-16179643 ] Hadoop QA commented on HDFS-12420: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 526 unchanged - 0 fixed = 529 total (was 526) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestClusterId | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12420 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888907/HDFS-12420.10.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux d50c70392a84 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e928ee5 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21342/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21342/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21342/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21342/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |
[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179638#comment-16179638 ] Bharat Viswanadham commented on HDFS-12498: --- {quote} In DFSUtil#getJournalNodeAddresses(), we first convert the qjournal URI to a set of journal node InetSocketAddresses and then convert that to a Set of strings. In JournalNodeSyncer#getOtherJournalNodeAddrs(), this Set of journal node addresses is again converted to InetSocketAddresses. We should avoid the double work here. DFSUtil#getJournalNodeAddresses() can be broken down into two methods: Set<InetSocketAddress> getJournalNodeSocketAddresses(Configuration conf); Set<String> getJournalNodeAddresses(Configuration conf); {quote} [~hanishakoneru] I have also thought about this, but if I add another method for this, there will be a lot of common code among these methods. I have done it this way so that I can reuse most of the existing code. Also, even though this conversion is done twice, it is a one-time cost that is incurred only during JournalNode startup. Let me know your thoughts on this. > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. 
Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code}
[jira] [Commented] (HDFS-12320) Add quantiles for transactions batched in Journal sync
[ https://issues.apache.org/jira/browse/HDFS-12320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179595#comment-16179595 ] Hanisha Koneru commented on HDFS-12320: --- Thanks [~anu] > Add quantiles for transactions batched in Journal sync > --- > > Key: HDFS-12320 > URL: https://issues.apache.org/jira/browse/HDFS-12320 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics, namenode >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.1.0 > > Attachments: HDFS-12320.001.patch > > > We currently track the overall count of the transactions which were batched > during journal sync through the metric _TransactionsBatchedInSync_. It will > be useful to have a quantile to measure the transactions batched together > over a period. This would give a better understanding of the distribution of > the batching.
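As a rough illustration of what a quantile metric over batched-transaction counts would report (the jira's actual implementation would use Hadoop's rolling-window metrics machinery), here is a minimal nearest-rank percentile sketch; the class name and the sample values are invented:

```java
import java.util.*;

class BatchedTxnQuantiles {
    /**
     * Nearest-rank percentile: the smallest sample such that at least p% of
     * the samples are <= it. samples = transactions batched per journal sync.
     */
    static double percentile(List<Long> samples, double p) {
        List<Long> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        // Hypothetical per-sync batch sizes collected over one window.
        List<Long> batched = Arrays.asList(1L, 2L, 2L, 3L, 5L, 8L, 13L, 40L);
        System.out.println("p50=" + percentile(batched, 50));
        System.out.println("p90=" + percentile(batched, 90));
    }
}
```

The point of the jira: a single running count hides exactly this shape, where the median batch is small but the tail is much larger.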
[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179561#comment-16179561 ] Hanisha Koneru commented on HDFS-12498: --- Thanks [~bharatviswa] for the patch. A few comments: - In {{DFSUtil#getJournalNodeAddresses()}}, we first convert the qjournal URI to a set of journal node InetSocketAddresses and then convert that to a Set of strings. In {{JournalNodeSyncer#getOtherJournalNodeAddrs()}}, this _Set_ of journal node addresses is again converted to InetSocketAddresses. We should avoid the double work here. {{DFSUtil#getJournalNodeAddresses()}} can be broken down into two methods: {quote} Set<InetSocketAddress> getJournalNodeSocketAddresses(Configuration conf); Set<String> getJournalNodeAddresses(Configuration conf); {quote} The second method can be called from {{GetConf}} and the first one from {{JournalNodeSyncer}}. There is also no need for the new _portrequired_ variable. - The following piece of code in {{DFSUtil#getJournalNodeAddresses()}} can be optimized by using the _List#remove()_ method instead of the _List#removeAll()_ method (as only one element is removed). {code} journalnodeSocketAddressList.removeAll(Sets.newHashSet(jn.getBoundIpcAddress())); {code} > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. 
Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code}
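The two-method split suggested in the review above could be sketched roughly as follows. This is hypothetical and heavily simplified: a plain Map stands in for Hadoop's Configuration, and the qjournal URI parsing is reduced to a comma-separated host:port list, so only the shape of the refactoring is shown, not the real DFSUtil logic.

```java
import java.net.InetSocketAddress;
import java.util.*;
import java.util.stream.*;

class JournalNodeAddrs {
    /** Journal node addresses as host:port strings (what GetConf needs). */
    static Set<String> getJournalNodeAddresses(Map<String, String> conf) {
        String editsDir = conf.getOrDefault("dfs.namenode.shared.edits.dir", "");
        return Arrays.stream(editsDir.split(","))
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toCollection(LinkedHashSet::new));
    }

    /** The same addresses resolved to sockets (what JournalNodeSyncer needs). */
    static Set<InetSocketAddress> getJournalNodeSocketAddresses(Map<String, String> conf) {
        return getJournalNodeAddresses(conf).stream()
                .map(a -> {
                    String[] hostPort = a.split(":");
                    return InetSocketAddress.createUnresolved(
                            hostPort[0], Integer.parseInt(hostPort[1]));
                })
                .collect(Collectors.toCollection(LinkedHashSet::new));
    }
}
```

Each caller then gets the type it needs directly, avoiding the string-to-socket round trip the review points out.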
[jira] [Commented] (HDFS-12420) Add an option to disallow 'namenode format -force'
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179557#comment-16179557 ] Hadoop QA commented on HDFS-12420: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
50s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 525 unchanged - 0 fixed = 528 total (was 525) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}117m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.TestClusterId | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12420 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888907/HDFS-12420.10.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 0cc8953dda46 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3a10367 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21340/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21340/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21340/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21340/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This
[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179540#comment-16179540 ] Surendra Singh Lilhore commented on HDFS-12291: --- Thanks [~xiaochen], Attached the v7 patch. Fixed the failed test case. > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch, > HDFS-12291-HDFS-10285-06.patch, HDFS-12291-HDFS-10285-07.patch > > > For the given source path directory, presently SPS considers only the files > immediately under the directory (only one level of scanning) for satisfying > the policy. It WON’T do recursive directory scanning and then schedule SPS > tasks to satisfy the storage policy of all the files till the leaf node. > The idea of this jira is to discuss & implement an efficient recursive > directory iteration mechanism and satisfy the storage policy for all the files > under the given directory.
[jira] [Commented] (HDFS-12509) Ozone: Revert files not related to ozone change in HDFS-7240 branch
[ https://issues.apache.org/jira/browse/HDFS-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179537#comment-16179537 ] Hadoop QA commented on HDFS-12509: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 7s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 29s{color} | {color:green} HDFS-7240 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || 
{color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 38s{color} | {color:red} root generated 2 new + 1306 unchanged - 2 fixed = 1308 total (was 1308) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} root: The patch generated 13 new + 31 unchanged - 3 fixed = 44 total (was 34) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 24s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 89 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 29s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-nfs in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 7s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 16s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 48s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 27s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} |
[jira] [Updated] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-12291: -- Attachment: HDFS-12291-HDFS-10285-07.patch > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch, > HDFS-12291-HDFS-10285-06.patch, HDFS-12291-HDFS-10285-07.patch > > > For the given source path directory, presently SPS considers only the files > immediately under the directory (only one level of scanning) for satisfying > the policy. It WON’T do recursive directory scanning and then schedule SPS > tasks to satisfy the storage policy of all the files down to the leaf nodes. > The idea of this jira is to discuss & implement an efficient recursive > directory iteration mechanism and satisfy the storage policy for all the files > under the given directory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179508#comment-16179508 ] Rushabh S Shah edited comment on HDFS-12386 at 9/25/17 6:23 PM: Attaching patches for all the branches, addressing the checkstyle issues and one spelling mistake in the trunk patch. IMO, it is not worth wasting build resources on another full run. {noformat} diff ~/patches/jira/HDFS-12386-6.patch ~/patches/jira/HDFS-12386-7.patch 2c2 < index dcd73bfc7eb..56d573d5e2e 100644 --- > index dcd73bfc7eb..53d886df6b3 100644 46,47c46,47 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); 133c133 < index e4008479fae..47814d0b773 100644 --- > index e4008479fae..9e0a1ed8193 100644 173c173 < + * Pleae don't use it otherwise. --- > + * Please don't use it otherwise. {noformat} {noformat} diff ~/patches/jira/HDFS-12386-branch-2.001.patch ~/patches/jira/HDFS-12386-branch-2.002.patch 2c2 < index 43bb17f733d..769589c4606 100644 --- > index 43bb17f733d..0320614af97 100644 47,48c47,48 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); {noformat} {noformat} diff ~/patches/jira/HDFS-12386-branch-2.8.001.patch ~/patches/jira/HDFS-12386-branch-2.8.002.patch 2c2 < index b1c270b9c2a..1775ec2504f 100644 --- > index b1c270b9c2a..bfc770ea4ee 100644 47,48c47,48 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); {noformat} was (Author: shahrs87): Attaching all the branches patch addressing the check-style issues and 1 spelling mistakes in trunk patch. IMO, not worth to waste useful build resources. 
{noformat} diff ~/patches/jira/HDFS-12386-6.patch ~/patches/jira/HDFS-12386-7.patch 2c2 < index dcd73bfc7eb..56d573d5e2e 100644 --- > index dcd73bfc7eb..53d886df6b3 100644 46,47c46,47 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); 133c133 < index e4008479fae..47814d0b773 100644 --- > index e4008479fae..9e0a1ed8193 100644 173c173 < + * Pleae don't use it otherwise. --- > + * Please don't use it otherwise. {noformat} {noformat} diff ~/patches/jira/HDFS-12386-branch-2.001.patch ~/patches/jira/HDFS-12386-branch-2.002.patch 2c2 < index 43bb17f733d..769589c4606 100644 --- > index 43bb17f733d..0320614af97 100644 47,48c47,48 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); {noformat} {noformat} diff ~/patches/jira/HDFS-12386-branch-2.8.001.patch ~/patches/jira/HDFS-12386-branch-2.8.002.patch 2c2 < index b1c270b9c2a..1775ec2504f 100644 --- > index b1c270b9c2a..bfc770ea4ee 100644 47,48c47,48 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); {noformat} > Add fsserver defaults call to WebhdfsFileSystem. 
> > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.002.patch, HDFS-12386-branch-2.8.001.patch, > HDFS-12386-branch-2.8.002.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: HDFS-12386-7.patch HDFS-12386-branch-2.002.patch HDFS-12386-branch-2.8.002.patch Attaching patches for all the branches, addressing the checkstyle issues and one spelling mistake in the trunk patch. IMO, it is not worth wasting build resources on another full run. {noformat} diff ~/patches/jira/HDFS-12386-6.patch ~/patches/jira/HDFS-12386-7.patch 2c2 < index dcd73bfc7eb..56d573d5e2e 100644 --- > index dcd73bfc7eb..53d886df6b3 100644 46,47c46,47 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); 133c133 < index e4008479fae..47814d0b773 100644 --- > index e4008479fae..9e0a1ed8193 100644 173c173 < + * Pleae don't use it otherwise. --- > + * Please don't use it otherwise. {noformat} {noformat} diff ~/patches/jira/HDFS-12386-branch-2.001.patch ~/patches/jira/HDFS-12386-branch-2.002.patch 2c2 < index 43bb17f733d..769589c4606 100644 --- > index 43bb17f733d..0320614af97 100644 47,48c47,48 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); {noformat} {noformat} diff ~/patches/jira/HDFS-12386-branch-2.8.001.patch ~/patches/jira/HDFS-12386-branch-2.8.002.patch 2c2 < index b1c270b9c2a..1775ec2504f 100644 --- > index b1c270b9c2a..bfc770ea4ee 100644 47,48c47,48 < +Map m = < +(Map) json.get(FsServerDefaults.class.getSimpleName()); --- > +Map m = > +(Map) json.get(FsServerDefaults.class.getSimpleName()); {noformat} > Add fsserver defaults call to WebhdfsFileSystem. 
> > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-7.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.002.patch, HDFS-12386-branch-2.8.001.patch, > HDFS-12386-branch-2.8.002.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin
[ https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179483#comment-16179483 ] Andrew Wang commented on HDFS-12257: Hi Huafeng, one higher level question, should we add a RemoteIterator paginated version of this API and use it instead? I'd prefer not to add new APIs that return an array, since it's possible that there could be many snapshottable dirs. > Expose getSnapshottableDirListing as a public API in HdfsAdmin > -- > > Key: HDFS-12257 > URL: https://issues.apache.org/jira/browse/HDFS-12257 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch > > > Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no > programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we > should expose listing there as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
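[Editor's note] The {{RemoteIterator}}-style API Andrew is asking for would page through results on demand instead of materializing one potentially huge array. A self-contained sketch of such pagination (the {{PageFetcher}} hook and batching scheme are hypothetical, not the actual HdfsAdmin API; a local stand-in replaces {{org.apache.hadoop.fs.RemoteIterator}} so the sketch compiles alone):

```java
import java.io.IOException;
import java.util.*;

// Minimal stand-in for org.apache.hadoop.fs.RemoteIterator so this compiles standalone.
interface RemoteIterator<E> {
    boolean hasNext() throws IOException;
    E next() throws IOException;
}

public class PagedSnapshotDirIterator implements RemoteIterator<String> {
    // Hypothetical page fetcher: returns up to 'limit' dirs after 'startAfter' (null = from start).
    interface PageFetcher {
        List<String> fetchPage(String startAfter, int limit) throws IOException;
    }

    private final PageFetcher fetcher;
    private final int pageSize;
    private Iterator<String> page = Collections.emptyIterator();
    private String lastSeen = null;
    private boolean exhausted = false;

    PagedSnapshotDirIterator(PageFetcher fetcher, int pageSize) {
        this.fetcher = fetcher;
        this.pageSize = pageSize;
    }

    @Override
    public boolean hasNext() throws IOException {
        if (page.hasNext()) return true;
        if (exhausted) return false;
        List<String> next = fetcher.fetchPage(lastSeen, pageSize);
        if (next.isEmpty()) { exhausted = true; return false; }
        if (next.size() < pageSize) exhausted = true;   // short page: server has no more
        lastSeen = next.get(next.size() - 1);           // cursor for the next RPC
        page = next.iterator();
        return true;
    }

    @Override
    public String next() throws IOException {
        if (!hasNext()) throw new NoSuchElementException();
        return page.next();
    }

    public static void main(String[] args) throws IOException {
        // Fake "server" with five snapshottable dirs, served two at a time.
        List<String> dirs = Arrays.asList("/a", "/b", "/c", "/d", "/e");
        PageFetcher fake = (startAfter, limit) -> {
            int from = (startAfter == null) ? 0 : dirs.indexOf(startAfter) + 1;
            return dirs.subList(from, Math.min(from + limit, dirs.size()));
        };
        RemoteIterator<String> it = new PagedSnapshotDirIterator(fake, 2);
        List<String> seen = new ArrayList<>();
        while (it.hasNext()) seen.add(it.next());
        System.out.println(seen);  // [/a, /b, /c, /d, /e]
    }
}
```

The caller never holds more than one page in memory, which is the point of preferring this shape over an array-returning API when there may be many snapshottable dirs.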
[jira] [Updated] (HDFS-12511) Ozone: Add tags to config
[ https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12511: Labels: ozoneMerge (was: ) > Ozone: Add tags to config > - > > Key: HDFS-12511 > URL: https://issues.apache.org/jira/browse/HDFS-12511 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12511.01.patch > > > Add tags to ozone config: > Example: > {code} > <property> > <name>ozone.ksm.handler.count.key</name> > <value>200</value> > <tag>OZONE,PERFORMANCE,KSM</tag> > <description> > The number of RPC handler threads for each KSM service endpoint. > </description> > </property> > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
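[Editor's note] Once config keys carry a tag element like the example above, tag-based lookup is just XML filtering. A standalone sketch using only JDK XML parsing (illustrative, not the actual Hadoop/Ozone {{Configuration}} implementation; the {{propertiesByTag}} helper name is hypothetical):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class TaggedConfig {
    // Return names of <property> elements whose comma-separated <tag> list contains the given tag.
    static List<String> propertiesByTag(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        List<String> names = new ArrayList<>();
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String name = text(p, "name");
            String tags = text(p, "tag");
            if (tags != null && Arrays.asList(tags.split("\\s*,\\s*")).contains(tag)) {
                names.add(name);
            }
        }
        return names;
    }

    private static String text(Element parent, String child) {
        NodeList nl = parent.getElementsByTagName(child);
        return nl.getLength() == 0 ? null : nl.item(0).getTextContent().trim();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<configuration>"
            + "<property><name>ozone.ksm.handler.count.key</name>"
            + "<value>200</value><tag>OZONE,PERFORMANCE,KSM</tag></property>"
            + "<property><name>some.other.key</name><value>1</value></property>"
            + "</configuration>";
        System.out.println(propertiesByTag(xml, "PERFORMANCE"));
        // [ozone.ksm.handler.count.key]
    }
}
```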
[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179471#comment-16179471 ] Hadoop QA commented on HDFS-12386: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 46s{color} | {color:red} hadoop-hdfs-project generated 1 new + 450 unchanged - 0 fixed = 451 total (was 450) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 46s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 418 unchanged - 0 fixed = 421 total (was 418) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 18s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}139m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12386 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1294/HDFS-12386-6.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e632e63d3f80 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0807470 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21339/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project.txt | | checkstyle |
[jira] [Commented] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation
[ https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179461#comment-16179461 ] Andrew Wang commented on HDFS-12534: Hi [~HuafengWang], thanks for taking a look, let me try to explain in more detail based on my understanding from talking with Marcelo: * When writing, applications write with awareness of the blocksize. They will try to pad to block boundaries, and expect the file to be splittable at the block boundaries. * When reading, applications use the BlockLocations returned by HDFS to understand where the split points are. * With the current EC scheme, since an entire block group is represented by a single BlockLocation, we won't get as much parallelism as we'd like. For instance, with RS(6,3) with a blocksize of 100MB, a 600MB file would be written to have six split points, but only have a single BlockLocation for the entire block group. I haven't looked at FileInputFormat yet to figure out how this works for S3. > Provide logical BlockLocations for EC files for better split calculation > > > Key: HDFS-12534 > URL: https://issues.apache.org/jira/browse/HDFS-12534 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang > Labels: hdfs-ec-3.0-must-do > > I talked to [~vanzin] and [~alex.behm] some more about split calculation with > EC. It turns out HDFS-1 was resolved prematurely. Applications depend on > HDFS BlockLocation to understand where the split points are. The current > scheme of returning one BlockLocation per block group loses this information. > We should change this to provide logical blocks. Divide the file length by > the block size and provide suitable BlockLocations to match, with virtual > offsets and lengths too. > I'm not marking this as incompatible, since changing it this way would in > fact make it more compatible from the perspective of applications that are > scheduling against replicated files. 
Thus, it'd be good for beta1 if > possible, but okay for later too. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
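[Editor's note] The split arithmetic in Andrew's RS(6,3) example can be sketched directly: divide the file length by the block size and emit one logical (offset, length) pair per block, so a 600MB file with 100MB blocks yields six split points instead of one per block group. This shows only the arithmetic with a hypothetical helper name; the real change would also attach hosts to each {{BlockLocation}}:

```java
import java.util.*;

public class LogicalEcBlocks {
    // One synthetic (offset, length) pair per logical block of a striped file.
    static List<long[]> logicalBlocks(long fileLen, long blockSize) {
        List<long[]> out = new ArrayList<>();
        for (long off = 0; off < fileLen; off += blockSize) {
            // Last block may be shorter than blockSize.
            out.add(new long[] { off, Math.min(blockSize, fileLen - off) });
        }
        return out;
    }

    public static void main(String[] args) {
        // Andrew's example: 100MB blocks, 600MB file => six logical split points.
        long mb = 1024L * 1024;
        List<long[]> blocks = logicalBlocks(600 * mb, 100 * mb);
        System.out.println(blocks.size());  // 6
    }
}
```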
[jira] [Commented] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently
[ https://issues.apache.org/jira/browse/HDFS-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179449#comment-16179449 ] Eric Badger commented on HDFS-12495: Looks like Jenkins really doesn't want to run on this JIRA > TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently > -- > > Key: HDFS-12495 > URL: https://issues.apache.org/jira/browse/HDFS-12495 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2 >Reporter: Eric Badger >Assignee: Eric Badger > Labels: flaky-test > Attachments: HDFS-12495.001.patch > > > {noformat} > java.net.BindException: Problem binding to [localhost:36701] > java.net.BindException: Address already in use; For more details see: > http://wiki.apache.org/hadoop/BindException > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at org.apache.hadoop.ipc.Server.bind(Server.java:546) > at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:955) > at org.apache.hadoop.ipc.Server.<init>(Server.java:2655) > at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611) > at > org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499) > at > 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546) > at > org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152) > at > org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
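[Editor's note] The bind failure above is a fixed-port collision: the restarted DataNode asks for the same port while another socket (possibly in TIME_WAIT) still holds it. One common way tests sidestep this, shown here as a generic sketch rather than the actual HDFS-12495 patch, is to bind port 0 and let the OS assign a free ephemeral port:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPort {
    // Bind to port 0; the OS assigns a free port, so concurrent tests never collide
    // on a hard-coded port the way the restarted DataNode did above.
    static int bindEphemeral() throws IOException {
        try (ServerSocket ss = new ServerSocket()) {
            ss.bind(new InetSocketAddress("localhost", 0));
            return ss.getLocalPort();  // positive once bound
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(bindEphemeral() > 0);  // true
    }
}
```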
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: HDFS-12386-branch-2.001.patch HDFS-12386-branch-2.8.001.patch Since the latest patch is close to being a good patch, attaching branch-2 and branch-2.8 patch so that jenkins can test it. > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, > HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, > HDFS-12386-6.patch, HDFS-12386-branch-2.001.patch, > HDFS-12386-branch-2.8.001.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179428#comment-16179428 ] Bharat Viswanadham edited comment on HDFS-12498 at 9/25/17 5:51 PM: Now if dfs.shared.edits.dir is suffixed with nameserviceId or (nameserviceId along with namenodeId) journal syncer is getting started. was (Author: bharatviswa): Now if dfs.shared.edits.dir is suffixed with nameserviceId or (nameserviceId along with namenodeId) journal syncer is getting started. Tested it on federated cluster setup. > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179428#comment-16179428 ] Bharat Viswanadham commented on HDFS-12498: --- Now if dfs.shared.edits.dir is suffixed with nameserviceId or (nameserviceId along with namenodeId) journal syncer is getting started. Tested it on federated cluster setup. > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
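[Editor's note] For reference, the per-nameservice suffixing discussed in the comment above would look roughly like this in hdfs-site.xml (the nameservice id {{ns1}}, namenode id {{nn1}}, and JournalNode hosts are placeholders, shown only to illustrate the key shapes):

```xml
<!-- Key suffixed with the nameserviceId only -->
<property>
  <name>dfs.namenode.shared.edits.dir.ns1</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
</property>

<!-- Or suffixed with nameserviceId plus namenodeId -->
<property>
  <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
</property>
```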
[jira] [Commented] (HDFS-12518) Re-encryption should handle task cancellation and progress better
[ https://issues.apache.org/jira/browse/HDFS-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179429#comment-16179429 ] Wei-Chiu Chuang commented on HDFS-12518: Precommit wasn't triggered. I'm triggering the build again. > Re-encryption should handle task cancellation and progress better > - > > Key: HDFS-12518 > URL: https://issues.apache.org/jira/browse/HDFS-12518 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.0.0-beta1 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-12518.01.patch > > > Re-encryption should handle task cancellation and progress tracking better in > general. > In a recent internal report, a canceled re-encryption could lead to the > progress of the zone being 'Processing' forever. Sending a new cancel command > would make it complete, but new re-encryptions for the same zone wouldn't > work because the canceled future is not removed. > This jira proposes to fix that, and enhance the current handling so a new > command would start from a clean state. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
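[Editor's note] The "canceled future is not removed" failure mode described above is easy to reproduce in miniature: if the tracking map keeps a canceled {{Future}}, later submissions for the same zone are rejected. A generic cancel-then-remove sketch (class and method names are hypothetical, not the actual re-encryption handler code; a real version would also remove entries when tasks complete normally):

```java
import java.util.Map;
import java.util.concurrent.*;

public class ZoneTasks {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private final Map<String, Future<?>> running = new ConcurrentHashMap<>();

    // Reject a duplicate submission while a task for the zone is still tracked.
    boolean submit(String zone, Runnable work) {
        Future<?>[] accepted = new Future<?>[1];
        running.computeIfAbsent(zone, z -> {
            accepted[0] = pool.submit(work);
            return accepted[0];
        });
        return accepted[0] != null;  // null means the zone was already occupied
    }

    // Cancel AND remove, so a later submit(zone, ...) starts from a clean state.
    // Canceling without removing is exactly the stuck-'Processing' bug above.
    void cancel(String zone) {
        Future<?> f = running.remove(zone);
        if (f != null) f.cancel(true);
    }

    void shutdown() { pool.shutdownNow(); }

    public static void main(String[] args) {
        ZoneTasks t = new ZoneTasks();
        CountDownLatch hold = new CountDownLatch(1);
        System.out.println(t.submit("/zone1", () -> {
            try { hold.await(); } catch (InterruptedException e) { /* canceled */ }
        }));                                              // true: accepted
        System.out.println(t.submit("/zone1", () -> { })); // false: already running
        t.cancel("/zone1");
        System.out.println(t.submit("/zone1", () -> { })); // true: clean state after cancel
        t.shutdown();
    }
}
```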
[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12498: -- Status: Patch Available (was: In Progress) > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12498: -- Attachment: HDFS-12498.02.patch > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup
[ https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179419#comment-16179419 ] Ajay Kumar commented on HDFS-12516: --- [~anu],[~arpitagarwal] thanks for review and commit. > Suppress the fsnamesystem lock warning on nn startup > > > Key: HDFS-12516 > URL: https://issues.apache.org/jira/browse/HDFS-12516 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: 3.1.0 > > Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch, > HDFS-12516.03.patch > > > Whenever FsNameSystemLock is held for more than configured value of > {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an > entry in metrics. Loading FSImage from disk will usually cross this > threshold. We can suppress this FsNamesystem lock warning on NameNode startup. > {code} > 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held > for 7159 ms via > java.lang.Thread.getStackTrace(Thread.java:1552) > org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703) > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688) > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752) > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992) > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976) > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701) > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769) > Number of suppressed write-lock reports: 0 > Longest write-lock held interval: 7159 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To 
unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12498: -- Attachment: (was: HDFS-12498.02.patch) > Journal Syncer is not started in Federated + HA cluster > --- > > Key: HDFS-12498 > URL: https://issues.apache.org/jira/browse/HDFS-12498 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12498.01.patch, hdfs-site.xml > > > Journal Syncer is not getting started in HDFS + Federated cluster, when > dfs.shared.edits.dir.<> is provided, instead of > dfs.namenode.shared.edits.dir > *Log Snippet:* > {code:java} > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct > Shared Edits Uri > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode > addresses not available. Journal Syncing cannot be done > 2017-09-19 21:42:40,598 WARN > org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start > SyncJournal daemon for journal ns1 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster
[ https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham updated HDFS-12498:
--------------------------------------
    Attachment: HDFS-12498.02.patch

> Journal Syncer is not started in Federated + HA cluster
> -------------------------------------------------------
>
>                 Key: HDFS-12498
>                 URL: https://issues.apache.org/jira/browse/HDFS-12498
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>         Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, hdfs-site.xml
>
>
> The Journal Syncer does not start in a Federated + HA cluster when
> dfs.shared.edits.dir.<> is provided instead of dfs.namenode.shared.edits.dir.
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start SyncJournal daemon for journal ns1
> {code}
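For context, federated HDFS configuration keys are layered per nameservice, so code resolving a setting such as the shared edits directory has to probe a suffixed variant of the base key before falling back to the bare key. A standalone illustration of that lookup pattern (plain Java with a hypothetical helper class; this is not the JournalNodeSyncer code):

```java
// Illustrative sketch (not Hadoop code): resolve a per-nameservice config key
// by trying the nameservice-suffixed variant first, then the base key, the way
// federated keys such as dfs.namenode.shared.edits.dir.<nsId> layer on a base key.
import java.util.Map;

public class NsKeyResolver {
    /** Returns conf[key + "." + nsId] if set, else conf[key], else null. */
    public static String resolve(Map<String, String> conf, String key, String nsId) {
        String v = conf.get(key + "." + nsId);
        return (v != null) ? v : conf.get(key);
    }
}
```

Under this pattern, a value stored only under a differently spelled key (as in the misconfiguration reported above) is never consulted, which is consistent with the "Could not construct Shared Edits Uri" warning in the log snippet.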
[jira] [Commented] (HDFS-11590) Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or not in the cache
[ https://issues.apache.org/jira/browse/HDFS-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179409#comment-16179409 ]

Daryn Sharp commented on HDFS-11590:
------------------------------------

Hmm, the client used to treat an invalid token as fatal for the renewer. It doesn't anymore. I'd rather see {{DFSClient#renewLease}} be consistent with most of the client methods: simply invoke the proxy method and unwrap exceptions. {{LeaseRenewer#renew}} can catch {{InvalidToken}} and consider it fatal. The catch block in the client can be hoisted into the renewer, since that's where it really belongs.

> Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or not in the cache
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11590
>                 URL: https://issues.apache.org/jira/browse/HDFS-11590
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.6.0
>        Environment: Releases:
>          cloudera release cdh-5.5.0
>          openjdk version "1.8.0_91"
>          linux centos6 servers
>        Cluster info:
>          Namenode and resourcemanager in HA with kerberos authentication
>          More than 1300 datanodes/nodemanagers
>            Reporter: Nicolas Fraison
>            Priority: Minor
>         Attachments: HDFS-11590.patch
>
>
> We have faced some huge slowdowns on our namenode because all our nodemanagers
> kept retrying to renew a lease, reconnecting to the namenode every second for
> an hour, due to some HDFS_DELEGATION_TOKEN being expired or not in the cache.
> The number of TIME_WAIT connections on our namenode was stuck at the configured
> maximum of 250k during this period because of the reconnections.
> {code}
> 2017-03-02 11:51:42,817 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1488396860014_156103_01 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
> 2017-03-02 11:51:43,414 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1488396860014_156120_01 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ContainerManagementProtocolPB
> 2017-03-02 11:51:51,994 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:prediction (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
> 2017-03-02 11:51:51,995 WARN org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
> 2017-03-02 11:51:51,995 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:prediction (auth:SIMPLE) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
> 2017-03-02 11:51:51,995 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_1560141256_4187204] for 30 seconds. Will retry shortly ...
> token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired
>   at org.apache.hadoop.ipc.Client.call(Client.java:1472)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>   at com.sun.proxy.$Proxy20.renewLease(Unknown Source)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
>   at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>   at com.sun.proxy.$Proxy21.renewLease(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:921)
>   at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423)
>   at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448)
>   at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
>   at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
>   at java.lang.Thread.run(Thread.java:745)
> 2017-03-02
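The fix direction Daryn Sharp sketches above can be illustrated with a standalone toy model (the names here are illustrative, not the actual {{DFSClient}}/{{LeaseRenewer}} code): the renewal loop itself distinguishes a fatal invalid-token failure, which stops the loop immediately, from a transient failure, which is retried. This is what prevents the retry-every-second-for-an-hour behavior the reporter observed.

```java
// Toy model of a renewal loop (hypothetical names, not Hadoop classes):
// InvalidTokenException is fatal and stops the loop; TransientException retries.
public class RenewLoop {
    static class InvalidTokenException extends Exception {}
    static class TransientException extends Exception {}

    interface Client {
        void renewLease() throws InvalidTokenException, TransientException;
    }

    /** Returns the number of renew attempts made before success or giving up. */
    public static int run(Client client, int maxRetries) {
        for (int attempts = 1; attempts <= maxRetries; attempts++) {
            try {
                client.renewLease();   // plain proxy call; exceptions unwrapped by the caller
                return attempts;       // renewed successfully
            } catch (InvalidTokenException e) {
                // Fatal: the token will never become valid again, so stop retrying.
                return attempts;
            } catch (TransientException e) {
                // Transient (e.g. connectivity): fall through and retry.
            }
        }
        return maxRetries;             // retry budget exhausted
    }
}
```

With the catch hoisted into the renewer like this, an expired HDFS_DELEGATION_TOKEN causes exactly one failed attempt instead of a storm of reconnections.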
[jira] [Commented] (HDFS-12529) get source for config tags from file name
[ https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179404#comment-16179404 ]

Ajay Kumar commented on HDFS-12529:
-----------------------------------

The failed test case seems unrelated; it passes locally.

> get source for config tags from file name
> -----------------------------------------
>
>                 Key: HDFS-12529
>                 URL: https://issues.apache.org/jira/browse/HDFS-12529
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ajay Kumar
>            Assignee: Ajay Kumar
>         Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch, HDFS-12529.03.patch, HDFS-12529.04.patch
>
>
> For tagging related properties together, use the resource name as the source.
> Currently the code assumes the source is configured in the XML itself.
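The approach the summary describes, taking each property's source from the name of the file it was loaded from rather than from an attribute inside the XML, can be sketched standalone (a hypothetical class of my own, not the HDFS-12529 patch):

```java
// Illustrative sketch (not the actual patch): record the resource (file) name
// as the source of every property loaded from it.
import java.util.HashMap;
import java.util.Map;

public class TaggedConf {
    private final Map<String, String> values = new HashMap<>();
    private final Map<String, String> sources = new HashMap<>();

    /** Loads properties, tagging each with resourceName as its source. */
    public void load(String resourceName, Map<String, String> props) {
        for (Map.Entry<String, String> e : props.entrySet()) {
            values.put(e.getKey(), e.getValue());
            sources.put(e.getKey(), resourceName);  // file name as the source tag
        }
    }

    public String get(String key) {
        return values.get(key);
    }

    public String getSource(String key) {
        return sources.get(key);
    }
}
```

This removes the assumption that each property carries its own source inside the XML: the loader, which already knows which file it is reading, supplies it.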