[jira] [Updated] (HDFS-12560) Remove the extra word "it" in HdfsUserGuide.md
[ https://issues.apache.org/jira/browse/HDFS-12560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HDFS-12560: --- Fix Version/s: 3.0.0-alpha4 Target Version/s: 3.0.0-alpha4 Status: Patch Available (was: Open) > Remove the extra word "it" in HdfsUserGuide.md > --- > > Key: HDFS-12560 > URL: https://issues.apache.org/jira/browse/HDFS-12560 > Project: Hadoop HDFS > Issue Type: Improvement > Reporter: fang zhenyi > Priority: Trivial > Fix For: 3.0.0-alpha4 > > Attachments: HDFS-12560.001.patch > > > Since "it" is an extra word in the fsck description, we should remove it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12560) Remove the extra word "it" in HdfsUserGuide.md
[ https://issues.apache.org/jira/browse/HDFS-12560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HDFS-12560: --- Attachment: HDFS-12560.001.patch > Remove the extra word "it" in HdfsUserGuide.md > --- > > Key: HDFS-12560 > URL: https://issues.apache.org/jira/browse/HDFS-12560 > Project: Hadoop HDFS > Issue Type: Improvement > Reporter: fang zhenyi > Priority: Trivial > Attachments: HDFS-12560.001.patch > > > Since "it" is an extra word in the fsck description, we should remove it.
[jira] [Created] (HDFS-12560) Remove the extra word "it" in HdfsUserGuide.md
fang zhenyi created HDFS-12560: -- Summary: Remove the extra word "it" in HdfsUserGuide.md Key: HDFS-12560 URL: https://issues.apache.org/jira/browse/HDFS-12560 Project: Hadoop HDFS Issue Type: Improvement Reporter: fang zhenyi Priority: Trivial Since "it" is an extra word in the fsck description, we should remove it.
[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183684#comment-16183684 ] Surendra Singh Lilhore commented on HDFS-12291: --- bq. Not sure if we worry about create/delete inside the base dir during traversal here. This was important for re-encryption, where a bunch of race tests are added in TestReencryption. For SPS, newly created files will use the current policy anyway, so they will be satisfied automatically. For deleted files, we need not care; while processing a deleted inode, the SPS daemon will simply ignore it. bq. Heads up: there's a recent bug/improvement HDFS-12518 for HDFS-10899, will likely create some conflicts here. Issue itself is mostly for re-encryption though. We will take care of this while doing the rebase... > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode > Reporter: Rakesh R > Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch, > HDFS-12291-HDFS-10285-06.patch, HDFS-12291-HDFS-10285-07.patch, > HDFS-12291-HDFS-10285-08.patch > > > For the given source path directory, SPS presently considers only the files > immediately under the directory (only one level of scanning) for satisfying > the policy. It WON'T do recursive directory scanning and schedule SPS > tasks to satisfy the storage policy of all the files down to the leaf nodes. > The idea of this jira is to discuss and implement an efficient recursive > directory iteration mechanism that satisfies the storage policy for all the files > under the given directory. 
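The traversal scheme discussed in the comment above can be sketched as a breadth-first walk over the namespace: directories are expanded one level at a time, files are queued for policy satisfaction, and inodes deleted mid-traversal are simply skipped, matching the "ignore deleted files in the SPS daemon" point. The map-based namespace and all names below are hypothetical stand-ins for illustration, not the actual SPS/namenode API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the recursive SPS directory walk discussed above.
public class SpsWalkSketch {
  // children: directory path -> immediate child paths; files have no entry.
  // deleted: inodes removed while the walk is in progress; per the comment,
  // the SPS daemon would simply ignore them.
  public static List<String> collectFiles(Map<String, List<String>> children,
                                          Set<String> deleted, String baseDir) {
    List<String> toSatisfy = new ArrayList<>();
    Deque<String> queue = new ArrayDeque<>();
    queue.add(baseDir);
    while (!queue.isEmpty()) {
      String path = queue.poll();
      if (deleted.contains(path)) {
        continue; // deleted mid-traversal: nothing to satisfy
      }
      List<String> kids = children.get(path);
      if (kids == null) {
        toSatisfy.add(path); // a file: schedule policy satisfaction
      } else {
        queue.addAll(kids);  // a directory: descend one more level
      }
    }
    return toSatisfy;
  }
}
```

Newly created files need no special handling in this scheme, since (as noted above) they pick up the current storage policy on creation.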
[jira] [Commented] (HDFS-12540) Ozone: node status text reported by SCM is a bit confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183664#comment-16183664 ] Hadoop QA commented on HDFS-12540: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 12s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s{color} | {color:red} The patch generated 3 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}152m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12540 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889407/HDFS-12540-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6552b1a8b7aa 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 7213e9a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21401/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21401/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/21401/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-11743) Revert the incompatible fsck reporting output in HDFS-7933 from branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183660#comment-16183660 ] Brahma Reddy Battula commented on HDFS-11743: - Oh, yes. [~xiaochen], thanks for finding this. Looks like it's unintentional. > Revert the incompatible fsck reporting output in HDFS-7933 from branch-2.7 > -- > > Key: HDFS-11743 > URL: https://issues.apache.org/jira/browse/HDFS-11743 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode > Reporter: Zhe Zhang > Assignee: Zhe Zhang > Priority: Blocker > Fix For: 2.7.4 > > Attachments: HDFS-11743-branch-2.7.00.patch, > HDFS-11743-branch-2.7.01.patch > >
[jira] [Commented] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183656#comment-16183656 ] Hadoop QA commented on HDFS-12497: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 42s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}163m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12497 | | JIRA Patch URL |
[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism
[ https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183641#comment-16183641 ] Xiaoyu Yao commented on HDFS-12387: --- [~anu], can you rebase the patch to the latest feature branch? Patch v3 does not apply any more. > Ozone: Support Ratis as a first class replication mechanism > --- > > Key: HDFS-12387 > URL: https://issues.apache.org/jira/browse/HDFS-12387 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone > Affects Versions: HDFS-7240 > Reporter: Anu Engineer > Assignee: Anu Engineer > Priority: Critical > Labels: ozoneMerge > Attachments: HDFS-12387-HDFS-7240.001.patch, > HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch > > > The Ozone container layer supports pluggable replication policies. This JIRA > brings Apache Ratis based replication to Ozone. Apache Ratis is a Java > implementation of the Raft protocol.
[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183623#comment-16183623 ] Xiao Chen commented on HDFS-12458: -- Thanks [~jojochuang] for review. Not to be pedantic, but could you give an explicit +1? I also confirm the failed tests are not related. > TestReencryptionWithKMS fails regularly > --- > > Key: HDFS-12458 > URL: https://issues.apache.org/jira/browse/HDFS-12458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, test >Affects Versions: 3.0.0-beta1 >Reporter: Konstantin Shvachko >Assignee: Xiao Chen > Labels: flaky-test > Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch, > HDFS-12458.03.patch > > > {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.
[jira] [Updated] (HDFS-12540) Ozone: node status text reported by SCM is a bit confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12540: --- Attachment: HDFS-12540-HDFS-7240.002.patch Resubmitting the patch; hopefully this will trigger the Jenkins job. > Ozone: node status text reported by SCM is a bit confusing > -- > > Key: HDFS-12540 > URL: https://issues.apache.org/jira/browse/HDFS-12540 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone > Reporter: Weiwei Yang > Assignee: Weiwei Yang > Priority: Trivial > Labels: ozoneMerge > Attachments: chillmode_status.png, HDFS-12540-HDFS-7240.001.patch, > HDFS-12540-HDFS-7240.002.patch, outchillmode_status.png > > > At present the SCM UI displays node status like the following > {noformat} > Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 > nodes have reported in. > {noformat} > This text is a bit confusing. The UI retrieves the status from > {{SCMNodeManager#getNodeStatus}}; the related call is {{#getChillModeStatus}}.
[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183573#comment-16183573 ] Weiwei Yang commented on HDFS-12454: Hi [~vagarychen] bq. is there a particular reason why you prefer IP address rather than hostname? does hostname work here at all for multi-node? No, both hostname and IP work for a multi-node setup. I didn't actually mean *IP address*; the property name is {{ozone.ksm.address}}, which implies the format is {{ip:port}} or {{hostname:port}}, just like the rest of the address properties in Hadoop configuration files, e.g. {noformat} dfs.datanode.address 0.0.0.0:9866 dfs.datanode.ipc.address 0.0.0.0:9867 {noformat} My suggestion was to make this property mandatory and let the user configure a host/IP plus port for KSM. > Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work > -- > > Key: HDFS-12454 > URL: https://issues.apache.org/jira/browse/HDFS-12454 > Project: Hadoop HDFS > Issue Type: Sub-task > Reporter: Chen Liang > Assignee: Chen Liang > Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12454-HDFS-7240.001.patch, > HDFS-12454-HDFS-7240.002.patch, HDFS-12454-HDFS-7240.003.patch, > HDFS-12454-HDFS-7240.004.patch, HDFS-12454-HDFS-7240.005.patch > > > In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there > are a few issues with it. > 1. > {code} > > ozone.scm.block.client.address > scm.hadoop.apache.org > > > ozone.ksm.address > ksm.hadoop.apache.org > > {code} > The value should be an address instead. > 2. > {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires > {{ozone.scm.client.address}} to be set, which is missing from this sample > file. Missing this config seems to cause a failure when starting the datanode. > 3. 
> {code} > > ozone.scm.names > scm.hadoop.apache.org > > {code} > This value did not make much sense to me; I found the comment in > {{ScmConfigKeys}} that says > {code} > // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT. > // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7: > {code} > So maybe we should write something like scm1 as the value here. > 4. I'm not entirely sure about this, but > [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says > {code} > > ozone.handler.type > local > > {code} > is also part of the minimum setting, do we need to add this, [~anu]?
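Pulling the points above together, a minimal ozone-site.xml along these lines might work; this is a sketch, not a verified sample: the hostnames and port numbers are placeholders I have introduced for illustration, and {{ozone.scm.client.address}} is included per point 2.

```xml
<configuration>
  <!-- host:port form, as suggested above; hosts and ports are placeholders -->
  <property>
    <name>ozone.ksm.address</name>
    <value>ksm.example.com:9862</value>
  </property>
  <property>
    <name>ozone.scm.block.client.address</name>
    <value>scm.example.com:9863</value>
  </property>
  <!-- missing from the original sample; required by ObjectStoreHandler (point 2) -->
  <property>
    <name>ozone.scm.client.address</name>
    <value>scm.example.com:9860</value>
  </property>
  <!-- comma-separated DNS | DNS:PORT | IP | IP:PORT, per the ScmConfigKeys comment -->
  <property>
    <name>ozone.scm.names</name>
    <value>scm.example.com</value>
  </property>
</configuration>
```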
[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183567#comment-16183567 ] Wei-Chiu Chuang commented on HDFS-12458: Patch 03 LGTM. It is a test-only patch and therefore the failed tests are unrelated. > TestReencryptionWithKMS fails regularly > --- > > Key: HDFS-12458 > URL: https://issues.apache.org/jira/browse/HDFS-12458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, test >Affects Versions: 3.0.0-beta1 >Reporter: Konstantin Shvachko >Assignee: Xiao Chen > Labels: flaky-test > Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch, > HDFS-12458.03.patch > > > {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it.
[jira] [Commented] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183560#comment-16183560 ] SammiChen commented on HDFS-12497: -- Thanks Huafeng for taking over the task! > Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Labels: flaky-test, hdfs-ec-3.0-must-do > Attachments: HDFS-12497.001.patch > > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Updated] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-12497: Status: Patch Available (was: Open) > Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Labels: flaky-test, hdfs-ec-3.0-must-do > Attachments: HDFS-12497.001.patch > > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Assigned] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang reassigned HDFS-12497: --- Assignee: Huafeng Wang (was: SammiChen) > Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Labels: flaky-test, hdfs-ec-3.0-must-do > Attachments: HDFS-12497.001.patch > > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Updated] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests
[ https://issues.apache.org/jira/browse/HDFS-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-12497: Attachment: HDFS-12497.001.patch > Re-enable TestDFSStripedOutputStreamWithFailure tests > - > > Key: HDFS-12497 > URL: https://issues.apache.org/jira/browse/HDFS-12497 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Labels: flaky-test, hdfs-ec-3.0-must-do > Attachments: HDFS-12497.001.patch > > > We disabled this suite of tests in HDFS-12417 since they were very flaky. We > should fix these tests and re-enable them.
[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183529#comment-16183529 ] Hadoop QA commented on HDFS-12543: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 29s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 17s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.ozone.container.common.TestDatanodeStateMachine | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12543 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889383/HDFS-12543-HDFS-7240.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 91578cf33e2a 3.13.0-117-generic #164-Ubuntu SMP
[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183522#comment-16183522 ] Hadoop QA commented on HDFS-12396: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 22s{color} | {color:orange} root: The patch generated 24 new + 194 unchanged - 2 fixed = 218 total (was 196) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 12s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 43s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}235m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12396 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889365/HDFS-12396.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 14fba1ab080d 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git
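The -0 checkstyle vote in the report above follows Yetus's delta accounting ("24 new + 194 unchanged - 2 fixed = 218 total (was 196)"): the previous total is the surviving warnings plus the ones the patch fixed, and the new total is the surviving warnings plus the ones the patch introduced. A minimal sketch of that bookkeeping (the function name and fields are illustrative, not part of any Yetus API):

```python
def check_delta(new, unchanged, fixed, total, was):
    """Return True iff a Yetus-style lint delta line is internally consistent."""
    # The old total consisted of warnings that survived (unchanged)
    # plus the warnings this patch fixed.
    old_consistent = (was == unchanged + fixed)
    # The new total is the surviving warnings plus the ones the patch introduced.
    new_consistent = (total == unchanged + new)
    return old_consistent and new_consistent

# The checkstyle line from the HDFS-12396 report:
print(check_delta(new=24, unchanged=194, fixed=2, total=218, was=196))  # True
```

The same check holds for the javac delta in the HDFS-12552 report below ("2 new + 392 unchanged - 19 fixed = 394 total (was 411)").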
[jira] [Commented] (HDFS-12552) Use slf4j instead of log4j in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183512#comment-16183512 ] Hadoop QA commented on HDFS-12552: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 9 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 47s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 2 new + 392 unchanged - 19 fixed = 394 total (was 411) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 42s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}138m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12552 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889382/HDFS-12552.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a7c284733972 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87db8d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21398/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21398/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21398/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
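The motivation for HDFS-12552's log4j→slf4j migration is that slf4j's parameterized messages (`log.debug("block {} replicated", blk)`) defer string construction until the level is known to be enabled, so disabled log calls stay cheap. Python's stdlib `logging` uses the same deferred-formatting idea, which makes for a compact, runnable analogy (the logger name below is illustrative only):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fsnamesystem-analogy")

# As with slf4j placeholders, the %s arguments are only interpolated if the
# record's level is enabled, so this DEBUG call never pays the formatting cost
# when the effective level is INFO.
log.debug("expensive detail: %s", "x" * 10_000)  # suppressed at INFO level
log.info("namesystem started in %d ms", 42)       # emitted
```

Guarding with `log.isEnabledFor(logging.DEBUG)` is only needed when building the *arguments* themselves is expensive, which mirrors slf4j's `isDebugEnabled()` guidance.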
[jira] [Commented] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
[ https://issues.apache.org/jira/browse/HDFS-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183472#comment-16183472 ] Hadoop QA commented on HDFS-12453: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 15 unchanged - 2 fixed = 15 total (was 17) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}145m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12453 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889368/HDFS-12453.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 59a3e9282f05 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87db8d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21397/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21397/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21397/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |
[jira] [Commented] (HDFS-11743) Revert the incompatible fsck reporting output in HDFS-7933 from branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183464#comment-16183464 ] Xiao Chen commented on HDFS-11743: -- Hi [~zhz], Was reverting {{TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt}} intentional? It looks unrelated to HDFS-7933 and fairly self-contained within HDFS-11445. I just tried to run it on latest branch-2.7, and it still passes. > Revert the incompatible fsck reporting output in HDFS-7933 from branch-2.7 > -- > > Key: HDFS-11743 > URL: https://issues.apache.org/jira/browse/HDFS-11743 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Zhe Zhang >Assignee: Zhe Zhang >Priority: Blocker > Fix For: 2.7.4 > > Attachments: HDFS-11743-branch-2.7.00.patch, > HDFS-11743-branch-2.7.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183446#comment-16183446 ] Hadoop QA commented on HDFS-12411: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 10 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 53s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 11s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 46s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 2s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}197m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.cblock.TestBufferManager | | | hadoop.ozone.scm.node.TestNodeManager | | | hadoop.ozone.container.common.TestDatanodeStateMachine | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12411 | | JIRA Patch URL |
[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183440#comment-16183440 ] Hadoop QA commented on HDFS-12458: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 38s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 59s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}171m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12458 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889357/HDFS-12458.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 505df681da8e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87db8d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21395/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21395/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183409#comment-16183409 ] Hadoop QA commented on HDFS-12553: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 70 new + 300 unchanged - 12 fixed = 370 total (was 312) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 42s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}170m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12553 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889353/HDFS-12553.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 8da10ba49e8b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87db8d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21392/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | |
[jira] [Commented] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183408#comment-16183408 ] Hadoop QA commented on HDFS-12554: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 8s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 8s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s{color} | {color:red} The patch generated 3 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}155m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12554 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889355/HDFS-12554-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6c353f77ea5c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 056a978 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21394/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21394/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/21394/artifact/patchprocess/patch-asflicense-problems.txt | | modules |
[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12458: - Labels: flaky-test (was: ) > TestReencryptionWithKMS fails regularly > --- > > Key: HDFS-12458 > URL: https://issues.apache.org/jira/browse/HDFS-12458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, test >Affects Versions: 3.0.0-beta1 >Reporter: Konstantin Shvachko >Assignee: Xiao Chen > Labels: flaky-test > Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch, > HDFS-12458.03.patch > > > {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12554: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks [~ajakumar] for the contribution and all for the discussions. I've committed the fix to the feature branch. > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183384#comment-16183384 ] Xiaoyu Yao commented on HDFS-12554: --- I can't repro the failures in hadoop.ozone.scm.node.TestQueryNode in my local setup with the patch. I will commit the patch shortly. > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
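The behavior change described in HDFS-12554 is that an empty ozone.scm.datanode.id no longer prevents the datanode from starting. A stdlib-only sketch of that kind of lenient config handling; the method and default path below are hypothetical illustrations, not the actual Ozone code:

```java
public class DatanodeIdConfig {
    // Hypothetical default; the real config key is ozone.scm.datanode.id.
    static final String DEFAULT_ID_PATH = "/tmp/datanode.id";

    // Old behavior: an empty value was treated as fatal.
    // New behavior sketched here: fall back to a default instead of failing.
    static String resolveIdPath(String configured) {
        if (configured == null || configured.trim().isEmpty()) {
            return DEFAULT_ID_PATH;
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(resolveIdPath(""));          // empty -> default
        System.out.println(resolveIdPath("/data/id"));  // explicit value kept
    }
}
```

Under this reading, the unit test in the patch only needs to assert that an empty value starts up with the fallback rather than throwing.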
[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12543: -- Attachment: HDFS-12543-HDFS-7240.005.patch Posted v005 patch to resolve findbugs and javadoc warnings. > Ozone : allow create key without specifying size > > > Key: HDFS-12543 > URL: https://issues.apache.org/jira/browse/HDFS-12543 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Labels: ozoneMerge > Attachments: HDFS-12543-HDFS-7240.001.patch, > HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, > HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch > > > Currently when creating a key, it is required to specify the total size of > the key. This makes it inconvenient for the case where a key is created and > data keeps coming and being appended. This JIRA is to remove the requirement of > specifying the size on key creation, and to allow appending to the key > indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
[ https://issues.apache.org/jira/browse/HDFS-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183378#comment-16183378 ] Xiao Chen commented on HDFS-12453: -- Thanks [~eddyxu] for the fix and the chats offline. Change LGTM. Nit: I think we could use {{MultipleIOException}} instead of {{throw new IOException(exceptions.get(0).getCause());}} +1 from me pending that and precommits. > TestDataNodeHotSwapVolumes fails in trunk Jenkins runs > -- > > Key: HDFS-12453 > URL: https://issues.apache.org/jira/browse/HDFS-12453 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Arpit Agarwal >Assignee: Lei (Eddy) Xu >Priority: Critical > Labels: flaky-test > Attachments: HDFS-12453.00.patch, TestLogs.txt > > > TestDataNodeHotSwapVolumes fails occasionally with the following error (see > comment). Ran it ~10 times locally and it passed every time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
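The review nit above suggests Hadoop's {{MultipleIOException}} helper, which wraps a whole list of IOExceptions, instead of rethrowing only {{exceptions.get(0).getCause()}} and losing the rest. Since the Hadoop jars are not assumed here, this is a stdlib-only analog of the same aggregation idea using suppressed exceptions; the class name, message, and paths are hypothetical:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class AggregateErrors {
    // Stdlib-only analog of Hadoop's MultipleIOException: surface every
    // failure instead of only the first exception's cause.
    static IOException aggregate(List<IOException> exceptions) {
        if (exceptions.isEmpty()) {
            return null;
        }
        if (exceptions.size() == 1) {
            return exceptions.get(0);
        }
        IOException combined =
            new IOException(exceptions.size() + " exceptions during volume removal");
        for (IOException e : exceptions) {
            combined.addSuppressed(e);  // preserves each individual stack trace
        }
        return combined;
    }

    public static void main(String[] args) {
        List<IOException> errs = new ArrayList<>();
        errs.add(new IOException("failed to remove /data1"));
        errs.add(new IOException("failed to remove /data2"));
        IOException combined = aggregate(errs);
        System.out.println(combined.getMessage());
        System.out.println(combined.getSuppressed().length);
    }
}
```

The design point is the same as the reviewer's: when several volume removals fail, the caller should see all of the failures, not just the first.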
[jira] [Updated] (HDFS-12552) Use slf4j instead of log4j in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12552: -- Attachment: HDFS-12552.02.patch fixed checkstyle issue in patch v2. > Use slf4j instead of log4j in FSNamesystem > -- > > Key: HDFS-12552 > URL: https://issues.apache.org/jira/browse/HDFS-12552 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12552.01.patch, HDFS-12552.02.patch > > > FileNamesystem is still using log4j dependencies. We should move those to > slf4j, as most of the methods using log4j are deprecated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183373#comment-16183373 ] Hadoop QA commented on HDFS-12554: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 44s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 29s{color} | {color:red} The patch generated 3 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}157m 3s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.ozone.scm.node.TestQueryNode | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12554 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889343/HDFS-12554-HDFS-7240.001.patch | | Optional Tests | asflicense mvnsite compile javac javadoc mvninstall unit shadedclient findbugs checkstyle | | uname | Linux caa989c407ff 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 056a978 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21390/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21390/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/21390/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | |
[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183368#comment-16183368 ] Hadoop QA commented on HDFS-12543: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 14s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 53s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 14s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}174m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Invocation of toString on objectKey in org.apache.hadoop.ozone.ksm.KeyManagerImpl.openKey(KsmKeyArgs) At KeyManagerImpl.java:in org.apache.hadoop.ozone.ksm.KeyManagerImpl.openKey(KsmKeyArgs) At KeyManagerImpl.java:[line 219] | | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | |
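The new findbugs item above ("Invocation of toString on objectKey ... At KeyManagerImpl.java:[line 219]") matches the message format findbugs uses when toString() is invoked on an array, which prints a type-and-hash token rather than the contents. Without the patch source at hand this is an assumption about the warning; a minimal illustration of the pattern and the usual fix, with a hypothetical stand-in value:

```java
import java.util.Arrays;

public class ArrayToStringWarning {
    public static void main(String[] args) {
        // Hypothetical stand-in for the flagged objectKey value.
        byte[] objectKey = {1, 2, 3};
        // findbugs flags objectKey.toString(): on an array it yields
        // something like "[B@1b6d3586", not the array contents.
        String bad = objectKey.toString();
        // The usual fix: format the contents explicitly.
        String good = Arrays.toString(objectKey);
        System.out.println(good);
        System.out.println(bad.startsWith("[B@"));
    }
}
```

If objectKey were instead already a String, the analogous findbugs complaint would be a redundant toString() call, and the fix is simply to drop it.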
[jira] [Commented] (HDFS-12511) Ozone: Add tags to config
[ https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183360#comment-16183360 ] Hadoop QA commented on HDFS-12511: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 48s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 28s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 34s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 12s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 31s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 3s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 53s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}208m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.conf.TestCommonConfigurationFields | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.ozone.container.common.TestDatanodeStateMachine | | | hadoop.hdfs.TestLocalDFS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12511 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889335/HDFS-12511-HDFS-7240.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 08e068fefeb3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64
[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183355#comment-16183355 ] Hadoop QA commented on HDFS-12543: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 26s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 28s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 46s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Invocation of toString on objectKey in org.apache.hadoop.ozone.ksm.KeyManagerImpl.openKey(KsmKeyArgs) At KeyManagerImpl.java:in org.apache.hadoop.ozone.ksm.KeyManagerImpl.openKey(KsmKeyArgs) At KeyManagerImpl.java:[line 219] | | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.ozone.scm.TestContainerSQLCli | | | hadoop.ozone.container.common.TestDatanodeStateMachine | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | |
[jira] [Commented] (HDFS-12552) Use slf4j instead of log4j in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183347#comment-16183347 ] Chen Liang commented on HDFS-12552: --- [~ajayydv] sounds good to me, then could you please fix the checkstyle issue? +1 with it fixed. > Use slf4j instead of log4j in FSNamesystem > -- > > Key: HDFS-12552 > URL: https://issues.apache.org/jira/browse/HDFS-12552 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12552.01.patch > > > FileNamesystem is still using log4j dependencies. We should move those to > slf4j, as most of the methods using log4j are deprecated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12552) Use slf4j instead of log4j in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183334#comment-16183334 ] Ajay Kumar commented on HDFS-12552: --- [~vagarychen], thanks for the review. There are a few classes, along with {{LeaseManager}}, that are still on log4j. I am planning to fix them separately (for tracking purposes). > Use slf4j instead of log4j in FSNamesystem > -- > > Key: HDFS-12552 > URL: https://issues.apache.org/jira/browse/HDFS-12552 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12552.01.patch > > > FileNamesystem is still using log4j dependencies. We should move those to > slf4j, as most of the methods using log4j are deprecated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12552) Use slf4j instead of log4j in FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183319#comment-16183319 ] Chen Liang commented on HDFS-12552: --- Thanks [~ajayydv] for taking care of this! v01 patch LGTM overall. Seems {{LeaseManager}} is not using slf4j either. Do you mind fixing this as well? > Use slf4j instead of log4j in FSNamesystem > -- > > Key: HDFS-12552 > URL: https://issues.apache.org/jira/browse/HDFS-12552 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12552.01.patch > > > FileNamesystem is still using log4j dependencies. We should move those to > slf4j, as most of the methods using log4j are deprecated. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
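A main motivation for the log4j-to-slf4j migration discussed in this thread is parameterized messages: log4j-style {{LOG.debug("x: " + y)}} builds the string even when debug is disabled, while slf4j's {{LOG.debug("x: {}", y)}} defers formatting until the level is enabled. Since slf4j itself is not on the classpath here, this is a dependency-free sketch of the {} substitution slf4j performs internally (its real implementation lives in org.slf4j.helpers.MessageFormatter); the log message is hypothetical:

```java
public class PlaceholderDemo {
    // Minimal stand-in for slf4j's "{}" placeholder substitution.
    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            int brace = pattern.indexOf("{}", i);
            if (brace < 0 || argIndex >= args.length) {
                out.append(pattern.substring(i));  // no more placeholders/args
                break;
            }
            // Copy the text before the placeholder, then the argument.
            out.append(pattern, i, brace).append(args[argIndex++]);
            i = brace + 2;  // skip past "{}"
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // log4j style forces concatenation up front:
        //   LOG.debug("Removing lease for inode " + id);
        // slf4j style defers it until the level is enabled:
        //   LOG.debug("Removing lease for inode {}", id);
        System.out.println(format("Removing lease for inode {}", 16385));
    }
}
```

This is also why the migration lets callers drop the {{isDebugEnabled()}} guard clauses around most log statements.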
[jira] [Updated] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
[ https://issues.apache.org/jira/browse/HDFS-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12453: - Status: Patch Available (was: Open) > TestDataNodeHotSwapVolumes fails in trunk Jenkins runs > -- > > Key: HDFS-12453 > URL: https://issues.apache.org/jira/browse/HDFS-12453 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Arpit Agarwal >Assignee: Lei (Eddy) Xu >Priority: Critical > Labels: flaky-test > Attachments: HDFS-12453.00.patch, TestLogs.txt > > > TestDataNodeHotSwapVolumes fails occasionally with the following error (see > comment). Ran it ~10 times locally and it passed every time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
[ https://issues.apache.org/jira/browse/HDFS-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12453: - Attachment: HDFS-12453.00.patch Changed the {{stream}} thread to block on a barrier, and removed the volumes where the block is located, to improve the determinism of the test. Ran the tests as follows; they passed with this patch and failed on trunk. {code} for i in `seq 50`; do mvn test -Dtest=TestDataNodeHotSwapVolumes; done {code} > TestDataNodeHotSwapVolumes fails in trunk Jenkins runs > -- > > Key: HDFS-12453 > URL: https://issues.apache.org/jira/browse/HDFS-12453 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Arpit Agarwal >Assignee: Lei (Eddy) Xu >Priority: Critical > Labels: flaky-test > Attachments: HDFS-12453.00.patch, TestLogs.txt > > > TestDataNodeHotSwapVolumes fails occasionally with the following error (see > comment). Ran it ~10 times locally and it passed every time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
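[Editor's note] The barrier technique described in the comment above can be sketched with a pure-JDK toy (hypothetical names; the real change lives in TestDataNodeHotSwapVolumes): the racy thread parks on a {{CyclicBarrier}} until the main thread has finished the simulated volume removal, which forces one fixed interleaving every run.

```java
import java.util.concurrent.CyclicBarrier;

// Toy sketch of de-flaking a racy test: the "stream" thread blocks on a
// CyclicBarrier until the main thread has done the simulated volume
// removal, so the ordering of the two actions is always the same.
public class BarrierSketch {
    static final StringBuilder order = new StringBuilder();

    public static void main(String[] args) throws Exception {
        CyclicBarrier barrier = new CyclicBarrier(2);

        Thread streamThread = new Thread(() -> {
            try {
                barrier.await();          // block until removal has happened
                order.append("write");    // only then resume "writing"
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        streamThread.start();

        order.append("remove;");          // simulate removeVolumes()
        barrier.await();                  // release the stream thread
        streamThread.join();
        System.out.println(order);        // always prints: remove;write
    }
}
```

{{CyclicBarrier.await()}} establishes a happens-before edge, so the stream thread is guaranteed to see the removal completed before it proceeds.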
[jira] [Commented] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183273#comment-16183273 ] Xiaoyu Yao commented on HDFS-12554: --- Thanks [~ajakumar] for working on this. Patch LGTM, +1 pending Jenkins. > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12396: -- Status: Patch Available (was: Open) > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HDFS-12396.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.
[ https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12396: -- Attachment: HDFS-12396.001.patch Attached an initial draft of the patch. Have not yet run the full test suite or addressed checkstyle issues. > Webhdfs file system should get delegation token from kms provider. > -- > > Key: HDFS-12396 > URL: https://issues.apache.org/jira/browse/HDFS-12396 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HDFS-12396.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12268) Ozone: Add metrics for pending storage container requests
[ https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12268: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Ozone: Add metrics for pending storage container requests > - > > Key: HDFS-12268 > URL: https://issues.apache.org/jira/browse/HDFS-12268 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12268-HDFS-7240.001.patch, > HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, > HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch, > HDFS-12268-HDFS-7240.006.patch, HDFS-12268-HDFS-7240.007.patch, > HDFS-12268-HDFS-7240.008.patch, HDFS-12268-HDFS-7240.009.patch > > > Since the storage container async interface was added in HDFS-11580, we > need to keep an eye on the queue depth of pending container requests. It can > help us better find out if there are performance problems. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests
[ https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183240#comment-16183240 ] Chen Liang commented on HDFS-12268: --- v002 patch LGTM, also did some simple checks on the local node setup, seems all good. Will commit shortly. > Ozone: Add metrics for pending storage container requests > - > > Key: HDFS-12268 > URL: https://issues.apache.org/jira/browse/HDFS-12268 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12268-HDFS-7240.001.patch, > HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, > HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch, > HDFS-12268-HDFS-7240.006.patch, HDFS-12268-HDFS-7240.007.patch, > HDFS-12268-HDFS-7240.008.patch, HDFS-12268-HDFS-7240.009.patch > > > Since the storage container async interface was added in HDFS-11580, we > need to keep an eye on the queue depth of pending container requests. It can > help us better find out if there are performance problems. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
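[Editor's note] The queue-depth metric this issue adds can be sketched as a simple pending-request gauge. This is a pure-JDK toy with made-up names, not the actual HDFS-12268 code; the real patch presumably wires such a counter into Hadoop's metrics system.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy pending-request gauge (hypothetical names): increment when an async
// container request is submitted, decrement when its response arrives.
// The current value is the queue depth a metrics system would report.
public class PendingOpsGauge {
    private final AtomicInteger pendingOps = new AtomicInteger();

    int submit()   { return pendingOps.incrementAndGet(); }
    int complete() { return pendingOps.decrementAndGet(); }
    int depth()    { return pendingOps.get(); }

    public static void main(String[] args) {
        PendingOpsGauge gauge = new PendingOpsGauge();
        gauge.submit();
        gauge.submit();      // two container requests in flight
        gauge.complete();    // one response came back
        System.out.println("pending = " + gauge.depth());  // pending = 1
    }
}
```

A steadily growing depth signals that requests are being submitted faster than the container layer can answer them, which is exactly the performance problem the metric is meant to surface.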
[jira] [Updated] (HDFS-12559) Ozone: TestContainerPersistence#testListContainer sometimes timeout
[ https://issues.apache.org/jira/browse/HDFS-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12559: -- Description: This test creates 1000 containers and reads them back 5 containers at a time and verifies that we did get back all containers. On my laptop, it takes 11s to finish but on some slow Jenkins machine this could take a longer time. Currently the whole test suite {{TestContainerPersistence}} has a timeout rule of 5 min. Need to understand why RocksDB open is taking such a long time as shown in the stack below. {code} java.lang.Exception: test timed out after 30 milliseconds at org.rocksdb.RocksDB.open(Native Method) at org.rocksdb.RocksDB.open(RocksDB.java:231) at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:64) at org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:94) at org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.createMetadata(ContainerUtils.java:254) at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.writeContainerInfo(ContainerManagerImpl.java:396) at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:329) at org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testListContainer(TestContainerPersistence.java:341) {code} was: his test creates 1000 containers and reads them back 5 containers at a time and verifies that we did get back all containers. On my laptop, it takes 11s to finish but on some slow Jenkins machine this could take longer time. Current the whole test suite {{TestContainerPersistence}} has a timeout rule of 5 min. Need to understand why RocksDB open is taking such a long time as shown in the stack below. 
{code} java.lang.Exception: test timed out after 30 milliseconds at org.rocksdb.RocksDB.open(Native Method) at org.rocksdb.RocksDB.open(RocksDB.java:231) at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:64) at org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:94) at org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.createMetadata(ContainerUtils.java:254) at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.writeContainerInfo(ContainerManagerImpl.java:396) at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:329) at org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testListContainer(TestContainerPersistence.java:341) {code} > Ozone: TestContainerPersistence#testListContainer sometimes timeout > --- > > Key: HDFS-12559 > URL: https://issues.apache.org/jira/browse/HDFS-12559 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > > This test creates 1000 containers and reads them back 5 containers at a time > and verifies that we did get back all containers. On my laptop, it takes 11s > to finish but on some slow Jenkins machine this could take a longer time. > Currently the whole test suite {{TestContainerPersistence}} has a timeout rule > of 5 min. Need to understand why RocksDB open is taking such a long time as > shown in the stack below. 
> {code} > java.lang.Exception: test timed out after 30 milliseconds > at org.rocksdb.RocksDB.open(Native Method) > at org.rocksdb.RocksDB.open(RocksDB.java:231) > at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:64) > at > org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:94) > at > org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.createMetadata(ContainerUtils.java:254) > at > org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.writeContainerInfo(ContainerManagerImpl.java:396) > at > org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:329) > at > org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testListContainer(TestContainerPersistence.java:341) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183211#comment-16183211 ] Xiao Chen commented on HDFS-12458: -- Thanks for the review [~jojochuang]. Correct, strictly speaking waitClusterUp should happen after waitActive. (Or no waitActive is needed at all, assuming safemode requires all DN up). Updated the patch to call waitClusterUp after waitActive to remove confusion. bq. why did you replace waitActive with waitClusterUp, rather than keeping waitActive? Because this is already done as part of the minicluster's restartNameNodes method. {code:title=MiniDFSCluster#restartNameNodes} public synchronized void restartNameNodes() throws IOException { for (int i = 0; i < namenodes.size(); i++) { restartNameNode(i, false); } waitActive(); } {code} > TestReencryptionWithKMS fails regularly > --- > > Key: HDFS-12458 > URL: https://issues.apache.org/jira/browse/HDFS-12458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, test >Affects Versions: 3.0.0-beta1 >Reporter: Konstantin Shvachko >Assignee: Xiao Chen > Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch, > HDFS-12458.03.patch > > > {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12458) TestReencryptionWithKMS fails regularly
[ https://issues.apache.org/jira/browse/HDFS-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12458: - Attachment: HDFS-12458.03.patch > TestReencryptionWithKMS fails regularly > --- > > Key: HDFS-12458 > URL: https://issues.apache.org/jira/browse/HDFS-12458 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption, test >Affects Versions: 3.0.0-beta1 >Reporter: Konstantin Shvachko >Assignee: Xiao Chen > Attachments: HDFS-12458.01.patch, HDFS-12458.02.patch, > HDFS-12458.03.patch > > > {{TestReencryptionWithKMS}} fails pretty often on Jenkins. Should fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12559) Ozone: TestContainerPersistence#testListContainer sometimes timeout
Xiaoyu Yao created HDFS-12559: - Summary: Ozone: TestContainerPersistence#testListContainer sometimes timeout Key: HDFS-12559 URL: https://issues.apache.org/jira/browse/HDFS-12559 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Xiaoyu Yao Assignee: Ajay Kumar This test creates 1000 containers and reads them back 5 containers at a time and verifies that we did get back all containers. On my laptop, it takes 11s to finish but on some slow Jenkins machine this could take a longer time. Currently the whole test suite {{TestContainerPersistence}} has a timeout rule of 5 min. Need to understand why RocksDB open is taking such a long time as shown in the stack below. {code} java.lang.Exception: test timed out after 30 milliseconds at org.rocksdb.RocksDB.open(Native Method) at org.rocksdb.RocksDB.open(RocksDB.java:231) at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:64) at org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:94) at org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.createMetadata(ContainerUtils.java:254) at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.writeContainerInfo(ContainerManagerImpl.java:396) at org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:329) at org.apache.hadoop.ozone.container.common.impl.TestContainerPersistence.testListContainer(TestContainerPersistence.java:341) {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12554: -- Status: Patch Available (was: Open) [~vagarychen],[~xyao] thanks for input. Submitting patch for #2. > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12554: -- Attachment: HDFS-12554-HDFS-7240.001.patch > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12554: -- Attachment: (was: HDFS-12554-HDFS-7240.001.patch) > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Attachment: HDFS-12411-HDFS-7240.004.patch More Jenkins fixes. > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch > > > Currently the DN ReportState for containers only has a counter; we will need to > include individual container usage information so that SCM can > * close containers when they are full > * assign containers for the block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183186#comment-16183186 ] Bharat Viswanadham commented on HDFS-12553: --- Attached the v03 patch. Fixed the unit test failures. > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch > > > Add nameServiceId to QJournalProtocol. > This is used in a federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12553: -- Attachment: (was: HDFS-12553.03.patch) > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch > > > Add nameServiceId to QJournalProtocol. > This is used in a federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12553: -- Attachment: HDFS-12553.03.patch > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch > > > Add nameServiceId to QJournalProtocol. > This is used in a federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12553: -- Attachment: HDFS-12553.03.patch > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch > > > Add nameServiceId to QJournalProtocol. > This is used in a federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12554: -- Status: Open (was: Patch Available) > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12543: -- Attachment: HDFS-12543-HDFS-7240.004.patch Posted v004 patch with some minor improvements over the v003 patch. > Ozone : allow create key without specifying size > > > Key: HDFS-12543 > URL: https://issues.apache.org/jira/browse/HDFS-12543 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Labels: ozoneMerge > Attachments: HDFS-12543-HDFS-7240.001.patch, > HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, > HDFS-12543-HDFS-7240.004.patch > > > Currently when creating a key, it is required to specify the total size of > the key. This makes it inconvenient for the case where a key is created and > data keeps coming and being appended. This JIRA is to remove the requirement of > specifying the size on key creation, and allow appending to the key > indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
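[Editor's note] The create-without-size semantics described in this issue can be illustrated with a plain-JDK stand-in. This is a toy, not the Ozone client API: the key's size is simply whatever has been appended so far, with nothing declared at creation time.

```java
import java.io.ByteArrayOutputStream;

// Toy stand-in for a key created without a declared size: data is
// appended as it arrives and the size is whatever has accumulated.
// Hypothetical names; NOT the actual Ozone client API.
public class GrowingKeySketch {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    void append(byte[] chunk) {
        buf.write(chunk, 0, chunk.length);  // key grows with each chunk
    }

    int size() { return buf.size(); }

    public static void main(String[] args) {
        GrowingKeySketch key = new GrowingKeySketch();   // no size up front
        key.append("hello ".getBytes());
        key.append("world".getBytes());
        System.out.println("key size = " + key.size());  // key size = 11
    }
}
```

This is the usage pattern the change enables: callers stream chunks as they arrive instead of pre-computing a total length before creating the key.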
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12554: -- Status: Patch Available (was: Open) > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183132#comment-16183132 ] Hadoop QA commented on HDFS-12411: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 10 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 50s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 56s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 7s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 3 unchanged - 0 fixed = 7 total (was 3) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 24s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}211m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.hdfs.TestPread | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.ozone.scm.node.TestNodeManager | | | hadoop.ozone.container.replication.TestContainerReplicationManager | | | hadoop.ozone.container.common.TestDatanodeStateMachine | | Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\
[jira] [Updated] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12554: -- Attachment: HDFS-12554-HDFS-7240.001.patch > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > Attachments: HDFS-12554-HDFS-7240.001.patch > > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183126#comment-16183126 ] Xiaoyu Yao commented on HDFS-12554: --- Thanks [~vagarychen] for looking into this. #2 sounds good to me. > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. > {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size
[ https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-12543: -- Attachment: HDFS-12543-HDFS-7240.003.patch Rebase with v003 patch. > Ozone : allow create key without specifying size > > > Key: HDFS-12543 > URL: https://issues.apache.org/jira/browse/HDFS-12543 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang > Labels: ozoneMerge > Attachments: HDFS-12543-HDFS-7240.001.patch, > HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch > > > Currently when creating a key, it is required to specify the total size of > the key. This makes it inconvenient for the case where a key is created and > data keeps coming and being appended. This JIRA is to remove the requirement > of specifying the size on key creation, and to allow appending to the key > indefinitely. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12554) Ozone: Fix TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration
[ https://issues.apache.org/jira/browse/HDFS-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183108#comment-16183108 ] Chen Liang commented on HDFS-12554: --- Looks like the original test expects the datanode to shut down if datanode.id is set to the empty string "". In HDFS-12454 this was changed so that whether datanode.id is unset or set to the empty string, the default value is picked up. More specifically, in {{OzoneUtils#getDatanodeIDPath}}, the check {{Strings.isNullOrEmpty(dataNodeIDPath)}} treats null and the empty string the same way. To fix this, either: 1. pick up the default when datanode.id is set to the empty string; in this case we can remove from the test the line Xiaoyu mentioned in the description, or 2. do not pick up the default if datanode.id is set to the empty string; in this case we can change the {{Strings.isNullOrEmpty(dataNodeIDPath)}} check to {{dataNodeIDPath == null}}, so that the default is used only when datanode.id is not set at all. If datanode.id is set to the empty string, {{OzoneUtils#getDatanodeIDPath}} will return the empty string to the caller and the datanode state will transition to shutdown later. I guess #2 makes more sense. > Ozone: Fix > TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration > > > Key: HDFS-12554 > URL: https://issues.apache.org/jira/browse/HDFS-12554 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Ajay Kumar > Labels: ozoneMerge > > {{hadoop.ozone.container.common.TestDatanodeStateMachine#testDatanodeStateMachineWithInvalidConfiguration}} > failure is related to this patch. > Invalid ozone.scm.datanode.id like below in the failed test used to prevent > datanode from running and now it is allowed. Please update the unit test and > the OzoneGetStarted.md file correspondingly. 
> {code} > confList.add(Maps.immutableEntry( > ScmConfigKeys.OZONE_SCM_DATANODE_ID, "")); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
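The two options discussed above differ only in where the null/empty distinction is drawn. A minimal sketch of option #2, with a hypothetical stand-in for {{OzoneUtils#getDatanodeIDPath}} (the default path and method names are illustrative, not the real Ozone API):

```java
// Sketch of option #2 from the discussion above: treat an explicitly-empty
// ozone.scm.datanode.id differently from an unset one. Names and the default
// path are illustrative assumptions, not the real OzoneUtils code.
public class DatanodeIdPathSketch {
    static final String DEFAULT_ID_PATH = "/var/lib/ozone/datanode.id"; // assumed default

    // Current behaviour (after HDFS-12454): Strings.isNullOrEmpty-style check,
    // so both null and "" fall back to the default and the invalid-config test fails.
    static String getPathCurrent(String dataNodeIDPath) {
        return (dataNodeIDPath == null || dataNodeIDPath.isEmpty())
                ? DEFAULT_ID_PATH : dataNodeIDPath;
    }

    // Option #2: only null (key not set at all) falls back to the default; an
    // explicit "" is returned as-is, letting the datanode state machine
    // transition to shutdown on the invalid value.
    static String getPathProposed(String dataNodeIDPath) {
        return (dataNodeIDPath == null) ? DEFAULT_ID_PATH : dataNodeIDPath;
    }
}
```

With this change the test's `confList.add(Maps.immutableEntry(ScmConfigKeys.OZONE_SCM_DATANODE_ID, ""))` case would again drive the datanode to shutdown, as the original test expected.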
[jira] [Comment Edited] (HDFS-12399) Improve erasure coding codec framework adding more unit tests
[ https://issues.apache.org/jira/browse/HDFS-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183098#comment-16183098 ] Hanisha Koneru edited comment on HDFS-12399 at 9/27/17 7:09 PM: Thanks for the patch, [~Sammi]. A few comments: - The intent is that enable policy should definitely fail for prohibited policies, right? If so, then the code block below would not assert that an exception is thrown while enabling a prohibited policy. This only checks the exception message, if at all an exception is thrown. {code} // PROHIBITED policy cannot be enabled or disabled try { fs.enableErasureCodingPolicy(newPolicy.getName()); } catch (IOException e) { assertExceptionContains("because its codec is not supported", e); } {code} There should be another assert right after the enable call. {code} try { fs.enableErasureCodingPolicy(newPolicy.getName()); assertFalse("Enabling prohibited erasure coding should fail", true); } catch (IOException e) { {code} - In _CodecRegistry#removeCodec()_, {{coderNameCompactMap}} should also be cleared before putting back values to avoid double entries. Not sure if I am missing something here, but why not just remove the {{targetCodecName}} key from _coderNameMap_ and _coderNameCompactMap_, as being done for _coderMap_. Instead of clearing the map and putting back the values. - {{AddECPolicyResponse}} should be renamed to {{AddErasureCodingPolicyResponse}} after HDFS-12447. was (Author: hanishakoneru): Thanks for the patch, [~Sammi]. A few comments: - The intent is that enable policy should definitely fail for prohibited policies, right? If so, then the code block below would not assert that an exception is thrown while enabling a prohibited policy. This only checks the exception message, if at all an exception is thrown. 
{code} // PROHIBITED policy cannot be enabled or disabled try { fs.enableErasureCodingPolicy(newPolicy.getName()); } catch (IOException e) { assertExceptionContains("because its codec is not supported", e); } {code} There should be another assert right after the enable call. {code} try { fs.enableErasureCodingPolicy(newPolicy.getName()); assertFalse("Enabling prohibited erasure coding should fail", true); } catch (IOException e) { {code} - In _CodecRegistry#removeCodec()_, {{coderNameCompactMap}} should also be cleared before putting back values to avoid double entries. Not sure if I am missing something here, but why not just remove the {{targetCodecName}} key from _coderNameMap_ and _coderNameCompactMap_, as being done for _coderMap_. Instead of clearing the map and putting back the values. > Improve erasure coding codec framework adding more unit tests > -- > > Key: HDFS-12399 > URL: https://issues.apache.org/jira/browse/HDFS-12399 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12399.000.patch > > > Improve erasure coding codec through add more unit tests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
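The _CodecRegistry#removeCodec()_ simplification suggested in the review above can be sketched as follows; the class is an illustrative stand-in, not the real Hadoop {{CodecRegistry}}, and the map value types are simplified:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the review suggestion: instead of clearing coderNameMap and
// coderNameCompactMap and re-populating them (which risks double entries),
// simply remove the target codec's key from each map, as removeCodec already
// does for coderMap. Field names mirror the review discussion; the class
// itself and the String-based values are illustrative assumptions.
public class CodecRegistrySketch {
    final Map<String, String[]> coderMap = new HashMap<>();
    final Map<String, String[]> coderNameMap = new HashMap<>();
    final Map<String, String> coderNameCompactMap = new HashMap<>();

    void removeCodec(String targetCodecName) {
        // One consistent removal per map; no clear-and-rebuild needed.
        coderMap.remove(targetCodecName);
        coderNameMap.remove(targetCodecName);
        coderNameCompactMap.remove(targetCodecName);
    }
}
```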
[jira] [Updated] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters
[ https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12469: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [~shashikant] Thanks for the review comments. [~elek] Thank you for the contribution. I have committed this to the feature branch. While I was testing I ran into some errors with `scale datanode=3`; can you please check whether it works as expected on your end? If not, please feel free to open another JIRA. > Ozone: Create docker-compose definition to easily test real clusters > > > Key: HDFS-12469 > URL: https://issues.apache.org/jira/browse/HDFS-12469 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Elek, Marton >Assignee: Elek, Marton > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12469-HDFS-7240.001.patch, > HDFS-12469-HDFS-7240.002.patch, HDFS-12469-HDFS-7240.WIP1.patch, > HDFS-12469-HDFS-7240.WIP2.patch > > > The goal here is to create a docker-compose definition for ozone > pseudo-cluster with docker (one component per container). > Ideally, after a full build the ozone cluster could be started easily with > a simple docker-compose up command. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12399) Improve erasure coding codec framework adding more unit tests
[ https://issues.apache.org/jira/browse/HDFS-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183098#comment-16183098 ] Hanisha Koneru commented on HDFS-12399: --- Thanks for the patch, [~Sammi]. A few comments: - The intent is that enable policy should definitely fail for prohibited policies, right? If so, then the code block below would not assert that an exception is thrown while enabling a prohibited policy. This only checks the exception message, if at all an exception is thrown. {code} // PROHIBITED policy cannot be enabled or disabled try { fs.enableErasureCodingPolicy(newPolicy.getName()); } catch (IOException e) { assertExceptionContains("because its codec is not supported", e); } {code} There should be another assert right after the enable call. {code} try { fs.enableErasureCodingPolicy(newPolicy.getName()); assertFalse("Enabling prohibited erasure coding should fail", true); } catch (IOException e) { {code} - In _CodecRegistry#removeCodec()_, {{coderNameCompactMap}} should also be cleared before putting back values to avoid double entries. Not sure if I am missing something here, but why not just remove the {{targetCodecName}} key from _coderNameMap_ and _coderNameCompactMap_, as being done for _coderMap_. Instead of clearing the map and putting back the values. > Improve erasure coding codec framework adding more unit tests > -- > > Key: HDFS-12399 > URL: https://issues.apache.org/jira/browse/HDFS-12399 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > Attachments: HDFS-12399.000.patch > > > Improve erasure coding codec through add more unit tests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183080#comment-16183080 ] Chen Liang commented on HDFS-12454: --- [~anu] bq. re-confirm that we cannot have a DNS:PORT By DNS, did you mean hostname? I have only tested the config in local one machine environment, something like localhost:port did work for me. I haven't tested this config for multi-machine setup, I don't see any reason why hostname won't work for multi-node though. Earlier there was a comment bq. I think we do need ozone.ksm.address, let user explicitly set this property helps them to understand the primitive services' location. And this should be an address, not a hostname. [~cheersyang] is there a particular reason why you prefer IP address rather than hostname? does hostname work here at all for multi-node? > Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work > -- > > Key: HDFS-12454 > URL: https://issues.apache.org/jira/browse/HDFS-12454 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12454-HDFS-7240.001.patch, > HDFS-12454-HDFS-7240.002.patch, HDFS-12454-HDFS-7240.003.patch, > HDFS-12454-HDFS-7240.004.patch, HDFS-12454-HDFS-7240.005.patch > > > In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there > are a few issues with it. > 1. > {code} > > ozone.scm.block.client.address > scm.hadoop.apache.org > > > ozone.ksm.address > ksm.hadoop.apache.org > > {code} > The value should be an address instead. > 2. > {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires > {{ozone.scm.client.address}} to be set, which is missing from this sample > file. Missing this config will seem to cause failure on starting datanode. > 3. 
> {code} > > ozone.scm.names > scm.hadoop.apache.org > > {code} > This value did not make much sense to me; I found the comment in > {{ScmConfigKeys}} that says > {code} > // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT. > // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7: > {code} > So maybe we should write something like scm1 as the value here. > 4. I'm not entirely sure about this, but > [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says > {code} > > ozone.handler.type > local > > {code} > is also part of the minimum setting, do we need to add this [~anu]? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12511) Ozone: Add tags to config
[ https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12511: -- Attachment: HDFS-12511-HDFS-7240.03.patch Added 3 new enum values {{STORAGE, PIPELINE, STANDALONE}} to {{OzonePropertyTag}} > Ozone: Add tags to config > - > > Key: HDFS-12511 > URL: https://issues.apache.org/jira/browse/HDFS-12511 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12511-HDFS-7240.01.patch, > HDFS-12511-HDFS-7240.02.patch, HDFS-12511-HDFS-7240.03.patch > > > Add tags to ozone config: > Example: > {code} > > ozone.ksm.handler.count.key > 200 > OZONE,PERFORMANCE,KSM > > The number of RPC handler threads for each KSM service endpoint. > > > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12484) Undefined -expunge behavior after 2.8
[ https://issues.apache.org/jira/browse/HDFS-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183059#comment-16183059 ] Xiaoyu Yao commented on HDFS-12484: --- Thanks for the heads up [~jojochuang], the proposed solution should work. Have you considered adding a new non-privilege listEncryptionZone API? This allows user to retrieve all the encryption zones that he/she is allowed to access? This way, we can provide a better user experience when expunging deleted files from encryption zone. > Undefined -expunge behavior after 2.8 > - > > Key: HDFS-12484 > URL: https://issues.apache.org/jira/browse/HDFS-12484 > Project: Hadoop HDFS > Issue Type: Bug > Components: fs >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-12484.001.patch, HDFS-12484.002.patch > > > (Rewrote the description to reflect the actual behavior) > Hadoop 2.8 added a feature to support trash inside encryption zones, which is > a great feature to have. > However, when it comes to -expunge, the behavior is not well defined. A > superuser invoking -expunge removes files under all encryption zone trash > directory belonging to the user. On the other hand, because > listEncryptionZones requires superuser permission, a non-privileged user > invoking -expunge can removes under home directory, but not under encryption > zones. > Moreover, the command prints a scary warning message that looks annoying. > {noformat} > 2017-09-21 01:22:44,744 [main] WARN hdfs.DFSClient > (DistributedFileSystem.java:getTrashRoots(2795)) - Cannot get all encrypted > trash roots > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Access denied for user user. 
Superuser privilege is required > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSuperuserPrivilege(FSPermissionChecker.java:130) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:4556) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listEncryptionZones(FSNamesystem.java:7048) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listEncryptionZones(NameNodeRpcServer.java:2053) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listEncryptionZones(ClientNamenodeProtocolServerSideTranslatorPB.java:1477) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1490) > at org.apache.hadoop.ipc.Client.call(Client.java:1436) > at org.apache.hadoop.ipc.Client.call(Client.java:1346) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy25.listEncryptionZones(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listEncryptionZones(ClientNamenodeProtocolTranslatorPB.java:1510) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) > at com.sun.proxy.$Proxy29.listEncryptionZones(Unknown Source) > at >
[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode
[ https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183051#comment-16183051 ] Chen Liang commented on HDFS-12467: --- Thanks for the reminder, [~anu]. Thanks [~nandakumar131] for the update. The v002 patch LGTM, just noticed one more thing, sorry... In the check {{if(inStartupChillMode.get() && totalNodes.get() >= chillModeNodeCount)}}, would it be better to use {{getMinimumChillModeNodes()}} instead of {{chillModeNodeCount}} (as the current code does)? Although for now they are effectively the same thing, if we later change {{getMinimumChillModeNodes()}} for some reason and expect it to change the chill mode check, we may get unexpected behaviour. > Ozone: SCM: NodeManager should log when it comes out of chill mode > -- > > Key: HDFS-12467 > URL: https://issues.apache.org/jira/browse/HDFS-12467 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12467-HDFS-7240.000.patch, > HDFS-12467-HDFS-7240.001.patch, HDFS-12467-HDFS-7240.002.patch > > > {{NodeManager}} should add a log message when it comes out of chill mode. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12485) expunge may not remove trash from encryption zone
[ https://issues.apache.org/jira/browse/HDFS-12485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183047#comment-16183047 ] Xiaoyu Yao commented on HDFS-12485: --- Good catch [~jojochuang], patch looks good to me. +1. > expunge may not remove trash from encryption zone > - > > Key: HDFS-12485 > URL: https://issues.apache.org/jira/browse/HDFS-12485 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-12485.001.patch > > > This is related to HDFS-12484, but turns out that even if I have super user > permission, -expunge may not remove trash either. > If I log into Linux as root, and then login as the superuser h...@example.com > {noformat} > [root@nightly511-1 ~]# hdfs dfs -rm /scale/b > 17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to > trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b > [root@nightly511-1 ~]# hdfs dfs -expunge > 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: > TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash > 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: > TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash > 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: > /user/hdfs/.Trash/170918143916 > 17/09/18 15:21:59 INFO fs.TrashPolicyDefault: > TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash > [root@nightly511-1 ~]# hdfs dfs -ls > hdfs://ns1/scale/.Trash/hdfs/Current/scale/b > -rw-r--r-- 3 hdfs systest 0 2017-09-18 15:21 > hdfs://ns1/scale/.Trash/hdfs/Current/scale/b > {noformat} > expunge does not remove trash under /scale, because it does not know I am > 'hdfs' user. 
> {code:title=DistributedFileSystem#getTrashRoots} > Path ezTrashRoot = new Path(it.next().getPath(), > FileSystem.TRASH_PREFIX); > if (!exists(ezTrashRoot)) { > continue; > } > if (allUsers) { > for (FileStatus candidate : listStatus(ezTrashRoot)) { > if (exists(candidate.getPath())) { > ret.add(candidate); > } > } > } else { > Path userTrash = new Path(ezTrashRoot, System.getProperty( > "user.name")); --> bug > try { > ret.add(getFileStatus(userTrash)); > } catch (FileNotFoundException ignored) { > } > } > {code} > It should use UGI for user name, rather than system login user name. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
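The fix implied by the snippet above is to key the per-user trash directory under an encryption zone by the HDFS caller's UGI short user name (e.g. via {{UserGroupInformation.getCurrentUser().getShortUserName()}}) rather than the local OS login from {{System.getProperty("user.name")}}, which diverges when running under sudo/root or with a Kerberos principal. A minimal sketch, where {{trashUserPath}} is a hypothetical helper, not the real {{DistributedFileSystem#getTrashRoots}} code:

```java
// Sketch: build the per-user encryption-zone trash path from the caller's
// UGI short name instead of the OS login. trashUserPath is an illustrative
// helper; the real code constructs a Path and calls getFileStatus on it.
public class TrashRootSketch {
    static final String TRASH_PREFIX = ".Trash"; // mirrors FileSystem.TRASH_PREFIX

    static String trashUserPath(String ezRoot, String ugiShortUserName) {
        // ugiShortUserName would come from
        // UserGroupInformation.getCurrentUser().getShortUserName(),
        // NOT from System.getProperty("user.name").
        return ezRoot + "/" + TRASH_PREFIX + "/" + ugiShortUserName;
    }
}
```

In the scenario from the description, the OS login is "root" but the authenticated HDFS user is "hdfs", so only the UGI-based path matches hdfs://ns1/scale/.Trash/hdfs.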
[jira] [Assigned] (HDFS-12453) TestDataNodeHotSwapVolumes fails in trunk Jenkins runs
[ https://issues.apache.org/jira/browse/HDFS-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu reassigned HDFS-12453: Assignee: Lei (Eddy) Xu > TestDataNodeHotSwapVolumes fails in trunk Jenkins runs > -- > > Key: HDFS-12453 > URL: https://issues.apache.org/jira/browse/HDFS-12453 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Arpit Agarwal >Assignee: Lei (Eddy) Xu >Priority: Critical > Labels: flaky-test > Attachments: TestLogs.txt > > > TestDataNodeHotSwapVolumes fails occasionally with the following error (see > comment). Ran it ~10 times locally and it passed every time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin
[ https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182991#comment-16182991 ] Andrew Wang commented on HDFS-12257: DFS and DFSClient are private APIs, so we don't need to worry about user-visible deprecation. However, we still want to support backwards compatibility (old client -> new server). This can be as simple as adding a new paginated RPC method, though more complex schemes are possible if you want to extend the existing RPC call. > Expose getSnapshottableDirListing as a public API in HdfsAdmin > -- > > Key: HDFS-12257 > URL: https://issues.apache.org/jira/browse/HDFS-12257 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Affects Versions: 2.6.5 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Attachments: HDFS-12257.001.patch, HDFS-12257.002.patch > > > Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no > programmatic API. Other snapshot APIs are exposed in HdfsAdmin, I think we > should expose listing there as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
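The "add a new paginated RPC method" idea from the comment above can be sketched as follows. The method names and {{startAfter}} cursor are hypothetical, not the actual HdfsAdmin/ClientProtocol API: the old unpaginated call is kept so old clients continue to work against a new server, while new clients page through the same listing.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of keeping an old unpaginated listing RPC alongside a new
// paginated one. All names are illustrative assumptions.
public class PaginatedListingSketch {
    private final List<String> dirs;

    PaginatedListingSketch(List<String> dirs) { this.dirs = dirs; }

    // Existing RPC shape: everything in one response (old clients keep working).
    List<String> listSnapshottableDirs() {
        return new ArrayList<>(dirs);
    }

    // New paginated RPC: up to pageSize entries after the cursor;
    // startAfter == null means "from the beginning".
    List<String> listSnapshottableDirs(String startAfter, int pageSize) {
        List<String> page = new ArrayList<>();
        boolean emit = (startAfter == null);
        for (String d : dirs) {
            if (emit) {
                if (page.size() == pageSize) break;
                page.add(d);
            } else if (d.equals(startAfter)) {
                emit = true; // start emitting after the cursor entry
            }
        }
        return page;
    }
}
```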
[jira] [Commented] (HDFS-7878) API - expose an unique file identifier
[ https://issues.apache.org/jira/browse/HDFS-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182986#comment-16182986 ] Andrew Wang commented on HDFS-7878: --- Beta is our compatibility freeze for 3.0.0. I don't know the history of 2.1.0-beta to GA, but breaking compatibility between beta and GA violates the user expectations of a beta. > API - expose an unique file identifier > -- > > Key: HDFS-7878 > URL: https://issues.apache.org/jira/browse/HDFS-7878 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Labels: BB2015-05-TBR > Attachments: HDFS-7878.01.patch, HDFS-7878.02.patch, > HDFS-7878.03.patch, HDFS-7878.04.patch, HDFS-7878.05.patch, > HDFS-7878.06.patch, HDFS-7878.07.patch, HDFS-7878.08.patch, > HDFS-7878.09.patch, HDFS-7878.10.patch, HDFS-7878.11.patch, > HDFS-7878.12.patch, HDFS-7878.patch > > > See HDFS-487. > Even though that is resolved as duplicate, the ID is actually not exposed by > the JIRA it supposedly duplicates. > INode ID for the file should be easy to expose; alternatively ID could be > derived from block IDs, to account for appends... > This is useful e.g. for cache key by file, to make sure cache stays correct > when file is overwritten. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters
[ https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182979#comment-16182979 ] Hadoop QA commented on HDFS-12469: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 17s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12469 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889317/HDFS-12469-HDFS-7240.002.patch | | Optional Tests | asflicense shellcheck shelldocs mvnsite | | uname | Linux 1fe841bed107 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / fec0e74 | | shellcheck | v0.4.6 | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/21387/artifact/patchprocess/patch-asflicense-problems.txt | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21387/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Create docker-compose definition to easily test real clusters > > > Key: HDFS-12469 > URL: https://issues.apache.org/jira/browse/HDFS-12469 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Elek, Marton >Assignee: Elek, Marton > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12469-HDFS-7240.001.patch, > HDFS-12469-HDFS-7240.002.patch, HDFS-12469-HDFS-7240.WIP1.patch, > HDFS-12469-HDFS-7240.WIP2.patch > > > The goal here is to create a docker-compose definition for ozone > pseudo-cluster with docker (one component per container). 
> Ideally after a full build the ozone cluster could be started easily > after a simple docker-compose up command. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12488) Ozone: OzoneRestClient timeout is not configurable
[ https://issues.apache.org/jira/browse/HDFS-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182899#comment-16182899 ] Hadoop QA commented on HDFS-12488: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 49s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12488 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889306/HDFS-12488-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux f203e8aef856 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / ea47519 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21384/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21384/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: OzoneRestClient timeout is not configurable > -- > >
[jira] [Updated] (HDFS-12501) Ozone: Cleanup javac issues
[ https://issues.apache.org/jira/browse/HDFS-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12501: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) > Ozone: Cleanup javac issues > --- > > Key: HDFS-12501 > URL: https://issues.apache.org/jira/browse/HDFS-12501 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Fix For: HDFS-7240 > > Attachments: HDFS-12501-HDFS-7240.001.patch > > > There is a bunch of javac issues under Ozone tree. We have to clean them up > before we call for a merge of this tree. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12501) Ozone: Cleanup javac issues
[ https://issues.apache.org/jira/browse/HDFS-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182894#comment-16182894 ] Anu Engineer commented on HDFS-12501: - +1, thank you for taking care of this. I have committed this to the feature branch. > Ozone: Cleanup javac issues > --- > > Key: HDFS-12501 > URL: https://issues.apache.org/jira/browse/HDFS-12501 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Fix For: HDFS-7240 > > Attachments: HDFS-12501-HDFS-7240.001.patch > > > There is a bunch of javac issues under Ozone tree. We have to clean them up > before we call for a merge of this tree. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12511) Ozone: Add tags to config
[ https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182885#comment-16182885 ] Ajay Kumar edited comment on HDFS-12511 at 9/27/17 4:57 PM: Hi [~anu], Test cases are unrelated to this patch. TestCommonConfigurationFields and TestDatanodeStateMachine fail irrespective of the patch. The other two pass locally. was (Author: ajayydv): Hi [~anu], Test casses are unrelated to this patch. TestCommonConfigurationFields and TestDatanodeStateMachine fails irrespective of patch. Other two passes locally. > Ozone: Add tags to config > - > > Key: HDFS-12511 > URL: https://issues.apache.org/jira/browse/HDFS-12511 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12511-HDFS-7240.01.patch, > HDFS-12511-HDFS-7240.02.patch > > > Add tags to ozone config: > Example: > {code}
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12511) Ozone: Add tags to config
[ https://issues.apache.org/jira/browse/HDFS-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182885#comment-16182885 ] Ajay Kumar commented on HDFS-12511: --- Hi [~anu], Test cases are unrelated to this patch. TestCommonConfigurationFields and TestDatanodeStateMachine fail irrespective of the patch. The other two pass locally. > Ozone: Add tags to config > - > > Key: HDFS-12511 > URL: https://issues.apache.org/jira/browse/HDFS-12511 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12511-HDFS-7240.01.patch, > HDFS-12511-HDFS-7240.02.patch > > > Add tags to ozone config: > Example: > {code}
> <property>
>   <name>ozone.ksm.handler.count.key</name>
>   <value>200</value>
>   <tag>OZONE,PERFORMANCE,KSM</tag>
>   <description>
>     The number of RPC handler threads for each KSM service endpoint.
>   </description>
> </property>
> {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
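HDFS-12511 proposes attaching tags to configuration properties so they can be grouped and filtered. A minimal, self-contained sketch of that idea is below; the Property class and filterByTag helper are hypothetical stand-ins for illustration, not the actual Hadoop Configuration API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of tag-based configuration lookup in the spirit of HDFS-12511.
// Property and filterByTag are hypothetical; Hadoop's real API may differ.
public class TaggedConfigDemo {
    static final class Property {
        final String name;
        final String value;
        final Set<String> tags;

        Property(String name, String value, String tagCsv) {
            this.name = name;
            this.value = value;
            this.tags = new HashSet<>(Arrays.asList(tagCsv.split(",")));
        }
    }

    // Return every property carrying the given tag.
    static List<Property> filterByTag(List<Property> props, String tag) {
        List<Property> out = new ArrayList<>();
        for (Property p : props) {
            if (p.tags.contains(tag)) {
                out.add(p);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Property> props = Arrays.asList(
            new Property("ozone.ksm.handler.count.key", "200", "OZONE,PERFORMANCE,KSM"),
            new Property("dfs.replication", "3", "HDFS"));
        // Asking for PERFORMANCE-tagged properties returns only the KSM one.
        for (Property p : filterByTag(props, "PERFORMANCE")) {
            System.out.println(p.name + "=" + p.value);
        }
    }
}
```

With a tag set like OZONE,PERFORMANCE,KSM on each property, an operator can pull out every performance-related setting in one call instead of grepping key names.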
[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode
[ https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182878#comment-16182878 ] Anu Engineer commented on HDFS-12467: - [~vagarychen] Can you please take a look at the v2 patch and commit if it looks good? Thanks > Ozone: SCM: NodeManager should log when it comes out of chill mode > -- > > Key: HDFS-12467 > URL: https://issues.apache.org/jira/browse/HDFS-12467 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12467-HDFS-7240.000.patch, > HDFS-12467-HDFS-7240.001.patch, HDFS-12467-HDFS-7240.002.patch > > > {{NodeManager}} should add a log message when it comes out of chill mode. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12321) Ozone : debug cli: add support to load user-provided SQL query
[ https://issues.apache.org/jira/browse/HDFS-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12321: Status: In Progress (was: Patch Available) Setting this to in progress since the last patch needs some more work from me. > Ozone : debug cli: add support to load user-provided SQL query > -- > > Key: HDFS-12321 > URL: https://issues.apache.org/jira/browse/HDFS-12321 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: OzonePostMerge > Fix For: ozone > > Attachments: HDFS-12321-HDFS-7240.001.patch, > HDFS-12321-HDFS-7240.002.patch, HDFS-12321-HDFS-7240.003.patch, > HDFS-12321-HDFS-7240.004.patch, HDFS-12321-HDFS-7240.005.patch, > HDFS-12321-HDFS-7240.006.patch, HDFS-12321-HDFS-7240.007.patch, > HDFS-12321-HDFS-7240.008.patch, HDFS-12321-HDFS-7240.009.patch, > HDFS-12321-HDFS-7240.010.patch > > > This JIRA extends SQL CLI to support loading a user-provided file that > includes any sql query the user wants to run on the SQLite db obtained by > converting Ozone metadata db. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11939) Ozone : add read/write random access to Chunks of a key
[ https://issues.apache.org/jira/browse/HDFS-11939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11939: Status: In Progress (was: Patch Available) Setting this to in progress, so that we can rebase and bring this back when we have committed the size-less key writes. > Ozone : add read/write random access to Chunks of a key > --- > > Key: HDFS-11939 > URL: https://issues.apache.org/jira/browse/HDFS-11939 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang >Assignee: Chen Liang > Labels: OzonePostMerge > Attachments: HDFS-11939-HDFS-7240.001.patch, > HDFS-11939-HDFS-7240.002.patch, HDFS-11939-HDFS-7240.003.patch, > HDFS-11939-HDFS-7240.004.patch > > > In Ozone, the value of a key is a sequence of container chunks. Currently, > the only way to read/write the chunks is by using ChunkInputStream and > ChunkOutputStream. However, by the nature of streams, these classes are > currently implemented to only allow sequential read/write. > Ideally we would like to support random access of the chunks. For example, we > want to be able to seek to a specific offset and read/write some data. This > will be critical for the key range read/write feature, and potentially important > for supporting parallel read/write. > This JIRA tracks adding support by implementing a FileChannel class on top of > Chunks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1
[ https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12000: Status: In Progress (was: Patch Available) Setting this to In-progress since we will merge this patch back in after the current patches that support size-less key writes. > Ozone: Container : Add key versioning support-1 > --- > > Key: HDFS-12000 > URL: https://issues.apache.org/jira/browse/HDFS-12000 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Chen Liang > Labels: OzonePostMerge > Attachments: HDFS-12000-HDFS-7240.001.patch, > HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, > HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, > OzoneVersion.001.pdf > > > The rest interface of ozone supports versioning of keys. This support comes > from the containers and how chunks are managed to support this feature. This > JIRA tracks that feature. Will post a detailed design doc so that we can talk > about this feature. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12006) Ozone: add TestDistributedOzoneVolumesRatis, TestOzoneRestWithMiniClusterRatis and TestOzoneWebAccessRatis
[ https://issues.apache.org/jira/browse/HDFS-12006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12006: Status: Open (was: Patch Available) With the Ratis pipeline patch in place, we can test Ratis via Ozone itself. Cancelling this patch for the time being. > Ozone: add TestDistributedOzoneVolumesRatis, > TestOzoneRestWithMiniClusterRatis and TestOzoneWebAccessRatis > -- > > Key: HDFS-12006 > URL: https://issues.apache.org/jira/browse/HDFS-12006 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, test >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Minor > Attachments: HDFS-12006-HDFS-7240.20170623.patch, > HDFS-12006-HDFS-7240.20170717.patch > > > Add Ratis tests similar to TestDistributedOzoneVolumes, > TestOzoneRestWithMiniCluster and TestOzoneWebAccess. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12477) Ozone: Some minor text improvement in SCM web UI
[ https://issues.apache.org/jira/browse/HDFS-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182862#comment-16182862 ] Elek, Marton commented on HDFS-12477: - Thanks [~cheersyang] for the feedback. I created a new Jira (HDFS-12557) to adjust the formatting of the rpc times. > Ozone: Some minor text improvement in SCM web UI > > > Key: HDFS-12477 > URL: https://issues.apache.org/jira/browse/HDFS-12477 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: scm, ui >Reporter: Weiwei Yang >Assignee: Elek, Marton >Priority: Trivial > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: haskey.png, HDFS-12477-HDFS-7240.000.patch, > healthy_nodes_place.png, Revise text.png > > > While trying out the SCM UI, there seem to be some small text problems: > bq. Node Manager: Minimum chill mode nodes) > It has an extra ). > bq. $$hashKey object:9 > I am not really sure what this means? Would this help? > bq. Node counts > Can we place the HEALTHY ones at the top of the table? > bq. Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 > nodes have reported in. > Can we refine this text a bit? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12558) Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM web ui
Elek, Marton created HDFS-12558: --- Summary: Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM web ui Key: HDFS-12558 URL: https://issues.apache.org/jira/browse/HDFS-12558 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: HDFS-7240 Reporter: Elek, Marton Assignee: Elek, Marton In the Ozone (SCM/KSM) web ui we have additional visualization if rpc.metrics.percentiles.intervals are enabled. But according to the feedback it's a little bit confusing what it is exactly. I would like to improve it and clarify how it works. 1. I will add a footnote noting that these are not rolling windows but just a display of the last fixed window. 2. I would like to rearrange the layout. As the different windows are independent, I would show them in different lines and group by the intervals and not by RpcQueueTime/RpcProcessingTime. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12557) Ozone: Improve the formatting of the RPC stats on web UI
Elek, Marton created HDFS-12557: --- Summary: Ozone: Improve the formatting of the RPC stats on web UI Key: HDFS-12557 URL: https://issues.apache.org/jira/browse/HDFS-12557 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: HDFS-7240 Reporter: Elek, Marton Assignee: Elek, Marton During HDFS-12477 [~cheersyang] suggested improving the formatting of the rpc metrics in the KSM/SCM web ui: https://issues.apache.org/jira/browse/HDFS-12477?focusedCommentId=16177816=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177816 {quote} One more thing, it seems we have too much accuracy here
Metric name        Number of ops  Average time
RpcQueueTime       300            0.167019333
RpcProcessingTime  300            6.5403023
maybe 0.167 and 6.540 is enough? And what is the unit of the average time, can we add the unit in the header column? {quote} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
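The rounding suggested in the quote above can be sketched with standard Java formatting. This is illustrative only: the actual KSM/SCM web UI renders these values in an HTML template, and treating the average time as milliseconds is an assumption here.

```java
import java.util.Locale;

// Sketch of the formatting change HDFS-12557 asks for: trim the average
// time to three decimal places and show the unit explicitly.
public class RpcStatsFormat {
    // %.3f rounds to three decimal places; Locale.ROOT keeps '.' as the
    // decimal separator regardless of the host locale.
    static String formatAvgTime(double millis) {
        return String.format(Locale.ROOT, "%.3f ms", millis);
    }

    public static void main(String[] args) {
        System.out.println("RpcQueueTime      " + formatAvgTime(0.167019333));
        System.out.println("RpcProcessingTime " + formatAvgTime(6.5403023));
    }
}
```

The two sample values from the issue come out as 0.167 ms and 6.540 ms, matching the precision the reviewer proposed.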
[jira] [Updated] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters
[ https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12469: Attachment: HDFS-12469-HDFS-7240.002.patch Thanks for the feedback, [~anu] In this patch: 1. I simplified the documentation according to your suggestion. (I just added the docker-compose scale command at the end, as I think this is one of the selling points. I really like this command.) Later I will create a separate OzoneDocker page with more details (maybe after collecting feedback from the first adopters). 2. I fixed the name of ozone.metadata.dirs as you suggested. 3. I added the rpc.metrics.quantile.* settings. It's better to see the metrics by default. 4. Modified the datanode.id and metadata dir to use /data (which is a volume, so more persistent). > Ozone: Create docker-compose definition to easily test real clusters > > > Key: HDFS-12469 > URL: https://issues.apache.org/jira/browse/HDFS-12469 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Elek, Marton >Assignee: Elek, Marton > Labels: OzonePostMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12469-HDFS-7240.001.patch, > HDFS-12469-HDFS-7240.002.patch, HDFS-12469-HDFS-7240.WIP1.patch, > HDFS-12469-HDFS-7240.WIP2.patch > > > The goal here is to create a docker-compose definition for ozone > pseudo-cluster with docker (one component per container). > Ideally after a full build the ozone cluster could be started easily > after a simple docker-compose up command. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12291) [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy of all the files under the given dir
[ https://issues.apache.org/jira/browse/HDFS-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182842#comment-16182842 ] Hadoop QA commented on HDFS-12291: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 7m 19s{color} | {color:red} Docker failed to build yetus/hadoop:14b5c93. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-12291 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889271/HDFS-12291-HDFS-10285-08.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21386/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [SPS]: Provide a mechanism to recursively iterate and satisfy storage policy > of all the files under the given dir > - > > Key: HDFS-12291 > URL: https://issues.apache.org/jira/browse/HDFS-12291 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12291-HDFS-10285-01.patch, > HDFS-12291-HDFS-10285-02.patch, HDFS-12291-HDFS-10285-03.patch, > HDFS-12291-HDFS-10285-04.patch, HDFS-12291-HDFS-10285-05.patch, > HDFS-12291-HDFS-10285-06.patch, HDFS-12291-HDFS-10285-07.patch, > HDFS-12291-HDFS-10285-08.patch > > > For the given source path directory, presently SPS consider only the files > immediately under the directory(only one level of scanning) for satisfying > the policy. It WON’T do recursive directory scanning and then schedules SPS > tasks to satisfy the storage policy of all the files till the leaf node. 
> The idea of this jira is to discuss & implement an efficient recursive > directory iteration mechanism and satisfy the storage policy for all the files > under the given directory. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
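The difference between one level of scanning and the recursive traversal HDFS-12291 asks for can be sketched on a local filesystem. This is only an analogy: the real SPS walks the NameNode's in-memory INode tree rather than java.nio paths, and the helper below is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: collect every file under a directory, not just the immediate
// children, so a policy-satisfier could schedule work for each of them.
public class RecursiveFileLister {
    static List<Path> listAllFiles(Path root) throws IOException {
        // Files.walk visits the whole subtree depth-first; filtering to
        // regular files drops the directories themselves.
        try (Stream<Path> s = Files.walk(root)) {
            return s.filter(Files::isRegularFile)
                    .sorted()
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("sps-demo");
        Files.createDirectories(root.resolve("a/b"));
        Files.createFile(root.resolve("top.txt"));
        Files.createFile(root.resolve("a/b/deep.txt"));
        // One level of scanning would see only top.txt; the walk finds both.
        for (Path p : listAllFiles(root)) {
            System.out.println(root.relativize(p));
        }
    }
}
```

A single-level scan corresponds to Files.list and would miss a/b/deep.txt; the recursive walk is what lets the policy apply down to the leaf files.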
[jira] [Commented] (HDFS-8088) Reduce the number of HTrace spans generated by HDFS reads
[ https://issues.apache.org/jira/browse/HDFS-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182800#comment-16182800 ] Hadoop QA commented on HDFS-8088: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-8088 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-8088 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12724103/HDFS-8088.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21383/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Reduce the number of HTrace spans generated by HDFS reads > - > > Key: HDFS-8088 > URL: https://issues.apache.org/jira/browse/HDFS-8088 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Colin P. McCabe >Assignee: Colin P. McCabe > Attachments: HDFS-8088.001.patch > > > HDFS generates too many trace spans on read right now. Every call to read() > we make generates its own span, which is not very practical for things like > HBase or Accumulo that do many such reads as part of a single operation. > Instead of tracing every call to read(), we should only trace the cases where > we refill the buffer inside a BlockReader. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12420) Add an option to disallow 'namenode format -force'
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182732#comment-16182732 ] Ajay Kumar edited comment on HDFS-12420 at 9/27/17 3:22 PM: [~arpitagarwal] thanks for review. Test failures are unrelated. was (Author: ajayydv): Test failures are unrelated. > Add an option to disallow 'namenode format -force' > -- > > Key: HDFS-12420 > URL: https://issues.apache.org/jira/browse/HDFS-12420 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, > HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, > HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, > HDFS-12420.09.patch, HDFS-12420.10.patch, HDFS-12420.11.patch > > > Support for disabling NameNode format to avoid accidental formatting of > Namenode in production cluster. If someone really wants to delete the > complete fsImage, they can first delete the metadata dir and then run {code} > hdfs namenode -format{code} manually. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12420) Add an option to disallow 'namenode format -force'
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182732#comment-16182732 ] Ajay Kumar commented on HDFS-12420: --- Test failures are unrelated. > Add an option to disallow 'namenode format -force' > -- > > Key: HDFS-12420 > URL: https://issues.apache.org/jira/browse/HDFS-12420 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, > HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, > HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, > HDFS-12420.09.patch, HDFS-12420.10.patch, HDFS-12420.11.patch > > > Support for disabling NameNode format to avoid accidental formatting of > Namenode in production cluster. If someone really wants to delete the > complete fsImage, they can first delete the metadata dir and then run {code} > hdfs namenode -format{code} manually. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12488) Ozone: OzoneRestClient timeout is not configurable
[ https://issues.apache.org/jira/browse/HDFS-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12488: --- Status: Patch Available (was: Open) > Ozone: OzoneRestClient timeout is not configurable > -- > > Key: HDFS-12488 > URL: https://issues.apache.org/jira/browse/HDFS-12488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: OzonePostMerge > Attachments: HDFS-12488-HDFS-7240.001.patch > > > When I test ozone on a 15 nodes cluster with millions of keys, responses of > rest client becomes to be slower. Following call times out after default 5s, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-84022 -user wwei > Command Failed : {"httpCode":0,"shortMessage":"Read timed > out","resource":null,"message":"Read timed > out","requestID":null,"hostName":null} > {code} > Then I increase the timeout by explicitly setting following property in > {{ozone-site.xml}} > {code} > > ozone.client.socket.timeout.ms > 1 > > {code} > but this doesn't work and rest clients are still created with default *5s* > timeout. This needs to be fixed. Just like {{DFSClient}}, we should make > {{OzoneRestClient}} to be configuration awareness, so that clients can adjust > client configuration on demand. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12488) Ozone: OzoneRestClient timeout is not configurable
[ https://issues.apache.org/jira/browse/HDFS-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12488: --- Summary: Ozone: OzoneRestClient timeout is not configurable (was: Ozone: OzoneRestClient needs to be configuration awareness) > Ozone: OzoneRestClient timeout is not configurable > -- > > Key: HDFS-12488 > URL: https://issues.apache.org/jira/browse/HDFS-12488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: OzonePostMerge > Attachments: HDFS-12488-HDFS-7240.001.patch > > > When I test ozone on a 15 nodes cluster with millions of keys, responses of > rest client becomes to be slower. Following call times out after default 5s, > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-84022 -user wwei > Command Failed : {"httpCode":0,"shortMessage":"Read timed > out","resource":null,"message":"Read timed > out","requestID":null,"hostName":null} > {code} > Then I increase the timeout by explicitly setting following property in > {{ozone-site.xml}} > {code} > > ozone.client.socket.timeout.ms > 1 > > {code} > but this doesn't work and rest clients are still created with default *5s* > timeout. This needs to be fixed. Just like {{DFSClient}}, we should make > {{OzoneRestClient}} to be configuration awareness, so that clients can adjust > client configuration on demand. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12488) Ozone: OzoneRestClient needs to be configuration awareness
[ https://issues.apache.org/jira/browse/HDFS-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182676#comment-16182676 ] Weiwei Yang commented on HDFS-12488: While working on a fix, I found that it is not necessary to make {{OzoneRestClient}} itself configuration-aware; instead we can simply let {{OzoneClientUtils#newHttpClient}} load the configuration from conf files. Uploaded a patch with a one-line code fix. I have tested it on a cluster; the timeout can now be configured via the {{ozone-site.xml}} file. Please kindly review, thanks. > Ozone: OzoneRestClient needs to be configuration awareness > -- > > Key: HDFS-12488 > URL: https://issues.apache.org/jira/browse/HDFS-12488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: OzonePostMerge > > > When I test ozone on a 15-node cluster with millions of keys, responses from the rest client become slower. The following call times out after the default 5s: > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-84022 -user wwei > Command Failed : {"httpCode":0,"shortMessage":"Read timed > out","resource":null,"message":"Read timed > out","requestID":null,"hostName":null} > {code} > Then I increased the timeout by explicitly setting the following property in > {{ozone-site.xml}} > {code} > <property> > <name>ozone.client.socket.timeout.ms</name> > <value>1</value> > </property> > {code} > but this doesn't work and rest clients are still created with the default *5s* > timeout. This needs to be fixed. Just like {{DFSClient}}, we should make > {{OzoneRestClient}} configuration-aware, so that clients can adjust the > client configuration on demand.
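The one-line fix described above boils down to a common pattern: the HTTP client factory should resolve its socket timeout from the loaded configuration instead of a hard-coded constant, so that an {{ozone-site.xml}} override actually takes effect. A minimal, self-contained sketch of that pattern (illustrative only, not the actual Ozone source; the class name and the plain-Map "configuration" here are hypothetical stand-ins for Hadoop's {{Configuration}}):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a configuration-aware client factory.
// Before the fix, the timeout was effectively hard-coded to the default;
// after the fix, the configured value (if present) wins.
public class ConfigAwareHttpClientSketch {

    // Property name taken from the issue description.
    static final String TIMEOUT_KEY = "ozone.client.socket.timeout.ms";
    // The default 5s timeout the issue says was always being used.
    static final int DEFAULT_TIMEOUT_MS = 5000;

    private final Map<String, String> conf;

    public ConfigAwareHttpClientSketch(Map<String, String> conf) {
        this.conf = conf;
    }

    // Resolve the socket timeout: configured value if set, default otherwise.
    public int resolveSocketTimeoutMs() {
        String v = conf.get(TIMEOUT_KEY);
        return v == null ? DEFAULT_TIMEOUT_MS : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // No override configured: the default applies.
        System.out.println(new ConfigAwareHttpClientSketch(conf).resolveSocketTimeoutMs()); // 5000
        // With an override, the configured timeout is used.
        conf.put(TIMEOUT_KEY, "10000");
        System.out.println(new ConfigAwareHttpClientSketch(conf).resolveSocketTimeoutMs()); // 10000
    }
}
```

In the real codebase the same effect would come from passing a loaded Hadoop {{Configuration}} into the factory method rather than constructing the client from compile-time defaults.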
[jira] [Updated] (HDFS-12488) Ozone: OzoneRestClient needs to be configuration awareness
[ https://issues.apache.org/jira/browse/HDFS-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12488: --- Attachment: HDFS-12488-HDFS-7240.001.patch > Ozone: OzoneRestClient needs to be configuration awareness > -- > > Key: HDFS-12488 > URL: https://issues.apache.org/jira/browse/HDFS-12488 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: OzonePostMerge > Attachments: HDFS-12488-HDFS-7240.001.patch > > > When I test ozone on a 15-node cluster with millions of keys, responses from the rest client become slower. The following call times out after the default 5s: > {code} > bin/hdfs oz -listBucket http://15oz1.fyre.ibm.com:9864/vol-0-84022 -user wwei > Command Failed : {"httpCode":0,"shortMessage":"Read timed > out","resource":null,"message":"Read timed > out","requestID":null,"hostName":null} > {code} > Then I increased the timeout by explicitly setting the following property in > {{ozone-site.xml}} > {code} > <property> > <name>ozone.client.socket.timeout.ms</name> > <value>1</value> > </property> > {code} > but this doesn't work and rest clients are still created with the default *5s* > timeout. This needs to be fixed. Just like {{DFSClient}}, we should make > {{OzoneRestClient}} configuration-aware, so that clients can adjust the > client configuration on demand.
[jira] [Commented] (HDFS-12540) Ozone: node status text reported by SCM is a bit confusing
[ https://issues.apache.org/jira/browse/HDFS-12540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16182681#comment-16182681 ] Weiwei Yang commented on HDFS-12540: Thanks [~anu], I will commit this once I get a clean jenkins result. Thanks for the review and +1 :). > Ozone: node status text reported by SCM is a bit confusing > -- > > Key: HDFS-12540 > URL: https://issues.apache.org/jira/browse/HDFS-12540 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Weiwei Yang >Priority: Trivial > Labels: ozoneMerge > Attachments: chillmode_status.png, HDFS-12540-HDFS-7240.001.patch, > outchillmode_status.png > > > At present the SCM UI displays node status like the following: > {noformat} > Node Manager: Chill mode status: Out of chill mode. 15 of out of total 1 > nodes have reported in. > {noformat} > This text is a bit confusing. The UI retrieves the status from > {{SCMNodeManager#getNodeStatus}}; the related call is {{#getChillModeStatus}}.