[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port
[ https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159752#comment-16159752 ] Hadoop QA commented on HDFS-10391: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 25 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-hdfs-project: The patch generated 21 new + 1247 unchanged - 40 fixed = 1268 total (was 1287) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | |
[jira] [Commented] (HDFS-12412) Remove ErasureCodingWorker.stripedReadPool
[ https://issues.apache.org/jira/browse/HDFS-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159682#comment-16159682 ] Kai Zheng commented on HDFS-12412: -- Thanks Eddy for the ping. The idea to remove the striped read pool and reuse the same reconstruction pool sounds good to me: given the latter and the most commonly used erasure codec, we can roughly estimate the number of striped read threads needed. We can also simplify the configuration and code. So, as you said, you probably have an idea of how to adjust the recommended or default value and validate the configured reconstruction pool size, assuming you know how many concurrent reconstruction tasks will be performed, and so on. Less configuration with reasonable defaults would make this brand-new feature easier to use. When needed, we can fine-tune and add more later. > Remove ErasureCodingWorker.stripedReadPool > -- > > Key: HDFS-12412 > URL: https://issues.apache.org/jira/browse/HDFS-12412 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > > In {{ErasureCodingWorker}}, it uses {{stripedReconstructionPool}} to schedule > the EC recovery tasks, while uses {{stripedReadPool}} for the reader threads > in each recovery task. We only need one of them to throttle the speed of > recovery process, because each EC recovery task has a fix number of source > readers (i.e., 3 for RS(3,2)). And because of the findings in HDFS-12044, the > speed of EC recovery can be throttled by {{strippedReconstructionPool}} with > {{xmitsInProgress}}. > Moreover, keeping {{stripedReadPool}} makes customer difficult to understand > and calculate the right balance between > {{dfs.datanode.ec.reconstruction.stripedread.threads}}, > {{dfs.datanode.ec.reconstruction.stripedblock.threads.size}} and > {{maxReplicationStreams}}. For example, a small {{stripread.threads}} > (comparing to which {{reconstruction.threads.size}} implies), will > unnecessarily limit the speed of recovery, which leads to larger MTTR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
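For illustration, here is a minimal sketch (not the actual {{ErasureCodingWorker}} code; {{ReconstructionTask}} and {{poolSize}} are hypothetical names) of why a single bounded reconstruction pool can throttle recovery on its own: if each reconstruction task opens its fixed set of source readers inline, the task pool's size already bounds the total number of concurrent striped reads, so a separate read pool adds no extra throttling.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReconstructionThrottleSketch {
  // Hypothetical stand-in for a striped reconstruction task.
  static class ReconstructionTask implements Runnable {
    private final int numSourceReaders; // fixed per codec, e.g. 3 for RS(3,2)

    ReconstructionTask(int numSourceReaders) {
      this.numSourceReaders = numSourceReaders;
    }

    @Override
    public void run() {
      // Reads from the fixed set of sources, decodes, and writes the target.
      // Because the reads happen inside the task, the number of concurrent
      // striped reads is at most poolSize * numSourceReaders.
    }
  }

  public static void main(String[] args) {
    int poolSize = 8; // analogous to the reconstruction pool size setting
    ExecutorService reconstructionPool = Executors.newFixedThreadPool(poolSize);
    for (int i = 0; i < 100; i++) {
      reconstructionPool.submit(new ReconstructionTask(3));
    }
    reconstructionPool.shutdown();
  }
}
{code}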
[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
[ https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159671#comment-16159671 ] Weiwei Yang commented on HDFS-12235: The UT failures were not related, [~anu], [~vagarychen], [~nandakumar131], please let me know if v11 patch looks good to you. Thanks a lot. > Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions > --- > > Key: HDFS-12235 > URL: https://issues.apache.org/jira/browse/HDFS-12235 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: ozoneMerge > Attachments: HDFS-12235-HDFS-7240.001.patch, > HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, > HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, > HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, > HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, > HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch > > > KSM and SCM interaction for delete key operation, both KSM and SCM stores key > state info in a backlog, KSM needs to scan this log and send block-deletion > command to SCM, once SCM is fully aware of the message, KSM removes the key > completely from namespace. See more from the design doc under HDFS-11922, > this is task break down 2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
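As a rough illustration of the interaction described above (purely a sketch; the interfaces and method names below are hypothetical and do not correspond to the actual Ozone/KSM classes), the flow is: scan the local deletion backlog, send block-deletion commands to SCM, and only purge a key from the namespace once SCM has acknowledged the deletion, so a crash cannot lose a pending delete.
{code:java}
import java.util.List;

// Hypothetical interfaces standing in for the KSM deletion backlog and the
// SCM block-deletion RPC; they are not real Ozone classes.
interface DeletionBacklog {
  List<String> pendingDeletedKeys(int limit);
  List<String> blocksOf(String key);
  void purgeKey(String key);
}

interface ScmBlockClient {
  boolean deleteBlocks(List<String> blockIds); // true once SCM acks
}

class KeyDeletionScanner implements Runnable {
  private final DeletionBacklog backlog;
  private final ScmBlockClient scm;

  KeyDeletionScanner(DeletionBacklog backlog, ScmBlockClient scm) {
    this.backlog = backlog;
    this.scm = scm;
  }

  @Override
  public void run() {
    // Scan a bounded batch of keys marked for deletion.
    for (String key : backlog.pendingDeletedKeys(100)) {
      // Ask SCM to delete the key's blocks; keep the key in the backlog
      // until the deletion is acknowledged.
      if (scm.deleteBlocks(backlog.blocksOf(key))) {
        backlog.purgeKey(key); // only now remove the key from the namespace
      }
    }
  }
}
{code}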
[jira] [Commented] (HDFS-12409) Add metrics of execution time of different stages in EC recovery task
[ https://issues.apache.org/jira/browse/HDFS-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159650#comment-16159650 ] Hadoop QA commented on HDFS-12409: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 68 unchanged - 0 fixed = 71 total (was 68) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}206m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}237m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestMissingBlocksAlert | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestFileAppendRestart | | |
[jira] [Commented] (HDFS-11676) Ozone: SCM CLI: Implement close container command
[ https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159604#comment-16159604 ] Hadoop QA commented on HDFS-11676: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 10 unchanged - 0 fixed = 12 total (was 10) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 11s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}128m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs | | | Switch statement found in org.apache.hadoop.ozone.scm.container.ContainerMapping.closeContainer(String) where default case is missing At ContainerMapping.java:where default case is missing At ContainerMapping.java:[lines 295-306] | | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency | | | hadoop.ozone.scm.TestAllocateContainer | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.cblock.TestCBlockReadWrite | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.namenode.TestReencryption | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.ozone.scm.node.TestNodeManager | | Timed out junit tests |
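The new FindBugs warning above is the generic "switch statement without a default case" pattern. A minimal, hypothetical sketch of the usual remedy (the enum and states below are made up; this is not the actual ContainerMapping code) is to add an explicit default branch so unexpected states fail loudly instead of falling through silently:
{code:java}
// Illustrative only; the enum and states are invented for this sketch.
enum LifeCycleState { OPEN, CLOSING, CLOSED }

class ContainerStateSketch {
  void close(LifeCycleState state) {
    switch (state) {
      case OPEN:
        // initiate the close
        break;
      case CLOSING:
      case CLOSED:
        // nothing to do
        break;
      default:
        // An explicit default satisfies FindBugs and surfaces new states.
        throw new IllegalStateException("Unexpected state: " + state);
    }
  }
}
{code}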
[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port
[ https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159526#comment-16159526 ] Xiaoyu Yao commented on HDFS-10391: --- Thanks [~arpitagarwal] and [~GergelyNovak] for the update. Patch v10 looks good to me and +1. > Always enable NameNode service RPC port > --- > > Key: HDFS-10391 > URL: https://issues.apache.org/jira/browse/HDFS-10391 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode >Reporter: Arpit Agarwal >Assignee: Gergely Novák > Labels: Incompatible > Attachments: HDFS-10391.001.patch, HDFS-10391.002.patch, > HDFS-10391.003.patch, HDFS-10391.004.patch, HDFS-10391.005.patch, > HDFS-10391.006.patch, HDFS-10391.007.patch, HDFS-10391.008.patch, > HDFS-10391.009.patch, HDFS-10391.010.patch, HDFS-10391.v5-v6-delta.patch > > > The NameNode should always be setup with a service RPC port so that it does > not have to be explicitly enabled by an administrator. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12412) Remove ErasureCodingWorker.stripedReadPool
[ https://issues.apache.org/jira/browse/HDFS-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159488#comment-16159488 ] Lei (Eddy) Xu edited comment on HDFS-12412 at 9/8/17 11:11 PM: --- Ping [~drankye] [~Sammi] [~andrew.wang] what would you think? was (Author: eddyxu): Ping [~drankye] [~Sammi] [~andrew.wang] for the inputs. > Remove ErasureCodingWorker.stripedReadPool > -- > > Key: HDFS-12412 > URL: https://issues.apache.org/jira/browse/HDFS-12412 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > > In {{ErasureCodingWorker}}, it uses {{stripedReconstructionPool}} to schedule > the EC recovery tasks, while uses {{stripedReadPool}} for the reader threads > in each recovery task. We only need one of them to throttle the speed of > recovery process, because each EC recovery task has a fix number of source > readers (i.e., 3 for RS(3,2)). And because of the findings in HDFS-12044, the > speed of EC recovery can be throttled by {{strippedReconstructionPool}} with > {{xmitsInProgress}}. > Moreover, keeping {{stripedReadPool}} makes customer difficult to understand > and calculate the right balance between > {{dfs.datanode.ec.reconstruction.stripedread.threads}}, > {{dfs.datanode.ec.reconstruction.stripedblock.threads.size}} and > {{maxReplicationStreams}}. For example, a small {{stripread.threads}} > (comparing to which {{reconstruction.threads.size}} implies), will > unnecessarily limit the speed of recovery, which leads to larger MTTR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159493#comment-16159493 ] Hadoop QA commented on HDFS-12407: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 17 unchanged - 1 fixed = 17 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}130m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | |
[jira] [Commented] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159491#comment-16159491 ] Hadoop QA commented on HDFS-12407: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 17 unchanged - 1 fixed = 17 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.tools.TestDFSZKFailoverController | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestReconstructStripedFile | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12407 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886161/HDFS-12407.02.patch | |
[jira] [Commented] (HDFS-12412) Remove ErasureCodingWorker.stripedReadPool
[ https://issues.apache.org/jira/browse/HDFS-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159488#comment-16159488 ] Lei (Eddy) Xu commented on HDFS-12412: -- Ping [~drankye] [~Sammi] [~andrew.wang] for the inputs. > Remove ErasureCodingWorker.stripedReadPool > -- > > Key: HDFS-12412 > URL: https://issues.apache.org/jira/browse/HDFS-12412 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > > In {{ErasureCodingWorker}}, it uses {{stripedReconstructionPool}} to schedule > the EC recovery tasks, while uses {{stripedReadPool}} for the reader threads > in each recovery task. We only need one of them to throttle the speed of > recovery process, because each EC recovery task has a fix number of source > readers (i.e., 3 for RS(3,2)). And because of the findings in HDFS-12044, the > speed of EC recovery can be throttled by {{strippedReconstructionPool}} with > {{xmitsInProgress}}. > Moreover, keeping {{stripedReadPool}} makes customer difficult to understand > and calculate the right balance between > {{dfs.datanode.ec.reconstruction.stripedread.threads}}, > {{dfs.datanode.ec.reconstruction.stripedblock.threads.size}} and > {{maxReplicationStreams}}. For example, a small {{stripread.threads}} > (comparing to which {{reconstruction.threads.size}} implies), will > unnecessarily limit the speed of recovery, which leads to larger MTTR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12412) Remove ErasureCodingWorker.stripedReadPool
Lei (Eddy) Xu created HDFS-12412: Summary: Remove ErasureCodingWorker.stripedReadPool Key: HDFS-12412 URL: https://issues.apache.org/jira/browse/HDFS-12412 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Affects Versions: 3.0.0-alpha3 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu {{ErasureCodingWorker}} uses {{stripedReconstructionPool}} to schedule the EC recovery tasks, while it uses {{stripedReadPool}} for the reader threads in each recovery task. We only need one of them to throttle the speed of the recovery process, because each EC recovery task has a fixed number of source readers (i.e., 3 for RS(3,2)). And because of the findings in HDFS-12044, the speed of EC recovery can be throttled by {{stripedReconstructionPool}} with {{xmitsInProgress}}. Moreover, keeping {{stripedReadPool}} makes it difficult for customers to understand and calculate the right balance between {{dfs.datanode.ec.reconstruction.stripedread.threads}}, {{dfs.datanode.ec.reconstruction.stripedblock.threads.size}} and {{maxReplicationStreams}}. For example, a small {{stripedread.threads}} value (compared to what {{reconstruction.threads.size}} implies) will unnecessarily limit the speed of recovery, which leads to a larger MTTR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12411) Ozone: Add container usage information to DN container report
Xiaoyu Yao created HDFS-12411: - Summary: Ozone: Add container usage information to DN container report Key: HDFS-12411 URL: https://issues.apache.org/jira/browse/HDFS-12411 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone, scm Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao The current DN ReportState for containers only has a counter; we will need to include individual container usage information so that SCM can: * close containers when they are full * assign containers for block service with different policies * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11676) Ozone: SCM CLI: Implement close container command
[ https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11676: -- Attachment: HDFS-11676-HDFS-7240.001.patch Thanks [~anu] for the heads-up! Post v001 patch, kindly review. > Ozone: SCM CLI: Implement close container command > - > > Key: HDFS-11676 > URL: https://issues.apache.org/jira/browse/HDFS-11676 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: ozoneMerge, tocheck > Attachments: HDFS-11676-HDFS-7240.001.patch > > > Implement close container command > {code} > hdfs scm -container close > {code} > This command connects to SCM and closes a container. Once the container is > closed in the SCM, the corresponding container is closed at the appropriate > datanode. if the container does not exist, it will return an error. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
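For readers unfamiliar with the command flow, a hypothetical sketch of what a close-container handler could look like (not the v001 patch; the {{ScmContainerClient}} interface and {{closeContainer}} method here are assumptions made for illustration):
{code:java}
// Hypothetical command-handler shape, not the actual SCM CLI code.
class CloseContainerHandlerSketch {
  interface ScmContainerClient {
    void closeContainer(String containerName) throws java.io.IOException;
  }

  private final ScmContainerClient client;

  CloseContainerHandlerSketch(ScmContainerClient client) {
    this.client = client;
  }

  /** Handles "hdfs scm -container close <name>". */
  int execute(String containerName) {
    try {
      client.closeContainer(containerName);
      System.out.println("Container " + containerName + " is closed.");
      return 0;
    } catch (java.io.IOException e) {
      // e.g. the container does not exist
      System.err.println("Failed to close container: " + e.getMessage());
      return 1;
    }
  }
}
{code}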
[jira] [Updated] (HDFS-11676) Ozone: SCM CLI: Implement close container command
[ https://issues.apache.org/jira/browse/HDFS-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Liang updated HDFS-11676: -- Status: Patch Available (was: Open) > Ozone: SCM CLI: Implement close container command > - > > Key: HDFS-11676 > URL: https://issues.apache.org/jira/browse/HDFS-11676 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Chen Liang > Labels: ozoneMerge, tocheck > Attachments: HDFS-11676-HDFS-7240.001.patch > > > Implement close container command > {code} > hdfs scm -container close > {code} > This command connects to SCM and closes a container. Once the container is > closed in the SCM, the corresponding container is closed at the appropriate > datanode. if the container does not exist, it will return an error. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159447#comment-16159447 ] Íñigo Goiri commented on HDFS-12273: The javac errors are not related to this commit; I've rebased HDFS-10467 to get rid of some of the unit tests. [~giovanni.fumarola], do you mind taking a look? > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12410) Ignore unknown StorageTypes
[ https://issues.apache.org/jira/browse/HDFS-12410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HDFS-12410: - Assignee: Ajay Kumar > Ignore unknown StorageTypes > --- > > Key: HDFS-12410 > URL: https://issues.apache.org/jira/browse/HDFS-12410 > Project: Hadoop HDFS > Issue Type: Task > Components: datanode, fs >Reporter: Chris Douglas >Assignee: Ajay Kumar >Priority: Minor > > A storage configured with an unknown type will cause runtime exceptions. > Instead, these storages can be ignored/skipped. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12409) Add metrics of execution time of different stages in EC recovery task
[ https://issues.apache.org/jira/browse/HDFS-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12409: - Attachment: HDFS-12409.00.patch Add 3 metrics to measure the time spent on reading from sources, decoding, and writing to the targets. > Add metrics of execution time of different stages in EC recovery task > - > > Key: HDFS-12409 > URL: https://issues.apache.org/jira/browse/HDFS-12409 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Attachments: HDFS-12409.00.patch > > > Admin can use more metrics to monitor EC recovery tasks, to get insights to > tune recovery performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
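As an illustration of what such stage timings could look like (a sketch only, not the attached patch; it uses plain {{System.nanoTime()}} rather than Hadoop's metrics classes, and the names are invented), each reconstruction cycle can be split into read/decode/write phases and the elapsed time of each phase accumulated separately:
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of per-stage timing for an EC recovery cycle.
class RecoveryStageTimings {
  final AtomicLong readNanos = new AtomicLong();
  final AtomicLong decodeNanos = new AtomicLong();
  final AtomicLong writeNanos = new AtomicLong();

  void reconstructOnce(Runnable readSources, Runnable decode, Runnable writeTargets) {
    long t0 = System.nanoTime();
    readSources.run();            // stage 1: read from source DataNodes
    long t1 = System.nanoTime();
    decode.run();                 // stage 2: erasure decode
    long t2 = System.nanoTime();
    writeTargets.run();           // stage 3: write to target DataNodes
    long t3 = System.nanoTime();

    readNanos.addAndGet(t1 - t0);
    decodeNanos.addAndGet(t2 - t1);
    writeNanos.addAndGet(t3 - t2);
  }
}
{code}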
[jira] [Updated] (HDFS-12409) Add metrics of execution time of different stages in EC recovery task
[ https://issues.apache.org/jira/browse/HDFS-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12409: - Status: Patch Available (was: Open) > Add metrics of execution time of different stages in EC recovery task > - > > Key: HDFS-12409 > URL: https://issues.apache.org/jira/browse/HDFS-12409 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Attachments: HDFS-12409.00.patch > > > Admin can use more metrics to monitor EC recovery tasks, to get insights to > tune recovery performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12410) Ignore unknown StorageTypes
Chris Douglas created HDFS-12410: Summary: Ignore unknown StorageTypes Key: HDFS-12410 URL: https://issues.apache.org/jira/browse/HDFS-12410 Project: Hadoop HDFS Issue Type: Task Components: datanode, fs Reporter: Chris Douglas Priority: Minor A storage configured with an unknown type will cause runtime exceptions. Instead, these storages can be ignored/skipped. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
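A minimal sketch of the skip-on-unknown idea (illustrative only; it parses a plain enum rather than the real {{StorageType}} handling in the DataNode): unknown names are logged and dropped instead of throwing at runtime.
{code:java}
import java.util.ArrayList;
import java.util.List;

class StorageTypeParsingSketch {
  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

  /** Parses storage type names, skipping (and logging) any unknown ones. */
  static List<StorageType> parseKnownTypes(List<String> names) {
    List<StorageType> result = new ArrayList<>();
    for (String name : names) {
      try {
        result.add(StorageType.valueOf(name.trim().toUpperCase()));
      } catch (IllegalArgumentException e) {
        // Unknown type (e.g. from a newer release): ignore it instead of
        // failing the whole storage configuration at runtime.
        System.err.println("Ignoring unknown storage type: " + name);
      }
    }
    return result;
  }
}
{code}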
[jira] [Updated] (HDFS-12409) Add metrics of execution time of different stages in EC recovery task
[ https://issues.apache.org/jira/browse/HDFS-12409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12409: - Summary: Add metrics of execution time of different stages in EC recovery task (was: Add metrics of execution time of EC recovery tasks) > Add metrics of execution time of different stages in EC recovery task > - > > Key: HDFS-12409 > URL: https://issues.apache.org/jira/browse/HDFS-12409 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-alpha3 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > > Admin can use more metrics to monitor EC recovery tasks, to get insights to > tune recovery performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout
[ https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159381#comment-16159381 ] Erik Krogen commented on HDFS-12323: TestLeaseRecoveryStriped is failing for me with or without this patch. I was unable to reproduce TestNameNodeMetadataConsistency, TestReencryptionHandler, TestEncryptionZones, or TestWriteReadStripedFile failures locally. Looks like the TestDFSStripedOutputStreamWithFailure* tests are flaky; I see failures on other JIRAs e.g. HDFS-12386. > NameNode terminates after full GC thinking QJM unresponsive if full GC is > much longer than timeout > -- > > Key: HDFS-12323 > URL: https://issues.apache.org/jira/browse/HDFS-12323 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, qjm >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch > > > HDFS-10733 attempted to fix the issue where the Namenode process would > terminate itself if it had a GC pause which lasted longer than the QJM > timeout, since it would think that the QJM had taken too long to respond. > However, it only bumps up the timeout expiration by one timeout length, so if > the GC pause was e.g. 2x the length of the timeout, a TimeoutException will > be thrown and the NN will still terminate itself. > Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we > have also seen this issue on a real cluster even after HDFS-10733 is applied. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
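To make the failure mode concrete (a hypothetical sketch, not the actual QuorumJournalManager code or the attached patches): if the deadline is only bumped by a single timeout after a pause is detected, a pause longer than two timeout lengths still leaves the deadline in the past; extending the deadline until it is in the future again handles arbitrarily long GC pauses.
{code:java}
class QuorumDeadlineSketch {
  /**
   * Returns a new deadline after a long pause was detected.
   * Adding a single timeoutMs (the HDFS-10733 approach) is not enough when
   * the pause exceeds 2x the timeout; extending in a loop until the deadline
   * is in the future again avoids the spurious TimeoutException.
   */
  static long extendDeadline(long oldDeadline, long now, long timeoutMs) {
    long deadline = oldDeadline;
    while (deadline <= now) {
      deadline += timeoutMs;
    }
    return deadline;
  }

  public static void main(String[] args) {
    long timeoutMs = 20_000;
    long afterPause = 65_000; // a GC pause of roughly 3x the timeout
    // A single bump would give 40_000 (< afterPause) and still time out;
    // the loop yields 80_000, which is safely in the future.
    System.out.println(extendDeadline(20_000, afterPause, timeoutMs));
  }
}
{code}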
[jira] [Created] (HDFS-12409) Add metrics of execution time of EC recovery tasks
Lei (Eddy) Xu created HDFS-12409: Summary: Add metrics of execution time of EC recovery tasks Key: HDFS-12409 URL: https://issues.apache.org/jira/browse/HDFS-12409 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Affects Versions: 3.0.0-alpha3 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Priority: Minor Admin can use more metrics to monitor EC recovery tasks, to get insights to tune recovery performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12407: -- Attachment: HDFS-12407.02.patch > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12407.01.patch, HDFS-12407.02.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
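A minimal sketch of the clean-shutdown pattern under discussion (hypothetical; the {{Server}} interface below is invented and this is not the actual JournalNode patch): if starting the HTTP or RPC server throws, the components that did start are stopped before the exception is rethrown, so the process does not keep running half-initialized.
{code:java}
// Illustrative shapes only; the real JournalNode servers differ.
interface Server {
  void start() throws java.io.IOException;
  void stop();
}

class JournalNodeStartSketch {
  private final Server httpServer;
  private final Server rpcServer;

  JournalNodeStartSketch(Server httpServer, Server rpcServer) {
    this.httpServer = httpServer;
    this.rpcServer = rpcServer;
  }

  void start() throws java.io.IOException {
    try {
      httpServer.start();   // may fail, e.g. BindException on a busy port
      rpcServer.start();
    } catch (java.io.IOException e) {
      stop();               // undo whatever was started so the JVM can exit
      throw e;
    }
  }

  void stop() {
    // A real implementation would track which components actually started.
    if (rpcServer != null) {
      rpcServer.stop();
    }
    if (httpServer != null) {
      httpServer.stop();
    }
  }
}
{code}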
[jira] [Updated] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12407: -- Attachment: (was: BUG-87639.02.patch) > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12407.01.patch, HDFS-12407.02.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159303#comment-16159303 ] Ajay Kumar edited comment on HDFS-12407 at 9/8/17 8:43 PM: --- test failures are not related. Uploaded new patch for checkstyle suggestion. was (Author: ajayydv): test failures are not related. Uploaded new patch for stylecheck suggestion. > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: BUG-87639.02.patch, HDFS-12407.01.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159303#comment-16159303 ] Ajay Kumar commented on HDFS-12407: --- test failures are not related. Uploaded new patch for stylecheck suggestion. > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: BUG-87639.02.patch, HDFS-12407.01.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12407: -- Attachment: BUG-87639.02.patch > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: BUG-87639.02.patch, HDFS-12407.01.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
[ https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159294#comment-16159294 ] Lei (Eddy) Xu commented on HDFS-10701: -- Ping [~Sammi], [~drankye] could you take a look of this? It happens more frequently now. > TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails > -- > > Key: HDFS-10701 > URL: https://issues.apache.org/jira/browse/HDFS-10701 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang > > I noticed this test failure in a recent precommit build, and I also found > this test had failed for a few times in Hadoop-Hdfs-trunk build in the past. > But I do not have sufficient knowledge to tell if it's a flaky test or a bug > in the code. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159207#comment-16159207 ] Hadoop QA commented on HDFS-12386: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 4s{color} | {color:red} hadoop-hdfs-project generated 1 new + 447 unchanged - 0 fixed = 448 total (was 447) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 54s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 418 unchanged - 0 fixed = 419 total (was 418) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 32s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestFileAppendRestart | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.TestFileAppend3 | | |
[jira] [Commented] (HDFS-12408) Many EC tests fail in trunk
[ https://issues.apache.org/jira/browse/HDFS-12408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159195#comment-16159195 ] Arpit Agarwal commented on HDFS-12408: -- I am raising this as a 3.0.0-beta1 blocker so we can drive towards clean test runs for the beta release. There are a few non-EC tests that also seem to be flaky; I will file separate Jiras for those. cc [~andrew.wang] as the 3.0.0-beta1 RM. > Many EC tests fail in trunk > --- > > Key: HDFS-12408 > URL: https://issues.apache.org/jira/browse/HDFS-12408 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: Arpit Agarwal >Priority: Blocker > > Many EC tests seem to be failing in pre-commit runs. e.g. > https://builds.apache.org/job/PreCommit-HDFS-Build/21055/testReport/ > https://builds.apache.org/job/PreCommit-HDFS-Build/21052/testReport/ > https://builds.apache.org/job/PreCommit-HDFS-Build/21048/testReport/ > This is creating a lot of noise in Jenkins runs outputs. We should either fix > or disable these tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12408) Many EC tests fail in trunk
[ https://issues.apache.org/jira/browse/HDFS-12408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12408: - Description: Many EC tests are failing in pre-commit runs. e.g. https://builds.apache.org/job/PreCommit-HDFS-Build/21055/testReport/ https://builds.apache.org/job/PreCommit-HDFS-Build/21052/testReport/ https://builds.apache.org/job/PreCommit-HDFS-Build/21048/testReport/ This is creating a lot of noise in Jenkins runs outputs. We should either fix or disable these tests. was: Many EC tests seem to be failing in pre-commit runs. e.g. https://builds.apache.org/job/PreCommit-HDFS-Build/21055/testReport/ https://builds.apache.org/job/PreCommit-HDFS-Build/21052/testReport/ https://builds.apache.org/job/PreCommit-HDFS-Build/21048/testReport/ This is creating a lot of noise in Jenkins runs outputs. We should either fix or disable these tests. > Many EC tests fail in trunk > --- > > Key: HDFS-12408 > URL: https://issues.apache.org/jira/browse/HDFS-12408 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha4 >Reporter: Arpit Agarwal >Priority: Blocker > > Many EC tests are failing in pre-commit runs. e.g. > https://builds.apache.org/job/PreCommit-HDFS-Build/21055/testReport/ > https://builds.apache.org/job/PreCommit-HDFS-Build/21052/testReport/ > https://builds.apache.org/job/PreCommit-HDFS-Build/21048/testReport/ > This is creating a lot of noise in Jenkins runs outputs. We should either fix > or disable these tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12408) Many EC tests fail in trunk
Arpit Agarwal created HDFS-12408: Summary: Many EC tests fail in trunk Key: HDFS-12408 URL: https://issues.apache.org/jira/browse/HDFS-12408 Project: Hadoop HDFS Issue Type: Bug Components: erasure-coding Affects Versions: 3.0.0-alpha4 Reporter: Arpit Agarwal Priority: Blocker Many EC tests seem to be failing in pre-commit runs. e.g. https://builds.apache.org/job/PreCommit-HDFS-Build/21055/testReport/ https://builds.apache.org/job/PreCommit-HDFS-Build/21052/testReport/ https://builds.apache.org/job/PreCommit-HDFS-Build/21048/testReport/ This is creating a lot of noise in Jenkins runs outputs. We should either fix or disable these tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159183#comment-16159183 ] Hadoop QA commented on HDFS-12407: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 17 unchanged - 1 fixed = 18 total (was 18) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}124m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestFileAppendRestart | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestFileCorruption | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12407 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886123/HDFS-12407.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux
[jira] [Commented] (HDFS-12385) Ozone: OzoneClient: Refactoring OzoneClient API
[ https://issues.apache.org/jira/browse/HDFS-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159158#comment-16159158 ] Xiaoyu Yao commented on HDFS-12385: --- Thanks [~nandakumar131] for working on this. The patch looks good to me overall. I just have a few comments below. *ClientProtocol.java* Line 38-41: NIT: suggest edit: The protocol used for communication is determined by the implementation class specified by property ozone.client.protocol. The built-in implementations include: org.apache.hadoop.ozone.client.rest.RestProtocol for REST and org.apache.hadoop.ozone.client.rpc.RpcProtocol for RPC. Please fix the javadoc for the protocol interface. Line 82-83: the @return and @throws are incorrect. Line 150: missing @param for addAcls Line 163: missing @param for removeAcls Line 170: missing @param for versioning Line 180: missing @param for storageType Line 200: missing @param for bucketName Line 219: @return is incorrect Line 229: missing @param for keyName Line 233: @return is incorrect Line 241: missing @param for keyName Line 250: missing @param for parameters ... *OzoneClient.java* I understand that we want to have a proxy handle inside the OzoneVolume/OzoneBucket/OzoneKey object so that the client can simply use the volume object to create a bucket, etc. transparently. I like the document and example between Lines 33-65; can you add one more entity on the right (ClientProtocol) and the reference to it from Store/Volume/Bucket/Key? One question I have is how do we expect the client to handle the dangling reference inside the volume/bucket/key when the AutoCloseable OzoneClient is closed? Do we need reference counting for the proxy handle before the OzoneClient can be closed? *RestProtocol.java* Suggest changing the name of the client from "RestProtocol" to "RestClient". Line 46: NIT: "Ozone client REST protocol implementation. It uses REST protocol to connect to Ozone Handler that executes client calls" Line 80: ozone.rest.servers? Line 83: Will "localhost" work in the non-standalone case? Do we have an open ticket for implementing the RestProtocol class? *RpcProtocol.java* Suggest changing the name to RpcClient instead of RpcProtocol. *TestOzoneRpcClient.java* Line 81: we will need to close ozClient to avoid leaking. > Ozone: OzoneClient: Refactoring OzoneClient API > --- > > Key: HDFS-12385 > URL: https://issues.apache.org/jira/browse/HDFS-12385 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12385-HDFS-7240.000.patch, OzoneClient.pdf > > > This jira is for refactoring {{OzoneClient}} API. [^OzoneClient.pdf] will > give an idea on how the API will look. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
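On the reference-counting question above, one way to picture the idea is a small reference-counted wrapper around the protocol proxy, sketched below. This is only an illustration of the design question under discussion, not code from the HDFS-12385 patch; the class and method names are hypothetical.
{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical reference-counted wrapper around the client protocol proxy.
 * Volume/bucket/key objects would acquire() it when created and close() it
 * when done, so the underlying transport is torn down only after the last
 * user releases it, even if the OzoneClient itself was closed earlier.
 */
class RefCountedProxy implements Closeable {
  private final Closeable transport;                 // e.g. the RPC/REST connection
  private final AtomicInteger refCount = new AtomicInteger(1);

  RefCountedProxy(Closeable transport) {
    this.transport = transport;
  }

  RefCountedProxy acquire() {
    refCount.incrementAndGet();                      // another holder of the proxy
    return this;
  }

  @Override
  public void close() throws IOException {
    if (refCount.decrementAndGet() == 0) {
      transport.close();                             // last reference released
    }
  }
}
{code}
With something like this, OzoneClient.close() would simply drop its own reference, and any still-open OzoneVolume/OzoneBucket/OzoneKey handles would keep the transport alive until they are closed.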
[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port
[ https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159143#comment-16159143 ] Arpit Agarwal commented on HDFS-10391: -- So it looks like SecondaryNameNode+Federation is already broken without this patch. I have a test case to repro the problem. The problem appears to be in DFSUtil#getSuffixIDs. So this patch appears fine to me. I will file a separate Jira to address the SNN+federation problem. [~xyao], are you still +1 on the v10 patch which addresses the TestNameNodeHttpServerXFrame failure? Thanks. > Always enable NameNode service RPC port > --- > > Key: HDFS-10391 > URL: https://issues.apache.org/jira/browse/HDFS-10391 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, namenode >Reporter: Arpit Agarwal >Assignee: Gergely Novák > Labels: Incompatible > Attachments: HDFS-10391.001.patch, HDFS-10391.002.patch, > HDFS-10391.003.patch, HDFS-10391.004.patch, HDFS-10391.005.patch, > HDFS-10391.006.patch, HDFS-10391.007.patch, HDFS-10391.008.patch, > HDFS-10391.009.patch, HDFS-10391.010.patch, HDFS-10391.v5-v6-delta.patch > > > The NameNode should always be setup with a service RPC port so that it does > not have to be explicitly enabled by an administrator. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159123#comment-16159123 ] Hadoop QA commented on HDFS-12273: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-10467 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 1s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} HDFS-10467 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 50s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 5 new + 403 unchanged - 5 fixed = 408 total (was 408) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 12s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestParallelShortCircuitReadUnCached | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestListFilesInFileContext | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12273 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886116/HDFS-12273-HDFS-10467-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 5437e609bbe6 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10467 / d522007 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac |
[jira] [Updated] (HDFS-10738) Fix TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration test failure
[ https://issues.apache.org/jira/browse/HDFS-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-10738: --- Fix Version/s: 2.7.5 > Fix TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration test > failure > > > Key: HDFS-10738 > URL: https://issues.apache.org/jira/browse/HDFS-10738 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Rakesh R >Assignee: Rakesh R > Fix For: 2.8.0, 3.0.0-alpha1, 2.7.5 > > Attachments: HDFS-10738-00.patch, HDFS-10738-01.patch > > > This jira is to analyse and fix the test case failure, which is failing in > Jenkins build, > [Build_16326|https://builds.apache.org/job/PreCommit-HDFS-Build/16326/testReport/org.apache.hadoop.security/TestRefreshUserMappings/testRefreshSuperUserGroupsConfiguration/] > very frequently. > {code} > Error Message > first auth for user2 should've succeeded: User: super_userL is not allowed to > impersonate userL2 > Stacktrace > java.lang.AssertionError: first auth for user2 should've succeeded: User: > super_userL is not allowed to impersonate userL2 > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:200) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12397) Ozone: KSM: multiple delete methods in KSMMetadataManager
[ https://issues.apache.org/jira/browse/HDFS-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159021#comment-16159021 ] Hadoop QA commented on HDFS-12397: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 54s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.ozone.container.common.impl.TestContainerPersistence | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.cblock.TestCBlockReadWrite | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | | | org.apache.hadoop.ozone.web.client.TestKeys | | | org.apache.hadoop.cblock.TestLocalBlockCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12397 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886097/HDFS-12397-HDFS-7240.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c1e1eb525208 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / e319be9 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21053/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-12406) dfsadmin command prints "Exception encountered" even if there is no exception, when debug is enabled
[ https://issues.apache.org/jira/browse/HDFS-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158987#comment-16158987 ] Hadoop QA commented on HDFS-12406: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}127m 4s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}160m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDataTransferProtocol | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestEncryptionZones | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.server.namenode.TestReencryption | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | |
[jira] [Commented] (HDFS-12406) dfsadmin command prints "Exception encountered" even if there is no exception, when debug is enabled
[ https://issues.apache.org/jira/browse/HDFS-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158983#comment-16158983 ] Hanisha Koneru commented on HDFS-12406: --- Thanks for the fix, [~nandakumar131]. The patch LGTM. +1 (non-binding). > dfsadmin command prints "Exception encountered" even if there is no > exception, when debug is enabled > - > > Key: HDFS-12406 > URL: https://issues.apache.org/jira/browse/HDFS-12406 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12406.000.patch > > > In DFSAdmin we are printing {{"Exception encountered"}} at debug level for > all the calls even if there is no exception. > {code:title=DFSAdmin#run} > if (LOG.isDebugEnabled()) { > LOG.debug("Exception encountered:", debugException); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
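For readers following along, the guard the fix is aiming for can be pictured as below: only emit the "Exception encountered" line when there is actually an exception to report. This is a standalone sketch assuming an slf4j logger, not the literal contents of HDFS-12406.000.patch.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugLogGuardExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugLogGuardExample.class);

  // Skip the debug line entirely when the throwable is null; logging the
  // message with no exception attached is just noise in the dfsadmin output.
  static void logIfPresent(Exception debugException) {
    if (debugException != null && LOG.isDebugEnabled()) {
      LOG.debug("Exception encountered:", debugException);
    }
  }

  public static void main(String[] args) {
    logIfPresent(null);                            // prints nothing
    logIfPresent(new IllegalStateException("x"));  // prints at debug level
  }
}
{code}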
[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism
[ https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158980#comment-16158980 ] Chen Liang commented on HDFS-12387: --- Also just noticed another thing...this is not really about the change added in this JIRA though. Is it correct at all to call {{ContainerProtocolCalls#createContainer}} in {{ChunkGroupOutputStream}} directly? Because {{ContainerOperationClient#createContainer}} has this state machine transition before and after calling {{ContainerProtocolCalls.createContainer}}, this gives me the impression that anywhere else should be calling {{ContainerOperationClient#createContainer}} rather than {{ContainerProtocolCalls.createContainer}} directly because doing this bypasses that state machine... Please correct me if I'm wrong... > Ozone: Support Ratis as a first class replication mechanism > --- > > Key: HDFS-12387 > URL: https://issues.apache.org/jira/browse/HDFS-12387 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Critical > Labels: ozoneMerge > Attachments: HDFS-12387-HDFS-7240.001.patch > > > Ozone container layer supports pluggable replication policies. This JIRA > brings Apache Ratis based replication to Ozone. Apache Ratis is a java > implementation of Raft protocol. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
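To make the layering concern concrete, here is a toy sketch of the pattern being described: the client-level createContainer drives the state machine around the raw protocol call, so invoking the protocol call directly skips those transitions. All names below are illustrative stand-ins, not the actual Ozone classes or signatures.
{code:java}
// Illustration only: hypothetical names, not the real ContainerOperationClient
// or ContainerProtocolCalls APIs.
class ContainerClientSketch {
  enum LifeCycleEvent { BEGIN_CREATE, COMPLETE_CREATE }

  interface StateManager {
    void updateContainerState(String name, LifeCycleEvent event);
  }

  interface ProtocolCalls {
    void createContainer(String name);             // the raw datanode call
  }

  private final StateManager stateManager;
  private final ProtocolCalls protocol;

  ContainerClientSketch(StateManager stateManager, ProtocolCalls protocol) {
    this.stateManager = stateManager;
    this.protocol = protocol;
  }

  // The wrapper owns the lifecycle bookkeeping; callers that invoke
  // protocol.createContainer() directly would skip both transitions below.
  void createContainer(String name) {
    stateManager.updateContainerState(name, LifeCycleEvent.BEGIN_CREATE);
    protocol.createContainer(name);
    stateManager.updateContainerState(name, LifeCycleEvent.COMPLETE_CREATE);
  }
}
{code}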
[jira] [Updated] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12407: -- Attachment: HDFS-12407.01.patch > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12407.01.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12407: -- Status: Patch Available (was: In Progress) > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12407.01.patch > > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServerr to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with bind > exception while journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
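The reproduction steps in the description amount to a port conflict on dfs.journalnode.http-address. A rough standalone sketch of setting that up follows; the key name is the only detail taken from the report, and the JournalNode startup itself is elided because the exact test scaffolding is not shown in this issue.
{code:java}
import java.net.ServerSocket;
import org.apache.hadoop.conf.Configuration;

public class JournalNodePortConflictRepro {
  public static void main(String[] args) throws Exception {
    // Occupy an arbitrary free port, then point the JournalNode HTTP server at it.
    try (ServerSocket occupied = new ServerSocket(0)) {
      Configuration conf = new Configuration();
      conf.set("dfs.journalnode.http-address",
          "127.0.0.1:" + occupied.getLocalPort());   // force a bind clash
      // Starting a JournalNode with this conf should fail with a BindException
      // in JournalNodeHttpServer while the JVM keeps running -- the bug above.
    }
  }
}
{code}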
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Attachment: HDFS-12386-1.patch 1. bq. What happens when a new WebHdfsFileSystem instance tries to talk to an older namenode? Good catch. Added the logic to handle the case when new client talking to old namenode and added a test case for that too. 2. Fixed checkstyle warnings except following one. {noformat} ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java:1141: case GETSERVERDEFAULTS: {:29: Avoid nested blocks. [AvoidNestedBlocks] {noformat} Just followed the pattern among the other switch case statement. 3. Regarding javac warning. {noformat} [WARNING] /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java:[1825,26] [deprecation] getServerDefaults() in FileSystem has been deprecated {noformat} DistributedFileSystem also overrides {{getServerDefaults()}} so kept it as it is. 4. Regarding test failures. All the erasure coding related test failures are fairly consistent. They are failing in almost all the builds. Following are the test cases other than EC related ones. TestDirectoryScanner, TestJournalNodeSync, TestClientProtocolForPipelineRecovery {noformat} Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeSync Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.89 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeSync Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 151.758 sec - in org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 78.374 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery testZeroByteBlockRecovery(org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery) Time elapsed: 12.584 sec <<< ERROR! java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:56534,DS-b0ecc785-b07e-4f09-8aac-62eb31911401,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:56534,DS-b0ecc785-b07e-4f09-8aac-62eb31911401,DISK]]). The current failed datanode replacement policy is ALWAYS, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration. at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1317) at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1387) at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1586) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1487) at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1469) at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1273) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:684) {noformat} TestClientProtocolForPipelineRecovery#testZeroByteBlockRecovery fails even without my patch also. So all the test failures are unrelated. > Add fsserver defaults call to WebhdfsFileSystem. 
> > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
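From the caller's perspective, the new-client/old-namenode concern discussed above looks roughly like the sketch below: ask WebHDFS for the server defaults and fall back to client-side configuration if the remote NameNode does not support the new GETSERVERDEFAULTS op. The webhdfs URI and the exception types caught here are assumptions for illustration, not details taken from the patch.
{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsServerDefaults;
import org.apache.hadoop.fs.Path;

public class WebHdfsServerDefaultsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem webhdfs =
        FileSystem.get(URI.create("webhdfs://namenode:50070"), conf);

    FsServerDefaults defaults;
    try {
      defaults = webhdfs.getServerDefaults(new Path("/"));
    } catch (UnsupportedOperationException | IOException e) {
      defaults = null;   // older NameNode: fall back to local configuration
    }

    System.out.println(defaults == null
        ? "Server defaults unavailable, using client-side configuration"
        : "Server block size: " + defaults.getBlockSize());
  }
}
{code}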
[jira] [Updated] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.
[ https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rushabh S Shah updated HDFS-12386: -- Status: Patch Available (was: Open) > Add fsserver defaults call to WebhdfsFileSystem. > > > Key: HDFS-12386 > URL: https://issues.apache.org/jira/browse/HDFS-12386 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Minor > Attachments: HDFS-12386-1.patch, HDFS-12386.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
[ https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158935#comment-16158935 ] Hadoop QA commented on HDFS-12235: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 2s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 12 unchanged - 2 fixed = 12 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}147m 58s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.ozone.web.client.TestKeys | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | |
[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism
[ https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158934#comment-16158934 ] Chen Liang commented on HDFS-12387: --- Thanks Anu for the patch! I'm still reading through the patch, just one random comment for now. Why is {{ContainerStateManager#containers}} using PriorityQueue? Seems only add and remove are being used, so maybe using just a HashSet rather than PriorityQueue? > Ozone: Support Ratis as a first class replication mechanism > --- > > Key: HDFS-12387 > URL: https://issues.apache.org/jira/browse/HDFS-12387 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Anu Engineer >Assignee: Anu Engineer >Priority: Critical > Labels: ozoneMerge > Attachments: HDFS-12387-HDFS-7240.001.patch > > > Ozone container layer supports pluggable replication policies. This JIRA > brings Apache Ratis based replication to Ozone. Apache Ratis is a java > implementation of Raft protocol. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
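A toy illustration of the suggestion: if the collection is only ever added to and removed from, a HashSet gives constant-time add/remove and avoids the ordering work a PriorityQueue does on every operation. The field and method names below are made up, not the actual ContainerStateManager members.
{code:java}
import java.util.HashSet;
import java.util.Set;

public class ContainerSetExample {
  // Unordered membership is all that is needed for plain add/remove usage.
  private final Set<String> containers = new HashSet<>();

  void addContainer(String name)    { containers.add(name); }
  void removeContainer(String name) { containers.remove(name); }

  public static void main(String[] args) {
    ContainerSetExample s = new ContainerSetExample();
    s.addContainer("container-1");
    s.removeContainer("container-1");
  }
}
{code}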
[jira] [Updated] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12273: --- Attachment: HDFS-12273-HDFS-10467-003.patch Rebased after HDFS-12335. > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12335) Federation Metrics
[ https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158889#comment-16158889 ] Íñigo Goiri commented on HDFS-12335: Committed into HDFS-10467. Thanks [~chris.douglas] for the review! > Federation Metrics > -- > > Key: HDFS-12335 > URL: https://issues.apache.org/jira/browse/HDFS-12335 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Giovanni Matteo Fumarola >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12335-HDFS-10467-000.patch, > HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch, > HDFS-12335-HDFS-10467-003.patch, HDFS-12335-HDFS-10467-004.patch, > HDFS-12335-HDFS-10467-005.patch, HDFS-12335-HDFS-10467.006.patch, > HDFS-12335-HDFS-10467.007.patch, HDFS-12335-HDFS-10467.008.patch, > HDFS-12335-HDFS-10467.009.patch, HDFS-12335-HDFS-10467.010.patch, > HDFS-12335-HDFS-10467.011.patch, HDFS-12335-HDFS-10467.012.patch > > > Add metrics for the Router and the State Store. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12335) Federation Metrics
[ https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12335: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Federation Metrics > -- > > Key: HDFS-12335 > URL: https://issues.apache.org/jira/browse/HDFS-12335 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Giovanni Matteo Fumarola >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12335-HDFS-10467-000.patch, > HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch, > HDFS-12335-HDFS-10467-003.patch, HDFS-12335-HDFS-10467-004.patch, > HDFS-12335-HDFS-10467-005.patch, HDFS-12335-HDFS-10467.006.patch, > HDFS-12335-HDFS-10467.007.patch, HDFS-12335-HDFS-10467.008.patch, > HDFS-12335-HDFS-10467.009.patch, HDFS-12335-HDFS-10467.010.patch, > HDFS-12335-HDFS-10467.011.patch, HDFS-12335-HDFS-10467.012.patch > > > Add metrics for the Router and the State Store. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12404) Rename hdfs config authorization.provider.bypass.users to attributes.provider.bypass.users
[ https://issues.apache.org/jira/browse/HDFS-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-12404: - Description: HDFS-12357 introduced a new config 'dfs.namenode.inode.attributes.provider.bypass.users' to allow configured users to bypass external attributes provider. However, after committing HDFS-12357, noticed that a different config name 'dfs.namenode.authorization.provider.bypass.users' is put into hdfs-default.xml, which is different than what used in java code. And this is not correct. Creating this jira to fix the hdfs-default.xml one to be consistent with the implementation was: HDFS-12357 introduced a new config 'dfs.namenode.authorization.provider.bypass.users' to allow users to skip external attributes provider totally. Name of the config needs to be 'dfs.namenode.inode.attributes.provider.bypass.users' for correctness purposes and to align with the existing config 'dfs.namenode.inode.attributes.provider.class'. > Rename hdfs config authorization.provider.bypass.users to > attributes.provider.bypass.users > -- > > Key: HDFS-12404 > URL: https://issues.apache.org/jira/browse/HDFS-12404 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.0.0-beta1 >Reporter: Yongjun Zhang >Assignee: Manoj Govindassamy > Fix For: 3.0.0-beta1 > > Attachments: HDFS-12404.01.patch > > > HDFS-12357 introduced a new config > 'dfs.namenode.inode.attributes.provider.bypass.users' to allow configured > users to bypass external attributes provider. However, after committing > HDFS-12357, noticed that a different config name > 'dfs.namenode.authorization.provider.bypass.users' is put into > hdfs-default.xml, which is different than what used in java code. And this is > not correct. > Creating this jira to fix the hdfs-default.xml one to be consistent with the > implementation -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
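For reference, a minimal sketch of the corrected key in use: the property name comes from the description above, while the user list is invented for illustration.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class BypassUsersConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The Java code reads this key, so hdfs-default.xml/hdfs-site.xml must
    // use the same name rather than the old "authorization.provider" form.
    conf.set("dfs.namenode.inode.attributes.provider.bypass.users",
        "hdfs,superuser1");
    System.out.println(
        conf.get("dfs.namenode.inode.attributes.provider.bypass.users"));
  }
}
{code}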
[jira] [Closed] (HDFS-12326) What is the correct way of retrying when failure occurs during writing
[ https://issues.apache.org/jira/browse/HDFS-12326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal closed HDFS-12326. > What is the correct way of retrying when failure occurs during writing > -- > > Key: HDFS-12326 > URL: https://issues.apache.org/jira/browse/HDFS-12326 > Project: Hadoop HDFS > Issue Type: Test > Components: hdfs-client >Reporter: ZhangBiao > > I'm using hdfs client for golang https://github.com/colinmarc/hdfs to write > to the hdfs. And I'm using hadoop 2.7.3 > When the number of files concurrently being opened is larger, for example > 200. I'll always get the 'broken pipe' error. > So I want to retry to continue writing. What is the correct way of retrying? > Because https://github.com/colinmarc/hdfs hasn't been able to recover the > stream status when an error occurs duing writing, so I have to reopen and get > a new stream. So I tried the following steps: > 1 close the current stream > 2 Append the file to get a new stream > But when I close the stream, I got the error "updateBlockForPipeline call > failed with ERROR_APPLICATION (java.io.IOException" > and it seems the namenode complains: > {code:java} > 2017-08-20 03:22:55,598 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 2 on 9000, call > org.apache.hadoop.hdfs.protocol.ClientProtocol.updateBlockForPipeline from > 192.168.0.39:46827 Call#50183 Retry#-1 > java.io.IOException: > BP-1152809458-192.168.0.39-1502261411064:blk_1073825071_111401 does not exist > or is not under Constructionblk_1073825071_111401{UCState=COMMITTED, > truncateBlock=null, primaryNodeIndex=-1, > replicas=[ReplicaUC[[DISK]DS-d61914ba-df64-467b-bb75-272875e5e865:NORMAL:192.168.0.39:50010|RBW], > > ReplicaUC[[DISK]DS-1314debe-ab08-4001-ab9a-8e234f28f87c:NORMAL:192.168.0.38:50010|RBW]]} > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6241) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6309) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:806) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) > 2017-08-20 03:22:56,333 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* > blk_1073825071_111401{UCState=COMMITTED, truncateBlock=null, > primaryNodeIndex=-1, > replicas=[ReplicaUC[[DISK]DS-d61914ba-df64-467b-bb75-272875e5e865:NORMAL:192.168.0.39:50010|RBW], > > ReplicaUC[[DISK]DS-1314debe-ab08-4001-ab9a-8e234f28f87c:NORMAL:192.168.0.38:50010|RBW]]} > is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in > file > /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log > 
{code} > when I Appended to get a new stream, I got the error 'append call failed with > ERROR_APPLICATION > (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException)', and the > corresponding error in namenode is: > {code:java} > 2017-08-20 03:22:56,335 WARN org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.append: Failed to APPEND_FILE > /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log > for go-hdfs-OAfvZiSUM2Eu894p on 192.168.0.39 because > go-hdfs-OAfvZiSUM2Eu894p is already the current lease holder. > 2017-08-20 03:22:56,335 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 0 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.append from > 192.168.0.39:46827 Call#50186 Retry#-1: > org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to > APPEND_FILE > /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log > for go-hdfs-OAfvZiSUM2Eu894p on 192.168.0.39 because > go-hdfs-OAfvZiSUM2Eu894p is already the current lease holder. > {code} > Could you please suggest the correct way of retrying of the client side when > write fails? -- This
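The AlreadyBeingCreatedException in the report is the NameNode refusing a new append while the previous writer's lease is still held. One common (but not the only) recovery pattern, sketched here in Java rather than the Go client from the report, is to ask the NameNode to recover the lease before re-opening the file for append; the retry count and sleep below are arbitrary assumptions.
{code:java}
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AppendRetryExample {
  static FSDataOutputStream reopenForAppend(DistributedFileSystem fs, Path file)
      throws Exception {
    for (int attempt = 0; attempt < 10; attempt++) {
      // Ask the NameNode to release the previous writer's lease and close
      // the file; returns true once the file is closed and safe to reopen.
      if (fs.recoverLease(file)) {
        return fs.append(file);
      }
      Thread.sleep(1000);          // wait for block recovery to finish
    }
    throw new java.io.IOException("Lease recovery did not complete for " + file);
  }
}
{code}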
[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x
[ https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158780#comment-16158780 ] Sean Mackrory commented on HDFS-11096: -- Thanks for the reviews. I was hoping you'd take a look [~aw]! I'll update the patch and address these comments soon. I've also been reviewing more recent JACC reports. There are still a few incompatibilities that technically violate the contract that I mentioned above, like metrics being replaced by metrics2, s3:// disappearing entirely (but neither being labelled as deprecated for all of 2.x), some things that should not have been used publicly (like LOGs) changing data types, etc. These are things that, from a practical standpoint, have been known about by many for a long time without any concern being raised, and there's significant baggage to addressing them. Does anybody think they warrant further action? I'm inclined to say no... > Support rolling upgrade between 2.x and 3.x > --- > > Key: HDFS-11096 > URL: https://issues.apache.org/jira/browse/HDFS-11096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rolling upgrades >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Sean Mackrory >Priority: Blocker > Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch > > > trunk has a minimum software version of 3.0.0-alpha1. This means we can't > do a rolling upgrade between branch-2 and trunk. > This is a showstopper for large deployments. Unless there are very compelling > reasons to break compatibility, let's restore the ability to do a rolling upgrade > to 3.x releases. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12335) Federation Metrics
[ https://issues.apache.org/jira/browse/HDFS-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158782#comment-16158782 ] Chris Douglas commented on HDFS-12335: -- bq. The only one would be HdfsClientConfigKeys but it complains about dfs.namenode.inode.attributes.provider.bypass.users Yes, this was resolved in HDFS-12404 Still lgtm, +1 > Federation Metrics > -- > > Key: HDFS-12335 > URL: https://issues.apache.org/jira/browse/HDFS-12335 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Giovanni Matteo Fumarola >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12335-HDFS-10467-000.patch, > HDFS-12335-HDFS-10467-001.patch, HDFS-12335-HDFS-10467-002.patch, > HDFS-12335-HDFS-10467-003.patch, HDFS-12335-HDFS-10467-004.patch, > HDFS-12335-HDFS-10467-005.patch, HDFS-12335-HDFS-10467.006.patch, > HDFS-12335-HDFS-10467.007.patch, HDFS-12335-HDFS-10467.008.patch, > HDFS-12335-HDFS-10467.009.patch, HDFS-12335-HDFS-10467.010.patch, > HDFS-12335-HDFS-10467.011.patch, HDFS-12335-HDFS-10467.012.patch > > > Add metrics for the Router and the State Store. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11821) BlockManager.getMissingReplOneBlocksCount() does not report correct value if corrupt file with replication factor of 1 gets deleted
[ https://issues.apache.org/jira/browse/HDFS-11821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158762#comment-16158762 ] Wellington Chevreuil commented on HDFS-11821: - Thanks for the review and relevant comments [~raviprak]! Regarding the tests, I have these passing locally. Also, apart from *hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080*, these are totally different from previous build. Also, I don't think the patch changes would influence pipeline/append/reads, as it's only concerning to metrics update. Pasting test output snippets from my local build: {noformat} --- T E S T S --- Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.39 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 Results : Tests run: 14, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestFileAppendRestart Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.935 sec - in org.apache.hadoop.hdfs.TestFileAppendRestart Results : Tests run: 3, Failures: 0, Errors: 0, Skipped: 0 ... Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.347 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 Results : Tests run: 14, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.274 sec - in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding Results : Tests run: 5, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestLeaseRecoveryStriped Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.307 sec - in org.apache.hadoop.hdfs.TestLeaseRecoveryStriped Results : Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.726 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 Results : Tests run: 14, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestPipelines Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.92 sec - in org.apache.hadoop.hdfs.TestPipelines Results : Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.293 sec - in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 Results : Tests run: 14, Failures: 0, Errors: 0, Skipped: 0 ... --- T E S T S --- Running org.apache.hadoop.hdfs.TestWriteReadStripedFile Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 73.311 sec - in org.apache.hadoop.hdfs.TestWriteReadStripedFile Results : Tests run: 17, Failures: 0, Errors: 0, Skipped: 0 {noformat} bq. My concern with your patch is that remove will now be a bit slower. I think I remember there used to be a time when deletes were holding up the lock for a long time. Kihwal Lee Do you have an objection? I guess the major concern here is that *countNodes* method iterates over the block *StorageInfo* objects, checking the replica state on each storage to decide how is replication health. 
I suppose this is limited by the block replication factor, so it wouldn't be a large loop. Is this a correct assumption, or would there still be some other overhead that could impact delete performance? bq. I'm also wondering what happens when the information returned by countNodes is inaccurate (i.e. HDFS hasn't yet realized that the block is corrupt) In that case, I believe the block would not be on any of the priority queues from *LowRedundancyBlocks*. A call to *LowRedundancyBlocks.remove* would not find any block then, so no counters would be updated,
[jira] [Work started] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12407 started by Ajay Kumar. - > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServer to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with a bind > exception while the journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
[ https://issues.apache.org/jira/browse/HDFS-12407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar reassigned HDFS-12407: - Assignee: Ajay Kumar > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start > --- > > Key: HDFS-12407 > URL: https://issues.apache.org/jira/browse/HDFS-12407 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar > > Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or > JournalNodeRpcServer fails to start. > Steps to recreate the issue: > # Change the http port for JournalNodeHttpServer to some port which is > already in use > {code}dfs.journalnode.http-address{code} > # Start the journalnode. JournalNodeHttpServer start will fail with a bind > exception while the journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12407) Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start
Ajay Kumar created HDFS-12407: - Summary: Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start Key: HDFS-12407 URL: https://issues.apache.org/jira/browse/HDFS-12407 Project: Hadoop HDFS Issue Type: Bug Reporter: Ajay Kumar Journal nodes fails to shutdown cleanly if JournalNodeHttpServer or JournalNodeRpcServer fails to start. Steps to recreate the issue: # Change the http port for JournalNodeHttpServer to some port which is already in use {code}dfs.journalnode.http-address{code} # Start the journalnode. JournalNodeHttpServer start will fail with a bind exception while the journalnode process will continue to run. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
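[Editorial note] For readers reproducing the bug, the first step amounts to pointing the JournalNode HTTP server at a port that is already bound on the host. A hypothetical hdfs-site.xml fragment is shown below; only the property name comes from the report, and the port value is purely an example of a port that may already be taken by another daemon:
{code:xml}
<property>
  <!-- Point the JournalNode HTTP server at a port already bound by another
       process. On startup the HTTP server fails with a BindException while,
       per the report, the JournalNode JVM keeps running instead of shutting
       down cleanly. -->
  <name>dfs.journalnode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>
{code}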
[jira] [Updated] (HDFS-12397) Ozone: KSM: multiple delete methods in KSMMetadataManager
[ https://issues.apache.org/jira/browse/HDFS-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12397: -- Status: Patch Available (was: Open) > Ozone: KSM: multiple delete methods in KSMMetadataManager > - > > Key: HDFS-12397 > URL: https://issues.apache.org/jira/browse/HDFS-12397 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12397-HDFS-7240.000.patch > > > {{KSMMetadataManager}} has two delete methods which do the same thing. > * {{void delete(byte[] key) throws IOException}} > * {{void deleteKey(byte[] key) throws IOException}} > One can be removed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12397) Ozone: KSM: multiple delete methods in KSMMetadataManager
[ https://issues.apache.org/jira/browse/HDFS-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12397: -- Attachment: HDFS-12397-HDFS-7240.000.patch > Ozone: KSM: multiple delete methods in KSMMetadataManager > - > > Key: HDFS-12397 > URL: https://issues.apache.org/jira/browse/HDFS-12397 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > Labels: ozoneMerge > Attachments: HDFS-12397-HDFS-7240.000.patch > > > {{KSMMetadataManager}} has two delete methods which do the same thing. > * {{void delete(byte[] key) throws IOException}} > * {{void deleteKey(byte[] key) throws IOException}} > One can be removed. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
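[Editorial note] A sketch of the kind of consolidation the issue asks for, with one method delegating to the other until callers are migrated. Only the two signatures quoted above come from the source; the abbreviated interface body and the deprecation strategy are illustrative, not the actual patch:
{code:java}
import java.io.IOException;

// Abbreviated sketch; not the real Ozone interface, just the two duplicated
// signatures from the issue and one possible consolidation path.
public interface KSMMetadataManager {

  /** Deletes the record keyed by {@code key} from the metadata store. */
  void delete(byte[] key) throws IOException;

  /**
   * Duplicate of {@link #delete(byte[])}; kept only until callers are
   * migrated, then removed as the issue suggests.
   */
  @Deprecated
  default void deleteKey(byte[] key) throws IOException {
    delete(key);
  }
}
{code}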
[jira] [Commented] (HDFS-12294) Let distcp to bypass external attribute provider when calling getFileStatus etc at source cluster
[ https://issues.apache.org/jira/browse/HDFS-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158741#comment-16158741 ] Yongjun Zhang commented on HDFS-12294: -- Thanks a lot [~chris.douglas]. This issue is addressed by HDFS-12357 with a different approach. > Let distcp to bypass external attribute provider when calling getFileStatus > etc at source cluster > - > > Key: HDFS-12294 > URL: https://issues.apache.org/jira/browse/HDFS-12294 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > > This is an alternative solution for HDFS-12202, which proposed introducing a > new set of APIs with an additional boolean parameter bypassExtAttrProvider, > to let NN bypass the external attribute provider during getFileStatus. The goal > is to keep distcp from copying attributes from one cluster's external > attribute provider and saving them to another cluster's fsimage. > The solution here is, instead of having an additional parameter, to encode this > parameter into the path itself: when calling getFileStatus (and some other > calls), NN will parse the path and figure out whether the external > attribute provider needs to be bypassed. The suggested encoding is to add a > prefix to the path before calling getFileStatus, e.g. /a/b/c becomes > /.reserved/bypassExtAttr/a/b/c. NN will parse the path at the very beginning. > Thanks much to [~andrew.wang] for this suggestion. The scope of change is > smaller and we don't have to change the FileSystem APIs. > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
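[Editorial note] To make the proposed encoding concrete: on the client side it is just string-prefixing the path before the RPC, with the NameNode doing the stripping. A minimal sketch is below; the prefix constant is taken from the description, everything else (class, method, the fact that the NameNode would honor it) is illustrative, and the approach was ultimately not pursued (HDFS-12357 took a different route):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BypassExtAttrExample {
  // Prefix proposed in the description; the NameNode would strip it and skip
  // the external attribute provider for the remaining path.
  private static final String BYPASS_PREFIX = "/.reserved/bypassExtAttr";

  static FileStatus getRawFileStatus(FileSystem fs, Path src) throws Exception {
    Path prefixed = new Path(BYPASS_PREFIX + Path.getPathWithoutSchemeAndAuthority(src));
    return fs.getFileStatus(prefixed);
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    System.out.println(getRawFileStatus(fs, new Path("/a/b/c")));
  }
}
{code}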
[jira] [Resolved] (HDFS-12296) Add a field to FsServerDefaults to tell if external attribute provider is enabled
[ https://issues.apache.org/jira/browse/HDFS-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang resolved HDFS-12296. -- Resolution: Won't Fix > Add a field to FsServerDefaults to tell if external attribute provider is > enabled > - > > Key: HDFS-12296 > URL: https://issues.apache.org/jira/browse/HDFS-12296 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr
[ https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-12295: - Resolution: Won't Fix Status: Resolved (was: Patch Available) > NameNode to support file path prefix /.reserved/bypassExtAttr > - > > Key: HDFS-12295 > URL: https://issues.apache.org/jira/browse/HDFS-12295 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs, namenode >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch > > > Let NameNode support the prefix /.reserved/bypassExtAttr, so a client can add > this prefix to a path before calling getFileStatus, e.g. /a/b/c becomes > /.reserved/bypassExtAttr/a/b/c. NN will parse the path at the very beginning, > and bypass the external attribute provider if the prefix is there. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-12294) Let distcp to bypass external attribute provider when calling getFileStatus etc at source cluster
[ https://issues.apache.org/jira/browse/HDFS-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang resolved HDFS-12294. -- Resolution: Won't Fix > Let distcp to bypass external attribute provider when calling getFileStatus > etc at source cluster > - > > Key: HDFS-12294 > URL: https://issues.apache.org/jira/browse/HDFS-12294 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > > This is an alternative solution for HDFS-12202, which proposed introducing a > new set of APIs with an additional boolean parameter bypassExtAttrProvider, > to let NN bypass the external attribute provider during getFileStatus. The goal > is to keep distcp from copying attributes from one cluster's external > attribute provider and saving them to another cluster's fsimage. > The solution here is, instead of having an additional parameter, to encode this > parameter into the path itself: when calling getFileStatus (and some other > calls), NN will parse the path and figure out whether the external > attribute provider needs to be bypassed. The suggested encoding is to add a > prefix to the path before calling getFileStatus, e.g. /a/b/c becomes > /.reserved/bypassExtAttr/a/b/c. NN will parse the path at the very beginning. > Thanks much to [~andrew.wang] for this suggestion. The scope of change is > smaller and we don't have to change the FileSystem APIs. > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12406) dfsadmin command prints "Exception encountered" even if there is no exception, when debug is enabled
[ https://issues.apache.org/jira/browse/HDFS-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12406: -- Status: Patch Available (was: Open) > dfsadmin command prints "Exception encountered" even if there is no > exception, when debug is enabled > - > > Key: HDFS-12406 > URL: https://issues.apache.org/jira/browse/HDFS-12406 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12406.000.patch > > > In DFSAdmin we are printing {{"Exception encountered"}} at debug level for > all the calls even if there is no exception. > {code:title=DFSAdmin#run} > if (LOG.isDebugEnabled()) { > LOG.debug("Exception encountered:", debugException); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12406) dfsadmin command prints "Exception encountered" even if there is no exception, when debug is enabled
[ https://issues.apache.org/jira/browse/HDFS-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12406: -- Attachment: HDFS-12406.000.patch > dfsadmin command prints "Exception encountered" even if there is no > exception, when debug is enabled > - > > Key: HDFS-12406 > URL: https://issues.apache.org/jira/browse/HDFS-12406 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12406.000.patch > > > In DFSAdmin we are printing {{"Exception encountered"}} at debug level for > all the calls even if there is no exception. > {code:title=DFSAdmin#run} > if (LOG.isDebugEnabled()) { > LOG.debug("Exception encountered:", debugException); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12406) dfsadmin command prints "Exception encountered" even if there is no exception, when debug is enabled
Nandakumar created HDFS-12406: - Summary: dfsadmin command prints "Exception encountered" even if there is no exception, when debug is enabled Key: HDFS-12406 URL: https://issues.apache.org/jira/browse/HDFS-12406 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Reporter: Nandakumar Assignee: Nandakumar Priority: Minor In DFSAdmin we are printing {{"Exception encountered"}} at debug level for all the calls even if there is no exception. {code:title=DFSAdmin#run} if (LOG.isDebugEnabled()) { LOG.debug("Exception encountered:", debugException); } {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
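[Editorial note] The description's snippet logs unconditionally whenever debug is enabled. One way to avoid the misleading message, consistent with what the report implies, is to guard the log call with a null check; a minimal self-contained sketch (the class, method, and logger setup are illustrative, not the actual DFSAdmin patch):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class GuardedDebugLogging {
  private static final Logger LOG = LoggerFactory.getLogger(GuardedDebugLogging.class);

  // Only emit the "Exception encountered" message when an exception was
  // actually captured, rather than on every command run at debug level.
  static void logIfFailed(Exception debugException) {
    if (debugException != null && LOG.isDebugEnabled()) {
      LOG.debug("Exception encountered:", debugException);
    }
  }

  public static void main(String[] args) {
    logIfFailed(null);                          // prints nothing
    logIfFailed(new IllegalStateException());   // prints only at debug level
  }
}
{code}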
[jira] [Updated] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions
[ https://issues.apache.org/jira/browse/HDFS-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-12235: --- Attachment: HDFS-12235-HDFS-7240.011.patch Fixed a minor checkstyle issue in the v11 patch. > Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions > --- > > Key: HDFS-12235 > URL: https://issues.apache.org/jira/browse/HDFS-12235 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: ozoneMerge > Attachments: HDFS-12235-HDFS-7240.001.patch, > HDFS-12235-HDFS-7240.002.patch, HDFS-12235-HDFS-7240.003.patch, > HDFS-12235-HDFS-7240.004.patch, HDFS-12235-HDFS-7240.005.patch, > HDFS-12235-HDFS-7240.006.patch, HDFS-12235-HDFS-7240.007.patch, > HDFS-12235-HDFS-7240.008.patch, HDFS-12235-HDFS-7240.009.patch, > HDFS-12235-HDFS-7240.010.patch, HDFS-12235-HDFS-7240.011.patch > > > KSM and SCM interaction for the delete key operation: both KSM and SCM store key > state info in a backlog; KSM needs to scan this log and send block-deletion > commands to SCM, and once SCM is fully aware of the message, KSM removes the key > completely from the namespace. See more in the design doc under HDFS-11922; > this is task breakdown 2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
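[Editorial note] A deliberately simplified sketch of the scan/send/ack loop described in that summary. Every type and method name here is a hypothetical stand-in, not the real KSM or SCM interface; the only thing taken from the source is the shape of the protocol (send a deletion command, drop the key from the backlog only after SCM acknowledges it):
{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical stand-ins for the pieces described in the summary.
interface DeletionBacklog {
  List<String> pendingDeletedKeys(int limit) throws IOException;
  void remove(String key) throws IOException;
}

interface ScmBlockClient {
  /** Returns true once SCM has durably recorded the block-deletion command. */
  boolean sendBlockDeletion(String key) throws IOException;
}

public class KeyDeletionScanner {
  private final DeletionBacklog backlog;
  private final ScmBlockClient scm;

  KeyDeletionScanner(DeletionBacklog backlog, ScmBlockClient scm) {
    this.backlog = backlog;
    this.scm = scm;
  }

  /** One scan pass: send deletion commands and drop only the acknowledged keys. */
  void scanOnce(int batchSize) throws IOException {
    for (String key : backlog.pendingDeletedKeys(batchSize)) {
      if (scm.sendBlockDeletion(key)) {
        backlog.remove(key); // the key leaves the namespace only after the ack
      }
      // unacknowledged keys stay in the backlog and are retried on the next scan
    }
  }
}
{code}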
[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158615#comment-16158615 ] Hadoop QA commented on HDFS-7859: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 60 unchanged - 0 fixed = 62 total (was 60) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 1s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}140m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestFileAppend4 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestDecommissionWithStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDatanodeLayoutUpgrade | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | |
[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests
[ https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158558#comment-16158558 ] Hadoop QA commented on HDFS-12268: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 49s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 35s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestSafeModeWithStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.web.TestWebHdfsTokens | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.ozone.web.client.TestKeys | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.web.TestWebHDFSForHA | | |
[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158522#comment-16158522 ] Hadoop QA commented on HDFS-11754: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 3s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}156m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDataTransferProtocol | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestEncryptionZones | | |
[jira] [Commented] (HDFS-12398) Use JUnit Paramaterized test suite in TestWriteReadStripedFile
[ https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158483#comment-16158483 ] Hadoop QA commented on HDFS-12398: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 405 unchanged - 3 fixed = 405 total (was 408) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 0 unchanged - 7 fixed = 0 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 19s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}119m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestLargeBlock | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12398 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12886026/HDFS-12398.002.patch | | Optional Tests | asflicense compile javac
[jira] [Updated] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-7859: Attachment: HDFS-7859.016.patch > Erasure Coding: Persist erasure coding policies in NameNode > --- > > Key: HDFS-7859 > URL: https://issues.apache.org/jira/browse/HDFS-7859 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, > HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, > HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, > HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, > HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859.015.patch, > HDFS-7859.016.patch, HDFS-7859-HDFS-7285.002.patch, > HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch > > > In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we > persist EC schemas in NameNode centrally and reliably, so that EC zones can > reference them by name efficiently. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12268) Ozone: Add metrics for pending storage container requests
[ https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-12268: - Attachment: HDFS-12268-HDFS-7240.006.patch > Ozone: Add metrics for pending storage container requests > - > > Key: HDFS-12268 > URL: https://issues.apache.org/jira/browse/HDFS-12268 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12268-HDFS-7240.001.patch, > HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, > HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch, > HDFS-12268-HDFS-7240.006.patch > > > As the storage container async interface has been supported since HDFS-11580, we > need to keep an eye on the queue depth of pending container requests. It can > help us better find out whether there are performance problems. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12268) Ozone: Add metrics for pending storage container requests
[ https://issues.apache.org/jira/browse/HDFS-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158403#comment-16158403 ] Yiqun Lin commented on HDFS-12268: -- Attaching the new patch. The metric instance creation logic didn't seem right, so I made a minor change. > Ozone: Add metrics for pending storage container requests > - > > Key: HDFS-12268 > URL: https://issues.apache.org/jira/browse/HDFS-12268 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Yiqun Lin >Assignee: Yiqun Lin > Labels: ozoneMerge > Attachments: HDFS-12268-HDFS-7240.001.patch, > HDFS-12268-HDFS-7240.002.patch, HDFS-12268-HDFS-7240.003.patch, > HDFS-12268-HDFS-7240.004.patch, HDFS-12268-HDFS-7240.005.patch > > > As the storage container async interface has been supported since HDFS-11580, we > need to keep an eye on the queue depth of pending container requests. It can > help us better find out whether there are performance problems. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
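[Editorial note] For context on what a "queue depth of pending container requests" metric can look like, here is a minimal Hadoop metrics2-style sketch. The class name, metric name, and registration string are illustrative, not the ones used by the HDFS-12268 patch:
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Illustrative metrics2 source tracking the depth of an async request queue.
@Metrics(about = "Pending storage container requests", context = "dfs")
public class ContainerClientMetrics {

  @Metric("Number of container requests queued but not yet completed")
  private MutableGaugeLong numPendingOps;

  public static ContainerClientMetrics create() {
    // Registration lets the metrics system instantiate the annotated gauge.
    return DefaultMetricsSystem.instance()
        .register("ContainerClientMetrics", "Pending container requests",
            new ContainerClientMetrics());
  }

  public void incrPendingOps() { numPendingOps.incr(); } // when a request is queued
  public void decrPendingOps() { numPendingOps.decr(); } // when its future completes
}
{code}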
[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log
[ https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158371#comment-16158371 ] Hadoop QA commented on HDFS-12395: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-client-modules {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 3s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 13s{color} | {color:orange} root: The patch generated 4 new + 1055 unchanged - 0 fixed = 1059 total (was 1055) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-client-modules {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-client-modules in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}199m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer | | | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks | | | hadoop.hdfs.web.TestWebHdfsTokens | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | |
[jira] [Commented] (HDFS-11754) Make FsServerDefaults cache configurable.
[ https://issues.apache.org/jira/browse/HDFS-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158369#comment-16158369 ] Mikhail Erofeev commented on HDFS-11754: Hi [~surendrasingh], [~shahrs87], it's been a while since the last update; would you mind taking a look at my patch again, please? Thank you! > Make FsServerDefaults cache configurable. > - > > Key: HDFS-11754 > URL: https://issues.apache.org/jira/browse/HDFS-11754 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Rushabh S Shah >Assignee: Mikhail Erofeev >Priority: Minor > Labels: newbie > Fix For: 2.9.0 > > Attachments: HDFS-11754.001.patch, HDFS-11754.002.patch, > HDFS-11754.003.patch, HDFS-11754.004.patch > > > DFSClient caches the result of FsServerDefaults for 60 minutes. > But the 60 minutes time is not configurable. > Continuing the discussion from HDFS-11702, it would be nice if we could make > this configurable and keep the default at 60 minutes. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12398) Use JUnit Paramaterized test suite in TestWriteReadStripedFile
[ https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang updated HDFS-12398: Attachment: HDFS-12398.002.patch > Use JUnit Paramaterized test suite in TestWriteReadStripedFile > -- > > Key: HDFS-12398 > URL: https://issues.apache.org/jira/browse/HDFS-12398 > Project: Hadoop HDFS > Issue Type: Improvement > Components: test >Reporter: Huafeng Wang >Assignee: Huafeng Wang >Priority: Trivial > Attachments: HDFS-12398.001.patch, HDFS-12398.002.patch > > > TestWriteReadStripedFile basically covers the full cross product of file size > with and without data node failure. It's better to use a JUnit Parameterized test > suite. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
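[Editorial note] For readers unfamiliar with the suggestion, a JUnit 4 parameterized suite over the (file size, inject failure) product looks roughly like this; the sizes and the test body are placeholders, not the actual TestWriteReadStripedFile cases:
{code:java}
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Placeholder parameterized suite over the (fileSize, datanode failure) product.
@RunWith(Parameterized.class)
public class WriteReadStripedFileParamTest {

  @Parameters(name = "fileSize={0}, withDnFailure={1}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
        {0, false}, {0, true},
        {1024, false}, {1024, true},
        {1024 * 1024, false}, {1024 * 1024, true},
    });
  }

  private final int fileSize;
  private final boolean withDnFailure;

  public WriteReadStripedFileParamTest(int fileSize, boolean withDnFailure) {
    this.fileSize = fileSize;
    this.withDnFailure = withDnFailure;
  }

  @Test
  public void testWriteThenRead() {
    // A real case would write fileSize bytes to a striped file, optionally
    // kill a DataNode, then read the file back and verify the contents.
    assertTrue(fileSize >= 0);
  }
}
{code}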
[jira] [Commented] (HDFS-12292) Federation: Support viewfs:// schema path for DfsAdmin commands
[ https://issues.apache.org/jira/browse/HDFS-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158366#comment-16158366 ] Mikhail Erofeev commented on HDFS-12292: Hi [~msingh], [~vagarychen], [~xkrogen], it's been a while since the last update; would you mind taking a look at my patch again, please? Thank you! > Federation: Support viewfs:// schema path for DfsAdmin commands > --- > > Key: HDFS-12292 > URL: https://issues.apache.org/jira/browse/HDFS-12292 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation >Reporter: Mikhail Erofeev >Assignee: Mikhail Erofeev > Attachments: HDFS-12292-002.patch, HDFS-12292-003.patch, > HDFS-12292-004.patch, HDFS-12292.patch > > > Motivation: > As of now, clients need to specify a nameservice when a cluster is federated, > otherwise, an exception is thrown: > {code} > hdfs dfsadmin -setQuota 10 viewfs://vfs-root/user/uname > setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system > # with fs.defaultFS = viewfs://vfs-root/ > hdfs dfsadmin -setQuota 10 vfs-root/user/uname > setQuota: FileSystem viewfs://vfs-root/ is not an HDFS file system > # works fine thanks to https://issues.apache.org/jira/browse/HDFS-11432 > hdfs dfsadmin -setQuota 10 hdfs://users-fs/user/uname > {code} > This is inconvenient, makes it impossible to rely on fs.defaultFS, and forces the > creation of client-side mappings for management scripts > Implementation: > PathData that is passed to commands should be resolved to its actual > FileSystem > Result: > ViewFS will be resolved to the actual HDFS file system -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
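[Editorial note] The "resolved to its actual FileSystem" step under Implementation can be illustrated with the public FileSystem#resolvePath API, which for viewfs maps a mount-table path onto its backing cluster path; a minimal sketch (the viewfs URI is the one from the example above, and running it requires a configured viewfs mount table):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ResolveViewFsPath {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path viewPath = new Path("viewfs://vfs-root/user/uname");

    // Resolve the viewfs path to the path on the underlying (e.g. hdfs://) cluster...
    FileSystem viewFs = viewPath.getFileSystem(conf);
    Path resolved = viewFs.resolvePath(viewPath);

    // ...then obtain the concrete FileSystem so admin operations can target it.
    FileSystem targetFs = resolved.getFileSystem(conf);
    System.out.println(resolved + " is served by " + targetFs.getUri());
  }
}
{code}
This only illustrates the resolution idea; how the actual patch wires it into the dfsadmin commands is described by the patch itself, not here.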
[jira] [Resolved] (HDFS-12326) What is the correct way of retrying when failure occurs during writing
[ https://issues.apache.org/jira/browse/HDFS-12326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HDFS-12326. - Resolution: Not A Problem It seems like a question, not a bug. > What is the correct way of retrying when failure occurs during writing > -- > > Key: HDFS-12326 > URL: https://issues.apache.org/jira/browse/HDFS-12326 > Project: Hadoop HDFS > Issue Type: Test > Components: hdfs-client >Reporter: ZhangBiao > > I'm using hdfs client for golang https://github.com/colinmarc/hdfs to write > to the hdfs. And I'm using hadoop 2.7.3 > When the number of files concurrently being opened is larger, for example > 200. I'll always get the 'broken pipe' error. > So I want to retry to continue writing. What is the correct way of retrying? > Because https://github.com/colinmarc/hdfs hasn't been able to recover the > stream status when an error occurs duing writing, so I have to reopen and get > a new stream. So I tried the following steps: > 1 close the current stream > 2 Append the file to get a new stream > But when I close the stream, I got the error "updateBlockForPipeline call > failed with ERROR_APPLICATION (java.io.IOException" > and it seems the namenode complains: > {code:java} > 2017-08-20 03:22:55,598 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 2 on 9000, call > org.apache.hadoop.hdfs.protocol.ClientProtocol.updateBlockForPipeline from > 192.168.0.39:46827 Call#50183 Retry#-1 > java.io.IOException: > BP-1152809458-192.168.0.39-1502261411064:blk_1073825071_111401 does not exist > or is not under Constructionblk_1073825071_111401{UCState=COMMITTED, > truncateBlock=null, primaryNodeIndex=-1, > replicas=[ReplicaUC[[DISK]DS-d61914ba-df64-467b-bb75-272875e5e865:NORMAL:192.168.0.39:50010|RBW], > > ReplicaUC[[DISK]DS-1314debe-ab08-4001-ab9a-8e234f28f87c:NORMAL:192.168.0.38:50010|RBW]]} > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6241) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6309) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:806) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) > 2017-08-20 03:22:56,333 INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* > blk_1073825071_111401{UCState=COMMITTED, truncateBlock=null, > primaryNodeIndex=-1, > replicas=[ReplicaUC[[DISK]DS-d61914ba-df64-467b-bb75-272875e5e865:NORMAL:192.168.0.39:50010|RBW], > > ReplicaUC[[DISK]DS-1314debe-ab08-4001-ab9a-8e234f28f87c:NORMAL:192.168.0.38:50010|RBW]]} > is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 1) in > file > 
/user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log > {code} > When I appended to get a new stream, I got the error 'append call failed with > ERROR_APPLICATION > (org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException)', and the > corresponding error in the namenode is: > {code:java} > 2017-08-20 03:22:56,335 WARN org.apache.hadoop.hdfs.StateChange: DIR* > NameSystem.append: Failed to APPEND_FILE > /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log > for go-hdfs-OAfvZiSUM2Eu894p on 192.168.0.39 because > go-hdfs-OAfvZiSUM2Eu894p is already the current lease holder. > 2017-08-20 03:22:56,335 INFO org.apache.hadoop.ipc.Server: IPC Server handler > 0 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.append from > 192.168.0.39:46827 Call#50186 Retry#-1: > org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to > APPEND_FILE > /user/am/scan_task/2017-08-20/192.168.0.38_audience_f/user-bak010-20170820030804.log > for go-hdfs-OAfvZiSUM2Eu894p on 192.168.0.39 because > go-hdfs-OAfvZiSUM2Eu894p is already the current lease holder. > {code} > Could you please suggest the correct way of retrying?
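For illustration only, here is a minimal Java sketch of one possible retry path using the standard Hadoop Java client (DistributedFileSystem) rather than the Go client: recover the previous writer's lease before re-opening the file for append. The recoverLease call, attempt count, and backoff values are assumptions made for this sketch, not a recommendation recorded in this issue.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AppendRetrySketch {

  /**
   * Try to re-open a file for append after a write failure.
   * AlreadyBeingCreatedException usually means the previous lease is
   * still held, so ask the NameNode to recover it and retry after a pause.
   */
  static FSDataOutputStream reopenForAppend(DistributedFileSystem dfs,
      Path file, int maxAttempts, long backoffMs)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return dfs.append(file);
      } catch (IOException e) {
        last = e;
        dfs.recoverLease(file);            // ask NameNode to release the stale lease
        Thread.sleep(backoffMs * attempt); // simple linear backoff between attempts
      }
    }
    throw new IOException("Could not re-open " + file + " for append", last);
  }
}
{code}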
[jira] [Commented] (HDFS-12398) Use JUnit Paramaterized test suite in TestWriteReadStripedFile
[ https://issues.apache.org/jira/browse/HDFS-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158265#comment-16158265 ] Hadoop QA commented on HDFS-12398: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 47s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 405 unchanged - 3 fixed = 408 total (was 408) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 23 new + 0 unchanged - 7 fixed = 23 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}121m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.TestAclsEndToEnd | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12398 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885994/HDFS-12398.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f55362a9017a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b3a4d7d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | findbugs |
[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode
[ https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158252#comment-16158252 ] SammiChen commented on HDFS-7859: - Thanks [~eddyxu] and [~drankye] for reviewing the patch and providing very detailed suggestions! {quote} Could you consider to use: message ErasureCodingPolicyManagerSection { repeated ErasureCodingPolicyProto policies = 1; } {quote} Sure. {quote} // dd new erasure coding policy ECSchema newSchema = new ECSchema("rs", 5, 3); {quote} The comment actually is "add new erasure coding policy"; it's a typo. bq. Checking file / directory that is using this particular policy is a potentially O(n) operation, where n = # of inodes. I feel that it is OK to leave it in fsimage as garbage for now. In the future, we can let the fsimage loading process handle this garbage, as it is O(n). HDFS-12405 is created to track permanently deleting the policy from the system at NameNode restart time. I will start working on it after beta1. {quote} Regarding the policy ID design, are there general rules for customized EC policy design? My question is, what ID value range can be chosen for a customized policy? Currently the system EC policies use values up to 5. If a customer / vendor provides a new EC policy with ID=6, when the next version of Hadoop adds a new EC policy, how do we handle the conflict (i.e., ID=6 has been used) in fsimage and INode? Or a customer using policies from two vendors who accidentally use the same IDs. SammiChen, could you add some test cases like this as future work? {quote} Here are the general rules for customized EC policies: 1. When a user adds a customized EC policy, the user specifies the codec name, number of data units, number of parity units, and cell size. The policy ID and policy name are automatically generated by the system. Customized EC policy IDs start from 64 and are atomically incremented, so generally two policies in the same system will not have the same policy ID. 2. System built-in policy IDs range from 1 to 63, so system policies and customized policies have different ID ranges. {quote} Question to Kai Zheng: I thought "dfs.namenode.ec.policies.enabled" should have been removed when adding the API to enable/disable policy. Could this happen before BETA 1? It seems to be a breaking change. If not, do we have a plan to preserve both this key and the capability of adding/removing policies? {quote} I'd like to have input from [~andrew.wang]. I'm fine with the thought. bq. Again, could we make the change: ErasureCodingPolicyManagerSection => ErasureCodingSection. Also check related names like loadErasureCodingPolicyManagerSection, saveErasureCodingPolicyManagerSection. The existing "CacheManagerSection" saves cache directives for CacheManager, and "SecretManagerSection" saves secrets for SecretManager, so it's better to follow that style and use "ErasureCodingPolicyManagerSection" to save the EC policies for ErasureCodingPolicyManager. All other comments will be taken care of in the next patch. 
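To make the ID ranges above concrete, here is a small illustrative Java sketch of the allocation rule (built-in policies in 1-63, user-added policies assigned atomically from 64 upward). The class and method names are invented for the example; this is not the actual ErasureCodingPolicyManager code.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

public class EcPolicyIdAllocatorSketch {

  // Built-in system policies use IDs 1..63.
  public static final int MAX_SYSTEM_POLICY_ID = 63;
  // User-defined policies are assigned IDs starting from 64.
  public static final int FIRST_USER_POLICY_ID = 64;

  private final AtomicInteger nextUserId =
      new AtomicInteger(FIRST_USER_POLICY_ID);

  /** Allocate the next ID for a user-added policy (thread-safe). */
  public int allocateUserPolicyId() {
    return nextUserId.getAndIncrement();
  }

  /** True if the ID belongs to a built-in system policy. */
  public static boolean isSystemPolicy(int id) {
    return id >= 1 && id <= MAX_SYSTEM_POLICY_ID;
  }
}
{code}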
> Erasure Coding: Persist erasure coding policies in NameNode > --- > > Key: HDFS-7859 > URL: https://issues.apache.org/jira/browse/HDFS-7859 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, > HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, > HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, > HDFS-7859.010.patch, HDFS-7859.011.patch, HDFS-7859.012.patch, > HDFS-7859.013.patch, HDFS-7859.014.patch, HDFS-7859.015.patch, > HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, > HDFS-7859-HDFS-7285.003.patch > > > In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we > persist EC schemas in NameNode centrally and reliably, so that EC zones can > reference them by name efficiently. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12405) Clean up removed erasure coding policies from namenode
[ https://issues.apache.org/jira/browse/HDFS-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huafeng Wang reassigned HDFS-12405: --- Assignee: Huafeng Wang > Clean up removed erasure coding policies from namenode > -- > > Key: HDFS-12405 > URL: https://issues.apache.org/jira/browse/HDFS-12405 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: Huafeng Wang > Labels: hdfs-ec-3.0-nice-to-have > > Currently, when an erasure coding policy is removed, it is transitioned to the > "removed" state. Users cannot apply a policy in the "removed" state to a > file/directory anymore. The policy cannot be safely removed from the system > unless we know there are no existing files or directories that use this > "removed" policy. Finding out at runtime whether there are files or directories > using the policy is time consuming and might impact NameNode performance. > So a better choice is to do the work when the NameNode restarts and loads inodes. > Collecting the information at that time will not introduce much extra overhead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
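As a rough sketch of the approach described above, the loading pass could record which policy IDs are still referenced by inodes and then drop any "removed" policy with no remaining references. All names below are hypothetical and do not reflect the actual NameNode implementation.

{code:java}
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

public class RemovedPolicyCleanupSketch {

  public enum PolicyState { ENABLED, DISABLED, REMOVED }

  /** Policy IDs seen on at least one inode while the fsimage is loaded. */
  private final Set<Byte> referencedIds = new HashSet<>();

  /** Called once per inode during loading; 0 means no EC policy is set. */
  public void recordInodePolicy(byte ecPolicyId) {
    if (ecPolicyId != 0) {
      referencedIds.add(ecPolicyId);
    }
  }

  /**
   * After loading finishes, purge policies in the "removed" state that
   * are no longer referenced by any file or directory.
   */
  public void purgeUnreferenced(Map<Byte, PolicyState> policiesById) {
    Iterator<Map.Entry<Byte, PolicyState>> it =
        policiesById.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<Byte, PolicyState> e = it.next();
      if (e.getValue() == PolicyState.REMOVED
          && !referencedIds.contains(e.getKey())) {
        it.remove();
      }
    }
  }
}
{code}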
[jira] [Updated] (HDFS-12405) Clean up removed erasure coding policies from namenode
[ https://issues.apache.org/jira/browse/HDFS-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HDFS-12405: - Description: Currently, when an erasure coding policy is removed, it is transitioned to the "removed" state. Users cannot apply a policy in the "removed" state to a file/directory anymore. The policy cannot be safely removed from the system unless we know there are no existing files or directories that use this "removed" policy. Finding out at runtime whether there are files or directories using the policy is time consuming and might impact NameNode performance. So a better choice is to do the work when the NameNode restarts and loads inodes. Collecting the information at that time will not introduce much extra overhead. (was: Currently, when an erasure coding policy is removed, it's been transited to "removed" state. User cannot apply policy with "removed" state to file/directory anymore. The policy cannot be safely removed from the system unless we know there is no existing files or directories use this "remove" policy. To find out whether there is files or directories which are using the policy is time consuming in runtime and might impact the Namenode performance. So a better choice is do the work when NameNode restarts and loads Inode one by one. Collect the information at that time will not introduce much extra workloads. ) Summary: Clean up removed erasure coding policies from namenode (was: Complete remove "removed" state erasure coding policy during Namenode restart) > Clean up removed erasure coding policies from namenode > -- > > Key: HDFS-12405 > URL: https://issues.apache.org/jira/browse/HDFS-12405 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen > Labels: hdfs-ec-3.0-nice-to-have > > Currently, when an erasure coding policy is removed, it is transitioned to the > "removed" state. Users cannot apply a policy in the "removed" state to a > file/directory anymore. The policy cannot be safely removed from the system > unless we know there are no existing files or directories that use this > "removed" policy. Finding out at runtime whether there are files or directories > using the policy is time consuming and might impact NameNode performance. > So a better choice is to do the work when the NameNode restarts and loads inodes. > Collecting the information at that time will not introduce much extra overhead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12405) Complete remove "removed" state erasure coding policy during Namenode restart
SammiChen created HDFS-12405: Summary: Complete remove "removed" state erasure coding policy during Namenode restart Key: HDFS-12405 URL: https://issues.apache.org/jira/browse/HDFS-12405 Project: Hadoop HDFS Issue Type: Improvement Components: erasure-coding Reporter: SammiChen Currently, when an erasure coding policy is removed, it's been transited to "removed" state. User cannot apply policy with "removed" state to file/directory anymore. The policy cannot be safely removed from the system unless we know there is no existing files or directories use this "remove" policy. To find out whether there is files or directories which are using the policy is time consuming in runtime and might impact the Namenode performance. So a better choice is do the work when NameNode restarts and loads Inode one by one. Collect the information at that time will not introduce much extra workloads. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12395) Support erasure coding policy operations in namenode edit log
[ https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158171#comment-16158171 ] SammiChen commented on HDFS-12395: -- 002.patch uploaded. Here is the change list: 1. Fixed style issues. 2. Fixed and improved the failed unit tests. 3. Addressed Rakesh's comments. Hi [~rakeshr], I'm not very sure about the "Update javadocs" comment; is that because the new parameter "logRetryCache" is added to the involved APIs? I uploaded the patch and the editsStored file separately. If you want to apply the patch locally, first apply the patch, then replace the file under "hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored" with the one in this JIRA. > Support erasure coding policy operations in namenode edit log > - > > Key: HDFS-12395 > URL: https://issues.apache.org/jira/browse/HDFS-12395 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch > > > Support add, remove, disable, enable erasure coding policy operation in edit > log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12395) Support erasure coding policy operations in namenode edit log
[ https://issues.apache.org/jira/browse/HDFS-12395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-12395: - Attachment: HDFS-12395.002.patch > Support erasure coding policy operations in namenode edit log > - > > Key: HDFS-12395 > URL: https://issues.apache.org/jira/browse/HDFS-12395 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Attachments: editsStored, HDFS-12395.001.patch, HDFS-12395.002.patch > > > Support add, remove, disable, enable erasure coding policy operation in edit > log. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org