[jira] [Commented] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204450#comment-16204450 ] Hadoop QA commented on HDFS-12585:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| HDFS-7240 Compile Tests ||
| 0 | mvndep | 1m 49s | Maven dependency ordering for branch |
| +1 | mvninstall | 19m 22s | HDFS-7240 passed |
| +1 | compile | 17m 41s | HDFS-7240 passed |
| +1 | checkstyle | 2m 20s | HDFS-7240 passed |
| +1 | mvnsite | 2m 37s | HDFS-7240 passed |
| +1 | shadedclient | 15m 18s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 40s | HDFS-7240 passed |
| +1 | javadoc | 1m 57s | HDFS-7240 passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 36s | the patch passed |
| +1 | compile | 11m 39s | the patch passed |
| +1 | javac | 11m 39s | root generated 0 new + 1274 unchanged - 1 fixed = 1274 total (was 1275) |
| -0 | checkstyle | 2m 5s | root: The patch generated 1 new + 1 unchanged - 3 fixed = 2 total (was 4) |
| +1 | mvnsite | 2m 7s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 3m 50s | the patch passed |
| +1 | javadoc | 1m 59s | the patch passed |
|| Other Tests ||
| +1 | unit | 7m 56s | hadoop-common in the patch passed. |
| -1 | unit | 93m 23s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | 197m 12s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.cblock.TestCBlockCLI |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12585 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892175/HDFS-12585-HDFS-7240.04.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a608e8eabd6b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven
[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204448#comment-16204448 ] Hadoop QA commented on HDFS-12578:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 11m 8s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| branch-2.7 Compile Tests ||
| +1 | mvninstall | 7m 38s | branch-2.7 passed |
| +1 | compile | 0m 51s | branch-2.7 passed with JDK v1.8.0_144 |
| +1 | compile | 0m 58s | branch-2.7 passed with JDK v1.7.0_151 |
| +1 | checkstyle | 0m 23s | branch-2.7 passed |
| +1 | mvnsite | 0m 54s | branch-2.7 passed |
| +1 | findbugs | 2m 37s | branch-2.7 passed |
| +1 | javadoc | 0m 54s | branch-2.7 passed with JDK v1.8.0_144 |
| +1 | javadoc | 1m 38s | branch-2.7 passed with JDK v1.7.0_151 |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 50s | the patch passed |
| +1 | compile | 0m 55s | the patch passed with JDK v1.8.0_144 |
| +1 | javac | 0m 55s | the patch passed |
| +1 | compile | 0m 57s | the patch passed with JDK v1.7.0_151 |
| +1 | javac | 0m 57s | the patch passed |
| +1 | checkstyle | 0m 19s | the patch passed |
| +1 | mvnsite | 0m 50s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | findbugs | 2m 52s | the patch passed |
| +1 | javadoc | 0m 53s | the patch passed with JDK v1.8.0_144 |
| +1 | javadoc | 1m 36s | the patch passed with JDK v1.7.0_151 |
|| Other Tests ||
| -1 | unit | 555m 51s | hadoop-hdfs in the patch failed with JDK v1.7.0_151. |
| -1 | asflicense | 10m 30s | The patch generated 172 ASF License warnings. |
| | | 650m 19s | |

|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
| | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| JDK v1.7.0_151 Failed junit tests | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
| | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
| | hadoop.hdfs.server.datanode.TestDiskError |
| | hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot |
| | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
| | hadoop.hdfs.server.namenode.snapshot.TestSetQuotaWithSnapshot |
| | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
[jira] [Commented] (HDFS-12556) [SPS]: Block movement analysis should be done in read lock.
[ https://issues.apache.org/jira/browse/HDFS-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204435#comment-16204435 ] Rakesh R commented on HDFS-12556:

Thanks [~surendrasingh] for the patch. It looks like the test case failures are unrelated to the patch. {{TestPersistentStoragePolicySatisfier#testWithRestarts}} is a random failure and can be analysed separately. +1 LGTM

> [SPS]: Block movement analysis should be done in read lock.
>
> Key: HDFS-12556
> URL: https://issues.apache.org/jira/browse/HDFS-12556
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, namenode
> Reporter: Surendra Singh Lilhore
> Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12556-HDFS-10285-01.patch, HDFS-12556-HDFS-10285-02.patch, HDFS-12556-HDFS-10285-03.patch
>
> {noformat}
> 2017-09-27 15:58:32,852 [StoragePolicySatisfier] ERROR namenode.StoragePolicySatisfier (StoragePolicySatisfier.java:handleException(308)) - StoragePolicySatisfier thread received runtime exception. Stopping Storage policy satisfier work
> java.lang.ArrayIndexOutOfBoundsException: 1
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getStorages(BlockManager.java:4130)
>         at org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN(StoragePolicySatisfier.java:362)
>         at org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.run(StoragePolicySatisfier.java:236)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
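The ArrayIndexOutOfBoundsException above comes from reading block storage state while it can change underneath the analysis thread; the patch's fix is to do the analysis under the namesystem read lock. The sketch below illustrates that locking discipline only, using a plain `ReentrantReadWriteLock`; the class, field names, and toy "analysis" are illustrative stand-ins, not the actual StoragePolicySatisfier code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: hold the read lock for the whole analysis so a
// concurrent writer cannot shrink/replace the storages array mid-scan.
public class ReadLockSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final int[] storages = {1, 2, 3}; // stand-in for per-block storage state

    public int analyseUnderReadLock() {
        lock.readLock().lock();       // take the read lock before analysing
        try {
            int sum = 0;
            for (int s : storages) {  // safe: writers are excluded while we iterate
                sum += s;
            }
            return sum;
        } finally {
            lock.readLock().unlock(); // always release in finally
        }
    }

    public static void main(String[] args) {
        System.out.println(new ReadLockSketch().analyseUnderReadLock());
    }
}
```

Multiple reader threads can hold the read lock concurrently, so this adds little contention while still excluding writers for the duration of the scan.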
[jira] [Commented] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured
[ https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204410#comment-16204410 ] Hadoop QA commented on HDFS-12614:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 19m 21s | Docker mode activated. |
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 13m 4s | trunk passed |
| +1 | compile | 0m 48s | trunk passed |
| +1 | checkstyle | 0m 34s | trunk passed |
| +1 | mvnsite | 1m 0s | trunk passed |
| +1 | shadedclient | 9m 42s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 4s | trunk passed |
| +1 | javadoc | 0m 50s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 2s | the patch passed |
| +1 | compile | 0m 51s | the patch passed |
| +1 | javac | 0m 51s | the patch passed |
| +1 | checkstyle | 0m 35s | the patch passed |
| +1 | mvnsite | 0m 55s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 9m 18s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 12s | the patch passed |
| +1 | javadoc | 0m 50s | the patch passed |
|| Other Tests ||
| -1 | unit | 104m 26s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 167m 31s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
| | hadoop.hdfs.web.TestWebHdfsTimeouts |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0de40f0 |
| JIRA Issue | HDFS-12614 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892158/HDFS-12614.04.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 40c6186e2ac2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3fb4718 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21695/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21695/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21695/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |
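The issue summary above describes an NPE in FSPermissionChecker#getINodeAttrs() when an INodeAttributesProvider is configured. A common defensive shape for this class of bug is to fall back to the inode's own attributes whenever the provider returns null for a path component. The snippet below is only an illustration of that pattern under assumed, simplified types; it is not the HDFS-12614 patch, and every name in it is a stand-in.

```java
// Illustrative only: a provider that may return null for some paths,
// and a caller that guards against it instead of dereferencing null.
public class ProviderFallbackSketch {
    // Stand-in for the configured attributes provider (hypothetical shape).
    interface AttributesProvider {
        String getAttributes(String path, String defaults);
    }

    static String resolve(AttributesProvider provider, String path, String inodeAttrs) {
        if (provider == null) {
            return inodeAttrs; // no provider configured: use the inode's own attributes
        }
        String fromProvider = provider.getAttributes(path, inodeAttrs);
        // Guard: a provider may return null; fall back rather than
        // let the permission check NPE later.
        return fromProvider != null ? fromProvider : inodeAttrs;
    }

    public static void main(String[] args) {
        AttributesProvider nullReturning = (p, d) -> null;
        System.out.println(resolve(nullReturning, "/tmp/x", "default"));
    }
}
```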
[jira] [Updated] (HDFS-12662) lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
[ https://issues.apache.org/jira/browse/HDFS-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gruust updated HDFS-12662:

Description:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.

I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately. Or maybe simply add a configuration option that allows to fix corrupt blocks in-place because harddisks usually internally replace bad sectors on their own and a simple rewrite can often fix those issues.

was:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block. I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately. Or maybe simply add a configuration option that allows to fix corrupt blocks in-place because harddisks usually internally replace bad sectors on their own and a simple rewrite can fix those issues.

> lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
>
> Key: HDFS-12662
> URL: https://issues.apache.org/jira/browse/HDFS-12662
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: block placement
> Affects Versions: 2.8.1
> Reporter: Gruust
> Priority: Minor
>
> Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.
> I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately. Or maybe simply add a configuration option that allows to fix corrupt blocks in-place because harddisks usually internally replace bad sectors on their own and a simple rewrite can often fix those issues.
[jira] [Updated] (HDFS-12662) lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
[ https://issues.apache.org/jira/browse/HDFS-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gruust updated HDFS-12662:

Description:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.

I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately. Or maybe simply add a configuration option that allows to fix corrupt blocks in-place because harddisks usually internally replace bad sectors on their own and a simple rewrite can fix those issues.

was:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block. I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.

> lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
>
> Key: HDFS-12662
> URL: https://issues.apache.org/jira/browse/HDFS-12662
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: block placement
> Affects Versions: 2.8.1
> Reporter: Gruust
> Priority: Minor
>
> Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.
> I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately. Or maybe simply add a configuration option that allows to fix corrupt blocks in-place because harddisks usually internally replace bad sectors on their own and a simple rewrite can fix those issues.
[jira] [Updated] (HDFS-12662) lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
[ https://issues.apache.org/jira/browse/HDFS-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gruust updated HDFS-12662:

Description:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.

I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.

was:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside to receive a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block. I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.

> lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
>
> Key: HDFS-12662
> URL: https://issues.apache.org/jira/browse/HDFS-12662
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: block placement
> Affects Versions: 2.8.1
> Reporter: Gruust
> Priority: Minor
>
> Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside from receiving a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.
> I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.
[jira] [Updated] (HDFS-12662) lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
[ https://issues.apache.org/jira/browse/HDFS-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gruust updated HDFS-12662:

Description:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside to receive a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.

I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.

was:
Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside to receive a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node to find a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block. I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.

> lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
>
> Key: HDFS-12662
> URL: https://issues.apache.org/jira/browse/HDFS-12662
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: block placement
> Affects Versions: 2.8.1
> Reporter: Gruust
> Priority: Minor
>
> Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside to receive a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node from finding a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.
> I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.
[jira] [Created] (HDFS-12662) lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
Gruust created HDFS-12662:

Summary: lost+found strategy for bad/corrupt blocks to improve data replication 'SLA' for small clusters
Key: HDFS-12662
URL: https://issues.apache.org/jira/browse/HDFS-12662
Project: Hadoop HDFS
Issue Type: Improvement
Components: block placement
Affects Versions: 2.8.1
Reporter: Gruust
Priority: Minor

Corrupt blocks currently need to be removed manually and effectively block the data node on which they reside to receive a good copy of the same block. In small clusters (ie. node count == replication factor), this prevents the name node to find a free data node to keep the desired replication level up until the user manually runs some fsck command to remove the corrupt block.

I suggest moving the corrupt block out of the way, like it's usually done by ext2-based filesystems, ie. move the block to /lost+found directory, such that the name node can replace it immediately.
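The proposal above amounts to quarantining a corrupt replica rather than leaving it in place until a manual fsck run. As a rough sketch of that behaviour, the snippet below moves a block file into a per-volume lost+found directory so the datanode could accept a fresh copy of the same block. This is not existing HDFS code; the method, paths, and block file name are all hypothetical stand-ins for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the proposed "lost+found" strategy for corrupt
// block replicas: move the bad file aside instead of requiring manual removal.
public class LostFoundSketch {
    static Path quarantine(Path blockFile, Path lostFound) throws IOException {
        Files.createDirectories(lostFound);
        Path target = lostFound.resolve(blockFile.getFileName());
        // Atomic move (same filesystem) so the replica never appears half-removed.
        return Files.move(blockFile, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path volume = Files.createTempDirectory("dn-volume");   // stand-in datanode volume
        Path block = Files.createFile(volume.resolve("blk_12345")); // stand-in block file
        Path moved = quarantine(block, volume.resolve("lost+found"));
        // The corrupt replica is gone from its old location but preserved aside.
        System.out.println(Files.exists(moved) && !Files.exists(block));
    }
}
```

Keeping the quarantined file (rather than deleting it) matches the ext2 lost+found analogy in the proposal: the data survives for later inspection or an in-place rewrite attempt.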
[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204358#comment-16204358 ] Ajay Kumar commented on HDFS-12578:

[~xiaochen], thanks for the review and commit.

> TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
>
> Key: HDFS-12578
> URL: https://issues.apache.org/jira/browse/HDFS-12578
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Reporter: Xiao Chen
> Assignee: Ajay Kumar
> Priority: Blocker
> Fix For: 2.7.5
>
> Attachments: HDFS-12578-branch-2.7.001.patch, HDFS-12578-branch-2.7.002.patch
>
> It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently failing in branch-2.7. We should investigate and fix it.
[jira] [Commented] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
[ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204340#comment-16204340 ] Hadoop QA commented on HDFS-12612: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 82 unchanged - 0 fixed = 83 total (was 82) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client | | | Class org.apache.hadoop.hdfs.DataStreamer$LastException is not derived from an Exception, even though it is named as such. At DataStreamer.java:[lines 288-314] | | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12612 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892139/HDFS-12612.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall
[jira] [Commented] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204327#comment-16204327 ] Ajay Kumar commented on HDFS-12585: --- Hi [~vagarychen], Removed {{loadDescriptionFromXml}} in patch v4. That will resolve the findbugs warning as well. > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch, HDFS-12585-HDFS-7240.03.patch, > HDFS-12585-HDFS-7240.04.patch > > > Add description for each config in Ozone config UI
[jira] [Updated] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12585: -- Attachment: HDFS-12585-HDFS-7240.04.patch > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch, HDFS-12585-HDFS-7240.03.patch, > HDFS-12585-HDFS-7240.04.patch > > > Add description for each config in Ozone config UI
[jira] [Updated] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12585: -- Attachment: (was: HDFS-12585-HDFS-7240.04.patch) > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch, HDFS-12585-HDFS-7240.03.patch > > > Add description for each config in Ozone config UI
[jira] [Updated] (HDFS-12585) Add description for config in Ozone config UI
[ https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12585: -- Attachment: HDFS-12585-HDFS-7240.04.patch > Add description for config in Ozone config UI > - > > Key: HDFS-12585 > URL: https://issues.apache.org/jira/browse/HDFS-12585 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Fix For: HDFS-7240 > > Attachments: HDFS-12585-HDFS-7240.01.patch, > HDFS-12585-HDFS-7240.02.patch, HDFS-12585-HDFS-7240.03.patch, > HDFS-12585-HDFS-7240.04.patch > > > Add description for each config in Ozone config UI
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204320#comment-16204320 ] Hudson commented on HDFS-12553: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13082 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13082/]) HDFS-12553. Add nameServiceId to QJournalProtocol. Contributed by Bharat (arp: rev 8dd1eeb94fef59feaf19182dd8f1fcf1389c7f34) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQJMWithFaults.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/QJournalProtocol.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournal.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/QJournalProtocol.proto * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/AsyncLogger.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/QJournalProtocolServerSideTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java * (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeSync.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocolPB/QJournalProtocolTranslatorPB.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestEpochsAreUnique.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeRpcServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/protocol/RequestInfo.java > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Improvement > Components: qjm >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 3.0.0 > > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice.
[jira] [Commented] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204302#comment-16204302 ] Hadoop QA commented on HDFS-11902: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} HDFS-9806 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 51s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 31s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} HDFS-9806 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} HDFS-9806 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 10m 59s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 59s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 7s{color} | {color:orange} root: The patch generated 12 new + 445 unchanged - 16 fixed = 457 total (was 461) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s{color} | {color:red} hadoop-fs2img in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s{color} | {color:red} hadoop-fs2img in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-11902 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892125/HDFS-11902-HDFS-9806.010.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux f137f6ce6979 3.13.0-117-generic #164-Ubuntu SMP Fri Apr
[jira] [Commented] (HDFS-12603) Enable async edit logging by default
[ https://issues.apache.org/jira/browse/HDFS-12603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204293#comment-16204293 ] Hadoop QA commented on HDFS-12603: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 545 unchanged - 2 fixed = 545 total (was 547) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}156m 30s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}235m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.balancer.TestBalancerRPCDelay | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12603 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892118/HDFS-12603.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs
[jira] [Commented] (HDFS-12653) Implement toArray() and subArray() for ReadOnlyList
[ https://issues.apache.org/jira/browse/HDFS-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204273#comment-16204273 ] Manoj Govindassamy commented on HDFS-12653: --- [~daryn], Currently ReadOnlyList is predominantly used by the Directory and Snapshot subsystems for storing their children inodes / snapshots in a _sorted_ order. I see it as a SortedList, and many times the users of this list make use of the sorted nature of the elements for searching - {{ReadOnlyList#Util#binarySearch(ReadOnlyList, K key)}}. On top of these sorting benefits, {{ReadOnlyList#Util#asList()}} gives a {{List}} where {{toArray()}} differs significantly from the Collections toArray -- the returned array is more of a _view_ of the backing read only list, without copying any elements. I believe we can make use of ReadOnlyList for enhancing the performance of {{INodeAttributesProvider#getAttributes()}} by converting byte[][] bPathComponents to ReadOnlyList sPathComponents only one time and getting the _view_ of the string path components using toArray() or subArray(start, end). Collections doesn't have a subArray() concept; there's only subList(). > Implement toArray() and subArray() for ReadOnlyList > --- > > Key: HDFS-12653 > URL: https://issues.apache.org/jira/browse/HDFS-12653 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > > {{ReadOnlyList}} today gives an unmodifiable view of the backing List. This > list supports the following Util methods for easy construction of read only views > of any given list. > {noformat} > public static ReadOnlyList asReadOnlyList(final List list) > public static List asList(final ReadOnlyList list) > {noformat} > {{asList}} above additionally overrides {{Object[] toArray()}} of the > {{java.util.List}} interface. Unlike the {{java.util.List}}, the above one > returns an array of Objects referring to the backing list and avoids any > copying of objects. 
Given that we have many usages of read only lists, > 1. Let's have a light-weight / shared-view {{toArray()}} implementation for > {{ReadOnlyList}} as well. > 2. Additionally, similar to {{java.util.List#subList(fromIndex, toIndex)}}, > let's have {{ReadOnlyList#subArray(fromIndex, toIndex)}}
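The proposal above can be illustrated with a minimal sketch. Note this is a hypothetical stand-in, not the actual Hadoop `ReadOnlyList` implementation: the method names `asReadOnlyList`, `asList`, and the proposed `subArray(fromIndex, toIndex)` come from the issue text, but the bodies here are assumptions. In particular, a Java `Object[]` cannot be a true live view of an arbitrary backing list, so this sketch's `subArray()` copies the selected range; the no-copy behavior described in the comment would need the backing store itself to be an array.

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

/** Hypothetical sketch of a read-only list view (not the Hadoop class). */
final class ReadOnlyList<E> {
    private final List<E> backing;

    private ReadOnlyList(List<E> backing) { this.backing = backing; }

    /** Wrap a list in a read-only view; no elements are copied. */
    static <E> ReadOnlyList<E> asReadOnlyList(final List<E> list) {
        return new ReadOnlyList<>(list);
    }

    int size() { return backing.size(); }
    E get(int i) { return backing.get(i); }

    /** Expose the read-only list as a java.util.List: reads delegate, writes throw. */
    static <E> List<E> asList(final ReadOnlyList<E> list) {
        return new AbstractList<E>() {
            @Override public E get(int i) { return list.get(i); }
            @Override public int size() { return list.size(); }
        };
    }

    /** Proposed subArray(from, to): here a copying stand-in over a half-open range. */
    Object[] subArray(int from, int to) {
        Object[] out = new Object[to - from];
        for (int i = from; i < to; i++) {
            out[i - from] = backing.get(i);
        }
        return out;
    }
}

public class ReadOnlyListSketch {
    public static void main(String[] args) {
        ReadOnlyList<String> ro =
            ReadOnlyList.asReadOnlyList(Arrays.asList("a", "b", "c", "d"));
        List<String> view = ReadOnlyList.asList(ro);
        System.out.println(view.get(1));                        // b
        System.out.println(Arrays.toString(ro.subArray(1, 3))); // [b, c]
    }
}
```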
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12553: - Issue Type: Improvement (was: Bug) > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Improvement > Components: qjm >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 3.0.0 > > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice.
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12553: - Component/s: qjm > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Improvement > Components: qjm >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 3.0.0 > > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice.
[jira] [Commented] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204266#comment-16204266 ] Hadoop QA commented on HDFS-12613: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 23s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 18m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 30s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}124m 17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}234m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestRaceWhenRelogin | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.TestFsck | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12613 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892106/HDFS-12613.04.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient
[jira] [Updated] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445
[ https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-12641: --- Status: Patch Available (was: Open) > Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445 > - > > Key: HDFS-12641 > URL: https://issues.apache.org/jira/browse/HDFS-12641 > Project: Hadoop HDFS > Issue Type: Task >Affects Versions: 2.7.4 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-12641.branch-2.7.001.patch > > > Our internal testing caught a regression in HDFS-11445 when we cherry-picked > the commit into CDH. Basically, it produces bogus missing-file warnings. > Further analysis revealed that the regression is actually fixed by HDFS-11755. > Because of the order commits are merged in branch-2.8 ~ trunk (HDFS-11755 was > committed before HDFS-11445), the regression never actually surfaced for > Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has HDFS-11445 but no > HDFS-11755, I suspect the regression is more visible for Hadoop 2.7.4. > I am filing this jira to raise more awareness rather than simply backporting > HDFS-11755 into branch-2.7.
[jira] [Commented] (HDFS-12661) Ozone: Support optional documentation link in KSM/SCM webui
[ https://issues.apache.org/jira/browse/HDFS-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204248#comment-16204248 ] Hadoop QA commented on HDFS-12661: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 11s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 52m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 74m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12661 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892129/HDFS-12661-HDFS-7240.001.patch | | Optional Tests | asflicense shadedclient | | uname | Linux 47f388ded422 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 9ba3357 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21693/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Support optional documentation link in KSM/SCM webui > --- > > Key: HDFS-12661 > URL: https://issues.apache.org/jira/browse/HDFS-12661 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12661-HDFS-7240.001.patch > > > In some cases it could be useful to include additional documentation in the > SCM/KSM web ui. > This patch includes an optional hook. During startup, the scm/ksm web ui does an > HTTP HEAD request and, if docs/index.html exists, an additional Documentation > link will be displayed in the menu header. > Long term, we can generate the documentation automatically from the source. > Testing: > Do a full build, start scm: no link in the ui. 
> Add some optional documentation (choose one): > * hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/scm/docs/index.html > (before build) > * hadoop-hdfs-project/hadoop-hdfs/target/webapps/scm/docs/index.html (after > clean, before build) > * > hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/share/hadoop/hdfs/webapps/scm/docs/index.html > (after build) > And start scm again. There should be a documentation link in the menu which > opens the optional documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
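The HEAD-based probe described above can be sketched in a few lines. The class and method names below (DocsProbe, hasDocs) are hypothetical illustrations, not code from the patch; the actual check lives in the KSM/SCM webui, and this is just a minimal sketch of the detection logic:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Hypothetical sketch of the optional-docs probe described above;
 * class and method names are illustrative, not taken from the patch.
 */
public class DocsProbe {

    /**
     * Issues an HTTP HEAD request for docs/index.html under the given base
     * URL and reports whether it exists (any 2xx response counts).
     */
    public static boolean hasDocs(String baseUrl) {
        try {
            URL url = new URL(baseUrl + "/docs/index.html");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("HEAD"); // headers only, no body transfer
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            int code = conn.getResponseCode();
            conn.disconnect();
            return code >= 200 && code < 300;
        } catch (IOException e) {
            // Missing documentation is not an error: simply hide the link.
            return false;
        }
    }
}
```

A non-2xx status or any connection error simply hides the Documentation link, matching the "optional" behavior described in the issue.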
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-12553: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) I've committed this. Thanks for the contribution [~bharatviswa]. > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Fix For: 3.0.0 > > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204213#comment-16204213 ] Arpit Agarwal edited comment on HDFS-12553 at 10/13/17 9:25 PM: +1 on the v11 patch. Looks like reviewboard comments are not reflected on the Jira but my comments on earlier patch revisions are here: https://reviews.apache.org/r/62779/ I will commit this shortly. was (Author: arpitagarwal): +1 on the v11 patch. Looke like reviewboard comments are not reflected here on the Jira but my comments on earlier patch revisions are here: https://reviews.apache.org/r/62779/ I will commit this shortly. > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12578: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.7.5 Status: Resolved (was: Patch Available) Committed this. Thank you for the contribution, [~ajayydv]. > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Fix For: 2.7.5 > > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured
[ https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12614: -- Attachment: HDFS-12614.04.patch Thanks for the review [~daryn]. That's right, string literals and constant string expressions are already interned. Attached 04 patch, removing the explicit string intern. Please take a look at the latest revision. > FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider > configured > -- > > Key: HDFS-12614 > URL: https://issues.apache.org/jira/browse/HDFS-12614 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12614.01.patch, HDFS-12614.02.patch, > HDFS-12614.03.patch, HDFS-12614.04.patch, HDFS-12614.test.01.patch > > > When INodeAttributesProvider is configured, and when resolving a path (like > "/") and checking for permission, the following code, when working on > {{pathByNameArr}}, throws NullPointerException. > {noformat} > private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx, > INode inode, int snapshotId) { > INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId); > if (getAttributesProvider() != null) { > String[] elements = new String[pathIdx + 1]; > for (int i = 0; i < elements.length; i++) { > elements[i] = DFSUtil.bytes2String(pathByNameArr[i]); <=== > } > inodeAttrs = getAttributesProvider().getAttributes(elements, > inodeAttrs); > } > return inodeAttrs; > } > {noformat} > Looks like for paths like "/", where the components split on the delimiter > "/" can be null, the pathByNameArr array can have null elements and can throw > NPE. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
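The failure mode — the root path "/" splitting into empty or null components — can be illustrated with plain string splitting. This is a hypothetical standalone sketch (PathSplit, components, safeName are illustrative names), not the FSPermissionChecker fix itself:

```java
/**
 * Hypothetical standalone sketch of the empty-component problem described
 * above; not the FSPermissionChecker code itself.
 */
public class PathSplit {

    /** Splits a path into components; for "/" both components are empty. */
    public static String[] components(String path) {
        return path.split("/", -1); // limit -1 keeps trailing empty strings
    }

    /**
     * Null- and bounds-safe component lookup, analogous to guarding the
     * DFSUtil.bytes2String(pathByNameArr[i]) call in getINodeAttrs().
     */
    public static String safeName(String[] parts, int i) {
        return (i < parts.length && parts[i] != null) ? parts[i] : "";
    }
}
```

The guard simply maps a missing or null component to the empty string rather than dereferencing it, which is the shape of fix the NPE above calls for.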
[jira] [Updated] (HDFS-12596) Add TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt back to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-12596: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.7.5 Status: Resolved (was: Patch Available) > Add TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt back to branch-2.7 > -- > > Key: HDFS-12596 > URL: https://issues.apache.org/jira/browse/HDFS-12596 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 2.7.4 >Reporter: Xiao Chen >Assignee: Xiao Chen > Fix For: 2.7.5 > > Attachments: HDFS-12596.branch-2.7.01.patch > > > {{TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt}} was reverted by > HDFS-11743, but it is unrelated to HDFS-7933 and pretty contained by > HDFS-11445. We should add it back. > See > https://issues.apache.org/jira/browse/HDFS-11743?focusedCommentId=16186328=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16186328 > for details. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204216#comment-16204216 ] Xiao Chen commented on HDFS-12578: -- Pre-commit for the new run is at https://builds.apache.org/job/PreCommit-hdfs-Build/21686/. Emailed common-dev about the branch-2.7 failures in the 'H9 build slave is bad' thread, but there is no clear solution yet. This fix is test-only and I manually verified the fixed test passes on branch-2.7; committing patch 2. > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12596) Add TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt back to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204215#comment-16204215 ] Xiao Chen commented on HDFS-12596: -- Pre-commit failures unrelated, manually verified the changed test passes. Committing this based on Brahma's +1. Thanks! > Add TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt back to branch-2.7 > -- > > Key: HDFS-12596 > URL: https://issues.apache.org/jira/browse/HDFS-12596 > Project: Hadoop HDFS > Issue Type: Test > Components: test >Affects Versions: 2.7.4 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HDFS-12596.branch-2.7.01.patch > > > {{TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt}} was reverted by > HDFS-11743, but it is unrelated to HDFS-7933 and pretty contained by > HDFS-11445. We should add it back. > See > https://issues.apache.org/jira/browse/HDFS-11743?focusedCommentId=16186328=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16186328 > for details. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445
[ https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-12641: --- Attachment: HDFS-12641.branch-2.7.001.patch Uploaded a patch. There are a number of conflicts due to the code differences between 2.7 and 2.8. > Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445 > - > > Key: HDFS-12641 > URL: https://issues.apache.org/jira/browse/HDFS-12641 > Project: Hadoop HDFS > Issue Type: Task >Affects Versions: 2.7.4 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-12641.branch-2.7.001.patch > > > Our internal testing caught a regression in HDFS-11445 when we cherry-picked > the commit into CDH. Basically, it produces bogus missing file warnings. > Further analysis revealed that the regression is actually fixed by HDFS-11755. > Because of the order in which commits were merged in branch-2.8 ~ trunk (HDFS-11755 was > committed before HDFS-11445), the regression never actually surfaced for > Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has HDFS-11445 but no > HDFS-11755, I suspect the regression is more visible for Hadoop 2.7.4. > I am filing this jira to raise awareness, rather than to simply backport > HDFS-11755 into branch-2.7. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204213#comment-16204213 ] Arpit Agarwal commented on HDFS-12553: -- +1 on the v11 patch. Looks like reviewboard comments are not reflected here on the Jira but my comments on earlier patch revisions are here: https://reviews.apache.org/r/62779/ I will commit this shortly. > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch, HDFS-12553.08.patch, > HDFS-12553.09.patch, HDFS-12553.10.patch, HDFS-12553.11.patch > > > Add nameServiceId to QJournalProtocol. > This is used during federated + HA setup to find journalnodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204205#comment-16204205 ] Hadoop QA commented on HDFS-12637: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 28s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 51s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 139 new + 255 unchanged - 139 fixed = 394 total (was 394) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}173m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12637 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892030/HDFS-12637.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4d8cb0823d06 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f4fb669 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21689/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21689/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test
[jira] [Commented] (HDFS-12659) Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat recheck interval
[ https://issues.apache.org/jira/browse/HDFS-12659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204177#comment-16204177 ] Hadoop QA commented on HDFS-12659: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}149m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.tracing.TestTracing | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12659 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892110/HDFS-12659.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 57698055d0ed 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f4fb669 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21690/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21690/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21690/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically
[jira] [Commented] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204164#comment-16204164 ] Hadoop QA commented on HDFS-12613: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 39s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 19s{color} | {color:orange} root: The patch generated 1 new + 82 unchanged - 0 fixed = 83 total (was 82) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 0s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}204m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12613 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891835/HDFS-12613.03.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 80f93baba711 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool |
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Patch Available (was: Open) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, > HDFS-11902-HDFS-9806.010.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11902: -- Status: Open (was: Patch Available) > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, > HDFS-11902-HDFS-9806.010.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12612) DFSStripedOutputStream#close will throw if called a second time with a failed streamer
[ https://issues.apache.org/jira/browse/HDFS-12612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12612: - Attachment: HDFS-12612.01.patch Discussed offline with Andrew. Now we consider that if more than {{dataUnits}} streamers succeed, then the {{OutputStream}} is successful. Also moved the {{lastException}} to {{DFSStripedOutputStream}} so that it can be set by {{abort()}}. > DFSStripedOutputStream#close will throw if called a second time with a failed > streamer > -- > > Key: HDFS-12612 > URL: https://issues.apache.org/jira/browse/HDFS-12612 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Andrew Wang >Assignee: Lei (Eddy) Xu > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-12612.00.patch, HDFS-12612.01.patch > > > Found while testing with Hive. We have a cluster with 2 DNs and the XOR-2-1 > policy. If you write a file and call close() twice, it throws this exception: > {noformat} > 17/10/04 16:02:14 WARN hdfs.DFSOutputStream: Cannot allocate parity > block(index=2, policy=XOR-2-1-1024k). Not enough datanodes? Exclude nodes=[] > ... > Caused by: java.io.IOException: Failed to get parity block, index=2 > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:500) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:524) > ~[hadoop-hdfs-client-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] 
> {noformat} > This is because in DFSStripedOutputStream#closeImpl, if the stream is closed, > we throw an exception if any of the striped streamers had an exception: > {code} > protected synchronized void closeImpl() throws IOException { > if (isClosed()) { > final MultipleIOException.Builder b = new MultipleIOException.Builder(); > for(int i = 0; i < streamers.size(); i++) { > final StripedDataStreamer si = getStripedDataStreamer(i); > try { > si.getLastException().check(true); > } catch (IOException e) { > b.add(e); > } > } > final IOException ioe = b.build(); > if (ioe != null) { > throw ioe; > } > return; > } > {code} > I think this is incorrect, since we only need to throw in this situation if > we have too many failed streamers. close should also be idempotent, so it > should throw the first time we call close if it's going to throw at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
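The {{closeImpl()}} discussion above can be illustrated with a small sketch: {{close()}} succeeds when enough streamers are healthy (at least the number of data units, which is an assumption read from the comments above), and repeated calls replay the first outcome instead of re-inspecting streamer state, making the method idempotent. The class and field names below are illustrative only, not the actual HDFS-12612 patch:

```java
import java.io.IOException;
import java.util.List;

// Illustrative sketch of the close() semantics discussed in HDFS-12612:
// succeed when enough streamers are healthy, and make repeated close()
// calls idempotent by remembering the first outcome.
class StripedCloseSketch {
    private final int dataUnits;        // e.g. 2 for the XOR-2-1 policy
    private final int totalStreamers;   // data + parity streamers
    private final List<IOException> streamerFailures;
    private boolean closed = false;
    private IOException firstCloseException = null;

    StripedCloseSketch(int dataUnits, int totalStreamers,
                       List<IOException> streamerFailures) {
        this.dataUnits = dataUnits;
        this.totalStreamers = totalStreamers;
        this.streamerFailures = streamerFailures;
    }

    synchronized void close() throws IOException {
        if (closed) {
            // Idempotent: replay the original outcome rather than
            // re-inspecting per-streamer exceptions.
            if (firstCloseException != null) {
                throw firstCloseException;
            }
            return;
        }
        closed = true;
        // Assumption for this sketch: success requires at least
        // dataUnits healthy streamers.
        int healthy = totalStreamers - streamerFailures.size();
        if (healthy < dataUnits) {
            firstCloseException = new IOException(
                "only " + healthy + " healthy streamers, need " + dataUnits);
            throw firstCloseException;
        }
        // Enough streamers succeeded; a failed parity streamer alone
        // does not fail the close.
    }
}
```

With the XOR-2-1 setup from the report above (2 data units, 3 streamers), a single failed parity streamer no longer makes a second {{close()}} call throw, which is the idempotency the JIRA asks for.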
[jira] [Updated] (HDFS-12661) Ozone: Support optional documentation link in KSM/SCM webui
[ https://issues.apache.org/jira/browse/HDFS-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12661: Attachment: HDFS-12661-HDFS-7240.001.patch > Ozone: Support optional documentation link in KSM/SCM webui > --- > > Key: HDFS-12661 > URL: https://issues.apache.org/jira/browse/HDFS-12661 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12661-HDFS-7240.001.patch > > > In some cases it could be useful to include additional documentation to the > SCM/KSM web ui. > This patch includes an optional hook. During the startup scm/ksm web ui do a > HTTP HEAD request and if docs/index.html exists, an additional Documentation > link will be displayed in the menu header. > Long term we can generate documentation automatically from the source. > Testing: > Do a full build, start scm: no link in the ui. > Add some optional documentation (choose one): > * hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/scm/docs/index.html > (before build) > * hadoop-hdfs-project/hadoop-hdfs/target/webapps/scm/docs/index.html (after > clean, before build) > * > hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/share/hadoop/hdfs/webapps/scm/docs/index.html > (after build) > And start scm again. There should be a documentation link in the menu which > opens the optional documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12661) Ozone: Support optional documentation link in KSM/SCM webui
[ https://issues.apache.org/jira/browse/HDFS-12661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12661: Status: Patch Available (was: Open) > Ozone: Support optional documentation link in KSM/SCM webui > --- > > Key: HDFS-12661 > URL: https://issues.apache.org/jira/browse/HDFS-12661 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12661-HDFS-7240.001.patch > > > In some cases it could be useful to include additional documentation to the > SCM/KSM web ui. > This patch includes an optional hook. During the startup scm/ksm web ui do a > HTTP HEAD request and if docs/index.html exists, an additional Documentation > link will be displayed in the menu header. > Long term we can generate documentation automatically from the source. > Testing: > Do a full build, start scm: no link in the ui. > Add some optional documentation (choose one): > * hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/scm/docs/index.html > (before build) > * hadoop-hdfs-project/hadoop-hdfs/target/webapps/scm/docs/index.html (after > clean, before build) > * > hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/share/hadoop/hdfs/webapps/scm/docs/index.html > (after build) > And start scm again. There should be a documentation link in the menu which > opens the optional documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12661) Ozone: Support optional documentation link in KSM/SCM webui
Elek, Marton created HDFS-12661: --- Summary: Ozone: Support optional documentation link in KSM/SCM webui Key: HDFS-12661 URL: https://issues.apache.org/jira/browse/HDFS-12661 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Elek, Marton Assignee: Elek, Marton In some cases it could be useful to include additional documentation in the SCM/KSM web UI. This patch includes an optional hook. During startup, the SCM/KSM web UI does an HTTP HEAD request and, if docs/index.html exists, an additional Documentation link is displayed in the menu header. Long term, we can generate documentation automatically from the source. Testing: Do a full build and start SCM: no link in the UI. Add some optional documentation (choose one): * hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/scm/docs/index.html (before build) * hadoop-hdfs-project/hadoop-hdfs/target/webapps/scm/docs/index.html (after clean, before build) * hadoop-dist/target/hadoop-3.1.0-SNAPSHOT/share/hadoop/hdfs/webapps/scm/docs/index.html (after build) Then start SCM again. There should be a documentation link in the menu which opens the optional documentation. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
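The HEAD-request probe described in this patch can be sketched as follows. The real hook runs in the browser-side UI code; this Java version only illustrates the idea, and {{DocsProbe}} is a hypothetical name, not a class from the actual patch:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the optional-documentation probe: issue an HTTP HEAD request
// for docs/index.html and show the menu link only if the resource exists.
class DocsProbe {
    static boolean docsAvailable(String baseUrl) {
        try {
            HttpURLConnection conn = (HttpURLConnection)
                new URL(baseUrl + "/docs/index.html").openConnection();
            conn.setRequestMethod("HEAD"); // no body transferred
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            int code = conn.getResponseCode();
            conn.disconnect();
            return code == HttpURLConnection.HTTP_OK;
        } catch (IOException e) {
            // Missing docs are an expected condition; just hide the link.
            return false;
        }
    }
}
```

A HEAD request keeps the probe cheap: the server reports whether docs/index.html exists without transferring the page itself.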
[jira] [Updated] (HDFS-11902) [READ] Merge BlockFormatProvider and FileRegionProvider.
[ https://issues.apache.org/jira/browse/HDFS-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewan Higgs updated HDFS-11902: -- Attachment: HDFS-11902-HDFS-9806.010.patch Patch fixing the findbugs issue and making {{ImageWriter.Options}} methods use the {{setBlocks}} style. > [READ] Merge BlockFormatProvider and FileRegionProvider. > > > Key: HDFS-11902 > URL: https://issues.apache.org/jira/browse/HDFS-11902 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti > Attachments: HDFS-11902-HDFS-9806.001.patch, > HDFS-11902-HDFS-9806.002.patch, HDFS-11902-HDFS-9806.003.patch, > HDFS-11902-HDFS-9806.004.patch, HDFS-11902-HDFS-9806.005.patch, > HDFS-11902-HDFS-9806.006.patch, HDFS-11902-HDFS-9806.007.patch, > HDFS-11902-HDFS-9806.008.patch, HDFS-11902-HDFS-9806.009.patch, > HDFS-11902-HDFS-9806.010.patch > > > Currently {{BlockFormatProvider}} and {{TextFileRegionProvider}} perform > almost the same function on the Namenode and Datanode respectively. This JIRA > is to merge them into one. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol
[ https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204025#comment-16204025 ] Hadoop QA commented on HDFS-12549: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 2s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 3s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 16s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 40s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 11m 47s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 47s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 31s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s{color} | {color:red} hadoop-ozone in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}231m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12549 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892086/HDFS-12549-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d7f31535b9c6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204020#comment-16204020 ] Xiao Chen commented on HDFS-12637: -- Thanks [~tasanuma0829] for the pointer and explanation. Makes sense to me. The test only took a few seconds, but I think the rule of using parameterized tests only when no minicluster is started sounds better. Change LGTM; the pre-commit failures look irrelevant. Retriggered a new run. > Extend TestDistributedFileSystemWithECFile with a random EC policy > -- > > Key: HDFS-12637 > URL: https://issues.apache.org/jira/browse/HDFS-12637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12637.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12603) Enable async edit logging by default
[ https://issues.apache.org/jira/browse/HDFS-12603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204013#comment-16204013 ] Xiao Chen commented on HDFS-12603: -- +1 pending jenkins. Thanks Andrew. > Enable async edit logging by default > > > Key: HDFS-12603 > URL: https://issues.apache.org/jira/browse/HDFS-12603 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-12603.001.patch, HDFS-12603.002.patch, > HDFS-12603.003.patch, HDFS-12603.004.patch, HDFS-12603.branch-2.01.patch, > HDFS-12603.branch-2.02.patch, HDFS-12603.injectors-not-working.patch > > > HDFS-7964 added support for async edit logging. Based on further discussion > in that JIRA, we think it's safe to turn this on by default for better > out-of-the-box performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12659) Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat recheck interval
[ https://issues.apache.org/jira/browse/HDFS-12659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203998#comment-16203998 ] Xiao Chen commented on HDFS-12659: -- This was discovered while looking at HDFS-12578. +1 pending jenkins, thanks Ajay. > Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat > recheck interval > > > Key: HDFS-12659 > URL: https://issues.apache.org/jira/browse/HDFS-12659 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HDFS-12659.01.patch > > > Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to give a wider range > than existing 1 millisecond to avoid intermittent failures. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12603) Enable async edit logging by default
[ https://issues.apache.org/jira/browse/HDFS-12603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12603: --- Attachment: HDFS-12603.004.patch Here's a patch which undoes the toString additions from 003 and also comments out those two test cases with async edit logging on. > Enable async edit logging by default > > > Key: HDFS-12603 > URL: https://issues.apache.org/jira/browse/HDFS-12603 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-12603.001.patch, HDFS-12603.002.patch, > HDFS-12603.003.patch, HDFS-12603.004.patch, HDFS-12603.branch-2.01.patch, > HDFS-12603.branch-2.02.patch, HDFS-12603.injectors-not-working.patch > > > HDFS-7964 added support for async edit logging. Based on further discussion > in that JIRA, we think it's safe to turn this on by default for better > out-of-the-box performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12603) Enable async edit logging by default
[ https://issues.apache.org/jira/browse/HDFS-12603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12603: --- Status: Patch Available (was: Reopened) > Enable async edit logging by default > > > Key: HDFS-12603 > URL: https://issues.apache.org/jira/browse/HDFS-12603 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 3.0.0-alpha1, 2.8.0, 2.9.0 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-12603.001.patch, HDFS-12603.002.patch, > HDFS-12603.003.patch, HDFS-12603.004.patch, HDFS-12603.branch-2.01.patch, > HDFS-12603.branch-2.02.patch, HDFS-12603.injectors-not-working.patch > > > HDFS-7964 added support for async edit logging. Based on further discussion > in that JIRA, we think it's safe to turn this on by default for better > out-of-the-box performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12660) Enable async edit logging test cases in TestFailureToReadEdits
Andrew Wang created HDFS-12660: -- Summary: Enable async edit logging test cases in TestFailureToReadEdits Key: HDFS-12660 URL: https://issues.apache.org/jira/browse/HDFS-12660 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.9.0, 3.0.0 Reporter: Andrew Wang Per discussion in HDFS-12603, this test is failing occasionally due to mysterious mocking issues. Let's try and fix them in this issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12603) Enable async edit logging by default
[ https://issues.apache.org/jira/browse/HDFS-12603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203973#comment-16203973 ] Andrew Wang commented on HDFS-12603: I'm inclined at this point to defer enabling async logging for this test to future work. It seems to be failing erratically due to mocking errors. I'll file a follow-on JIRA and post a new patch with a TODO pointing at the follow-on. > Enable async edit logging by default > > > Key: HDFS-12603 > URL: https://issues.apache.org/jira/browse/HDFS-12603 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang > Attachments: HDFS-12603.001.patch, HDFS-12603.002.patch, > HDFS-12603.003.patch, HDFS-12603.branch-2.01.patch, > HDFS-12603.branch-2.02.patch, HDFS-12603.injectors-not-working.patch > > > HDFS-7964 added support for async edit logging. Based on further discussion > in that JIRA, we think it's safe to turn this on by default for better > out-of-the-box performance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12659) Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat recheck interval
[ https://issues.apache.org/jira/browse/HDFS-12659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12659: -- Status: Patch Available (was: Open) > Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat > recheck interval > > > Key: HDFS-12659 > URL: https://issues.apache.org/jira/browse/HDFS-12659 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HDFS-12659.01.patch > > > Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to give a wider range > than existing 1 millisecond to avoid intermittent failures. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12659) Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat recheck interval
[ https://issues.apache.org/jira/browse/HDFS-12659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12659: -- Attachment: HDFS-12659.01.patch > Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat > recheck interval > > > Key: HDFS-12659 > URL: https://issues.apache.org/jira/browse/HDFS-12659 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Minor > Attachments: HDFS-12659.01.patch > > > Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to give a wider range > than existing 1 millisecond to avoid intermittent failures. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-12411: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks all for the reviews. I've committed the patch to the feature branch. > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch, > HDFS-12411-HDFS-7240.006.patch, HDFS-12411-HDFS-7240.007.patch, > HDFS-12411-HDFS-7240.008.patch > > > The current DN ReportState for containers only has a counter; we will need to > include individual container usage information so that SCM can > * close containers when they are full > * assign containers for block service with different policies > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203938#comment-16203938 ] Ajay Kumar commented on HDFS-12578: --- [~xiaochen], thanks for review. Filed [HDFS-12659] for tracking change in 2.8+. > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps
[ https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203925#comment-16203925 ] Wei Yan commented on HDFS-12502: LGTM. One minor format nit:
{code}
 verify(rb, times(3)).addCounter(Interns.info("op=listStatus." +
     "user=test.count", "Total operations performed by user"), 3L);
+
+verify(rb, times(3)).addCounter(Interns.info("op=" + FILES_IN_GETLISTING +
+    ".user=test.count", "Total operations performed by user"), 1000L);
 }
{code}
there are some extra spaces in the last changed line. > nntop should support a category based on FilesInGetListingOps > - > > Key: HDFS-12502 > URL: https://issues.apache.org/jira/browse/HDFS-12502 > Project: Hadoop HDFS > Issue Type: Improvement > Components: metrics >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch, > HDFS-12502.02.patch, HDFS-12502.03.patch > > > Large listing ops can oftentimes be the main contributor to NameNode > slowness. The aggregate cost of listing ops is proportional to the > {{FilesInGetListingOps}} rather than the number of listing ops. Therefore > it'd be very useful for nntop to support this category. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12641) Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445
[ https://issues.apache.org/jira/browse/HDFS-12641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-12641: --- Issue Type: Task (was: Bug) > Backport HDFS-11755 into branch-2.7 to fix a regression in HDFS-11445 > - > > Key: HDFS-12641 > URL: https://issues.apache.org/jira/browse/HDFS-12641 > Project: Hadoop HDFS > Issue Type: Task >Affects Versions: 2.7.4 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > > Our internal testing caught a regression in HDFS-11445 when we cherry picked > the commit into CDH. Basically, it produces bogus missing file warnings. > Further analysis revealed that the regression is actually fixed by HDFS-11755. > Because of the order commits are merged in branch-2.8 ~ trunk (HDFS-11755 was > committed before HDFS-11445), the regression was never actually surfaced for > Hadoop 2.8/3.0.0-(alpha/beta) users. Since branch-2.7 has HDFS-11445 but no > HDFS-11755, I suspect the regression is more visible for Hadoop 2.7.4. > I am filing this jira to raise more awareness, than simply backporting > HDFS-11755 into branch-2.7. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12613) Native EC coder should implement release() as idempotent function.
[ https://issues.apache.org/jira/browse/HDFS-12613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-12613: - Attachment: HDFS-12613.04.patch Thanks [~drankye]. I updated the patch to address the checkstyle warnings and retriggered another Jenkins build. > Native EC coder should implement release() as idempotent function. > -- > > Key: HDFS-12613 > URL: https://issues.apache.org/jira/browse/HDFS-12613 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.0.0-beta1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu > Attachments: HDFS-12613.00.patch, HDFS-12613.01.patch, > HDFS-12613.02.patch, HDFS-12613.03.patch, HDFS-12613.04.patch > > > Recently, we found the native EC coder crashes the JVM because > {{NativeRSDecoder#release()}} is being called multiple times (HDFS-12612 and > HDFS-12606). > We should strengthen the native code implementation to make {{release()}} > idempotent as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12659) Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat recheck interval
Ajay Kumar created HDFS-12659: - Summary: Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to increase heartbeat recheck interval Key: HDFS-12659 URL: https://issues.apache.org/jira/browse/HDFS-12659 Project: Hadoop HDFS Issue Type: Bug Reporter: Ajay Kumar Assignee: Ajay Kumar Priority: Minor Update TestDeadDatanode#testNonDFSUsedONDeadNodeReReg to give a wider range than existing 1 millisecond to avoid intermittent failures. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12653) Implement toArray() and subArray() for ReadOnlyList
[ https://issues.apache.org/jira/browse/HDFS-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203897#comment-16203897 ] Daryn Sharp commented on HDFS-12653: I've often wondered why {{ReadOnlyList}} even exists. If we keep making it act more like a standard collection, why not use a standard collection? > Implement toArray() and subArray() for ReadOnlyList > --- > > Key: HDFS-12653 > URL: https://issues.apache.org/jira/browse/HDFS-12653 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > > {{ReadOnlyList}} today gives an unmodifiable view of the backing List. This > list supports the following Util methods for easy construction of read-only views > of any given list. > {noformat} > public static ReadOnlyList asReadOnlyList(final List list) > public static List asList(final ReadOnlyList list) > {noformat} > {{asList}} above additionally overrides {{Object[] toArray()}} of the > {{java.util.List}} interface. Unlike {{java.util.List}}, the above one > returns an array of Objects referring to the backing list and avoids any > copying of objects. Given that we have many usages of read-only lists, > 1. Let's have a light-weight / shared-view {{toArray()}} implementation for > {{ReadOnlyList}} as well. > 2. Additionally, similar to {{java.util.List#subList(fromIndex, toIndex)}}, > let's have {{ReadOnlyList#subArray(fromIndex, toIndex)}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
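A minimal sketch of the {{toArray()}} / {{subArray()}} additions proposed in HDFS-12653, under the semantics described in the issue; {{ReadOnlyView}} is a hypothetical stand-in for the real HDFS {{ReadOnlyList}}, and this sketch builds a fresh array of shared element references (the JIRA proposes avoiding even that copy where the backing store is already an array):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative read-only view over a backing list, with the array-style
// accessors discussed in HDFS-12653. Not the actual Hadoop class.
final class ReadOnlyView<E> {
    private final List<E> backing;

    ReadOnlyView(List<E> backing) { this.backing = backing; }

    E get(int i) { return backing.get(i); }
    int size() { return backing.size(); }

    // The array itself is fresh, but the element references are shared
    // with the backing list: no elements are cloned.
    Object[] toArray() {
        Object[] a = new Object[backing.size()];
        for (int i = 0; i < a.length; i++) {
            a[i] = backing.get(i);
        }
        return a;
    }

    // Slice [from, to), analogous to List#subList(fromIndex, toIndex).
    Object[] subArray(int from, int to) {
        return Arrays.copyOfRange(toArray(), from, to);
    }
}
```

Whether such a wrapper is worth keeping over a plain {{Collections.unmodifiableList}} is exactly the question Daryn raises above.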
[jira] [Commented] (HDFS-11797) BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException when corrupt replicas are inconsistent
[ https://issues.apache.org/jira/browse/HDFS-11797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203894#comment-16203894 ] Wei-Chiu Chuang commented on HDFS-11797: For future reference, I can confirm an occurrence of this bug happened to a customer of ours, and I was able to find the sequence of events leading to this bug, which is exactly what HDFS-11445 fixes. # A datanode was shut down, making the replica stale. # NameNode detected the staleness, adding it to corruptReplicaMap. Because the replica was on a DataNode that was out of date, the replica was not invalidated. So the corruptReplicaMap had the replica, and blockMap had the replica as well. # The block was updated, causing the stale replica to be removed from blockMap, *but it was not removed from corruptReplicaMap*. # A client calling getBlockLocations caused an AIOOBE because of the mismatch. {noformat} 2017-10-10 14:48:10,664 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent number of corrupt replicas for blk_1041920008_1133174794 blockMap has 0 but corrupt replicas map has 1 2017-10-10 14:48:10,665 WARN org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 10.103.4.11:56487 Call#5239908 Retry#0 java.lang.ArrayIndexOutOfBoundsException: 2 at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:982) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:929) at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1031) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2059) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:2008) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1921) at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:572) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:89) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211) {noformat} > BlockManager#createLocatedBlocks() can throw ArrayIndexOutofBoundsException > when corrupt replicas are inconsistent > -- > > Key: HDFS-11797 > URL: https://issues.apache.org/jira/browse/HDFS-11797 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Kuhu Shukla >Assignee: Kuhu Shukla >Priority: Critical > Attachments: HDFS-11797.001.patch > > > The calculation for {{numMachines}} can be too less (causing > ArrayIndexOutOfBoundsException) or too many (causing NPE (HDFS-9958)) if data > structures find inconsistent number of corrupt replicas. This was earlier > found related to failed storages. This JIRA tracks a change that works for > all possible cases of inconsistencies. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
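The failure mode described above can be reduced to a toy reconstruction: the machines array is sized using the corrupt-replica count, so a stale entry in corruptReplicaMap that is out of sync with blockMap leaves the array too small for the entries actually iterated. All counts and names below are illustrative, not the real BlockManager code.

```java
public class CorruptReplicaMismatchSketch {
    // Returns "ok" when the two counts are consistent, "AIOOBE" when the
    // stale corrupt count makes the array undersized.
    static String placeReplicas(int storagesToPlace, int corruptCountUsedForSizing) {
        Object[] machines = new Object[storagesToPlace - corruptCountUsedForSizing];
        try {
            for (int i = 0; i < storagesToPlace; i++) {
                machines[i] = "dn" + i;  // overruns when the array was undersized
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            return "AIOOBE";
        }
        return "ok";
    }

    public static void main(String[] args) {
        // Consistent maps: 3 storages, 0 corrupt -> array sized 3, no overrun.
        System.out.println(placeReplicas(3, 0));  // ok
        // Stale corruptReplicaMap entry: array sized 2, loop places 3 entries.
        System.out.println(placeReplicas(3, 1));  // AIOOBE
    }
}
```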
[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203872#comment-16203872 ] Xiao Chen commented on HDFS-12578: -- +1 on patch 2 pending jenkins. Thanks Ajay. bq. For 2.8+ shall i file a new jira? Sure, we can use this one to track the consistent failure in 2.7, and your new jira for possible intermittent failures. Strictly speaking the {{DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY}} bump for 2.7 should be in the new jira too, but IMO this is pretty minor so will just commit patch 2 to branch-2.7 once pre-commit comes back. > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12411) Ozone: Add container usage information to DN container report
[ https://issues.apache.org/jira/browse/HDFS-12411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203870#comment-16203870 ] Anu Engineer commented on HDFS-12411: - +1. [~xyao] Thanks for updating this patch. Let us discuss if we should have time-based or request based DN report when you get to the server side. > Ozone: Add container usage information to DN container report > - > > Key: HDFS-12411 > URL: https://issues.apache.org/jira/browse/HDFS-12411 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone, scm >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Labels: ozoneMerge > Attachments: HDFS-12411-HDFS-7240.001.patch, > HDFS-12411-HDFS-7240.002.patch, HDFS-12411-HDFS-7240.003.patch, > HDFS-12411-HDFS-7240.004.patch, HDFS-12411-HDFS-7240.005.patch, > HDFS-12411-HDFS-7240.006.patch, HDFS-12411-HDFS-7240.007.patch, > HDFS-12411-HDFS-7240.008.patch > > > Current DN ReportState for container only has a counter, we will need to > include individual container usage information so that SCM can > * close container when they are full > * assign container for block service with different policies. > * etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11590) Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or not in the cache
[ https://issues.apache.org/jira/browse/HDFS-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203855#comment-16203855 ] Daryn Sharp commented on HDFS-11590: Skimmed the patch, I think it probably looks ok, but the test is only proving renewer attempted to close the files. I'd like to see a test verify the client was unregistered from the renewer and doesn't call renew on it again – I haven't yet verified that happens. Likewise that other clients are not removed and continue to be renewed. I'd prefer the test be more precise by specifically triggering renewals and verifying the resulting behavior instead of waiting up to 5s. Timeouts are always problematic on very slow build nodes. > Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or > not in the cache > > > Key: HDFS-11590 > URL: https://issues.apache.org/jira/browse/HDFS-11590 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.6.0 > Environment: Releases: > cloudera release cdh-5.5.0 > openjdk version "1.8.0_91" > linux centos6 servers > Cluster info: > Namenode and resourcemanager in HA with kerberos authentication > More than 1300 datanodes/nodemanagers >Reporter: Nicolas Fraison >Priority: Minor > Attachments: HDFS-11590.001.patch, HDFS-11590.002.patch, > HDFS-11590.patch > > > We have faced some huge slowdowns on our namenode due to all our nodemanagers > continuing to retry to renew a lease and reconnecting to the namenode every > second during 1 hour due to some HDFS_DELEGATION_TOKEN being expired or not > in the cache. > The number of time_wait connection on our namenode was stuck to the maximum > configured of 250k during this period due to the reconnections each time. 
> {code} > 2017-03-02 11:51:42,817 INFO > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization successful for appattempt_1488396860014_156103_01 > (auth:TOKEN) for protocol=interface > org.apache.hadoop.yarn.api.ContainerManagementProtocolPB > 2017-03-02 11:51:43,414 INFO > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization successful for appattempt_1488396860014_156120_01 > (auth:TOKEN) for protocol=interface > org.apache.hadoop.yarn.api.ContainerManagementProtocolPB > 2017-03-02 11:51:51,994 WARN > org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException > as:prediction (auth:SIMPLE) > cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > 2017-03-02 11:51:51,995 WARN org.apache.hadoop.ipc.Client: Exception > encountered while connecting to the server : > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > 2017-03-02 11:51:51,995 WARN > org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException > as:prediction (auth:SIMPLE) > cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > 2017-03-02 11:51:51,995 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to > renew lease for [DFSClient_NONMAPREDUCE_1560141256_4187204] for 30 seconds. > Will retry shortly ... 
> token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > at org.apache.hadoop.ipc.Client.call(Client.java:1472) > at org.apache.hadoop.ipc.Client.call(Client.java:1403) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at com.sun.proxy.$Proxy20.renewLease(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571) > at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) > at com.sun.proxy.$Proxy21.renewLease(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:921) > at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423) > at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448) > at
[jira] [Commented] (HDFS-12614) FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider configured
[ https://issues.apache.org/jira/browse/HDFS-12614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203824#comment-16203824 ] Daryn Sharp commented on HDFS-12614: Haven't fully considered the provider interface but it currently enforces an inefficient call pattern. We can discuss over there. As for this patch, remove the interning. Constant strings are already instance equivalent. > FSPermissionChecker#getINodeAttrs() throws NPE when INodeAttributesProvider > configured > -- > > Key: HDFS-12614 > URL: https://issues.apache.org/jira/browse/HDFS-12614 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy > Attachments: HDFS-12614.01.patch, HDFS-12614.02.patch, > HDFS-12614.03.patch, HDFS-12614.test.01.patch > > > When INodeAttributesProvider is configured, and when resolving path (like > "/") and checking for permission, the following code when working on > {{pathByNameArr}} throws NullPointerException. > {noformat} > private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx, > INode inode, int snapshotId) { > INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId); > if (getAttributesProvider() != null) { > String[] elements = new String[pathIdx + 1]; > for (int i = 0; i < elements.length; i++) { > elements[i] = DFSUtil.bytes2String(pathByNameArr[i]); <=== > } > inodeAttrs = getAttributesProvider().getAttributes(elements, > inodeAttrs); > } > return inodeAttrs; > } > {noformat} > Looks like for paths like "/" where the split components based on delimiter > "/" can be null, the pathByNameArr array can have null elements and can throw > NPE. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
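The null-component guard implied by the HDFS-12614 description above can be sketched as follows. bytes2String here is a stand-in for DFSUtil.bytes2String, and representing the root path's name component as null is an assumption taken from the report, not the verified HDFS internals.

```java
import java.nio.charset.StandardCharsets;

public class PathComponentsSketch {
    // Null-safe conversion: a null component (e.g. the root "/") maps to ""
    // instead of throwing a NullPointerException.
    static String bytes2String(byte[] b) {
        return b == null ? "" : new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Components of "/" as the permission checker might see them:
        byte[][] pathByNameArr = { null };  // root has no name bytes
        String[] elements = new String[pathByNameArr.length];
        for (int i = 0; i < elements.length; i++) {
            // With the guard, the loop survives the null first component.
            elements[i] = bytes2String(pathByNameArr[i]);
        }
        System.out.println(elements[0].isEmpty());  // true
    }
}
```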
[jira] [Comment Edited] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203800#comment-16203800 ] Ajay Kumar edited comment on HDFS-12578 at 10/13/17 4:31 PM: - [~xiaochen], Ya, thanks for sharing details about related jira's. Updated patch with suggested change. For 2.8+ shall i file a new jira? was (Author: ajayydv): [~xiaochen], Updated patch with suggested change. For 2.8+ shall i file a new jira? > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203800#comment-16203800 ] Ajay Kumar commented on HDFS-12578: --- [~xiaochen], Updated patch with suggested change. For 2.8+ shall i file a new jira? > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12658) Lease renewal causes connection flapping
[ https://issues.apache.org/jira/browse/HDFS-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203796#comment-16203796 ] Daryn Sharp commented on HDFS-12658: The logic should be smarter. A simple improvement may be something like hdfsTimeout/2 - 1000 to allow the client 1s to issue the renewal. That should prevent flapping in the majority of cases. > Lease renewal causes connection flapping > > > Key: HDFS-12658 > URL: https://issues.apache.org/jira/browse/HDFS-12658 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.0.0-alpha >Reporter: Daryn Sharp > > Adding a dfsclient to the lease renewer uses the minimum of 1/2 the soft > timeout vs. 1/2 the client's timeout (when the client closes an idle > connection). Both default to 1m, so clients with open files that are > otherwise not making calls to the NN will experience connection flapping. > Re-authentication is unnecessarily taxing on the ipc layer.
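The hdfsTimeout/2 - 1000 suggestion above works out as follows with the default 1-minute values. This is a sketch of the arithmetic only; the method names are illustrative, not the LeaseRenewer API.

```java
public class RenewalIntervalSketch {
    // Current behavior: renew at min(softTimeout/2, hdfsTimeout/2). With both
    // defaulting to 60s, the renewal lands exactly when the idle connection
    // closes, so every renewal re-opens (and re-authenticates) a connection.
    static long currentIntervalMs(long softTimeoutMs, long hdfsTimeoutMs) {
        return Math.min(softTimeoutMs / 2, hdfsTimeoutMs / 2);
    }

    // Suggested behavior: shave ~1s off the connection half-timeout so the
    // renewal goes out over the still-open connection.
    static long proposedIntervalMs(long softTimeoutMs, long hdfsTimeoutMs) {
        return Math.min(softTimeoutMs / 2, hdfsTimeoutMs / 2 - 1000);
    }

    public static void main(String[] args) {
        long soft = 60_000, hdfs = 60_000;  // both default to 1 minute
        System.out.println(currentIntervalMs(soft, hdfs));   // 30000: races the idle close
        System.out.println(proposedIntervalMs(soft, hdfs));  // 29000: beats it by 1s
    }
}
```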
[jira] [Updated] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12578: -- Attachment: HDFS-12578-branch-2.7.002.patch > TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7 > > > Key: HDFS-12578 > URL: https://issues.apache.org/jira/browse/HDFS-12578 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Xiao Chen >Assignee: Ajay Kumar >Priority: Blocker > Attachments: HDFS-12578-branch-2.7.001.patch, > HDFS-12578-branch-2.7.002.patch > > > It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently > failing in branch-2.7. We should investigate and fix it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12658) Lease renewal causes connection flapping
Daryn Sharp created HDFS-12658: -- Summary: Lease renewal causes connection flapping Key: HDFS-12658 URL: https://issues.apache.org/jira/browse/HDFS-12658 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.0.0-alpha Reporter: Daryn Sharp Adding a dfsclient to the lease renewer uses the minimum of 1/2 the soft timeout vs. 1/2 the client's timeout (when the client closes an idle connection). Both default to 1m, so clients with open files that are otherwise not making calls to the NN will experience connection flapping. Re-authentication is unnecessarily taxing on the ipc layer.
[jira] [Commented] (HDFS-12656) Ozone: dozone: Use (proposed) base image from HADOOP-14898
[ https://issues.apache.org/jira/browse/HDFS-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203671#comment-16203671 ] Hadoop QA commented on HDFS-12656: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 7s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 1s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12656 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892077/HDFS-12656-HDFS-7240.001.patch | | Optional Tests | asflicense shellcheck shelldocs | | uname | Linux 8549c1787479 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 1968333 | | shellcheck | v0.4.6 | | modules | C: U: | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21684/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: dozone: Use (proposed) base image from HADOOP-14898 > -- > > Key: HDFS-12656 > URL: https://issues.apache.org/jira/browse/HDFS-12656 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12656-HDFS-7240.001.patch > > > The original docker-compose definition of the dockerized ozon cluster uses a > more complex base image (flokkr/hadoop-runner) from the flokkr project > (github.com/flokkr/flokkr) > This patch is to replace this image with a simplified version, which also > includes the source of the script which converts the environment variables to > hadoop XML format. > The simplified version is exactly the same which is proposed to be used as > the baseimage of HADOOP-14898. 
The source is available from the > HADOOP-14898 issue and the image is uploaded to the dockerhub > (https://hub.docker.com/r/elek/hadoop-runner) > As it is the proposed base image for the official hadoop images, it will be > easier to switch to apache/hadop-runner (when it will be merged). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol
[ https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDFS-12549: --- Attachment: HDFS-12549-HDFS-7240.002.patch Patch v002 after rebase. > Ozone: OzoneClient: Support for REST protocol > - > > Key: HDFS-12549 > URL: https://issues.apache.org/jira/browse/HDFS-12549 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar > Attachments: HDFS-12549-HDFS-7240.000.patch, > HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch > > > Support for REST protocol in OzoneClient. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12657) Operations based on inode id must not fallback to the path
Daryn Sharp created HDFS-12657: -- Summary: Operations based on inode id must not fallback to the path Key: HDFS-12657 URL: https://issues.apache.org/jira/browse/HDFS-12657 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.5.0 Reporter: Daryn Sharp HDFS-6294 added the ability for some path-based operations to specify an optional inode id to mimic file descriptors. If an inode id is provided and it exists, it replaces the provided path. If it doesn't exist, it has the broken behavior of falling back to the supplied path. A supplied inode id must be authoritative. A FNF should be thrown if the inode does not exist. (HDFS-10745 changed from string paths to IIPs but preserved the same broken semantics) This is broken since an operation specifying an inode for a deleted and recreated path will operate on the newer inode. If another client recreates the path, the operation is likely to fail for other reasons such as lease checks. However a multi-threaded client has a single lease id. If thread1 creates a file, it's somehow deleted, thread2 recreates the path, then further operations in thread1 may conflict with thread2 and corrupt the state of the file. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
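The authoritative-inode-id semantics argued for in HDFS-12657 above can be sketched like this. The Map stands in for the namenode's inode map; all names are illustrative, not the actual FSDirectory code.

```java
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;

public class InodeResolveSketch {
    // When the caller supplies an inode id, it is authoritative: a missing
    // inode is FileNotFound, never a silent fallback to path resolution.
    static String resolve(Map<Long, String> inodeMap, long inodeId, String path)
            throws FileNotFoundException {
        String byId = inodeMap.get(inodeId);
        if (byId == null) {
            // The broken behavior would "return path" here, silently operating
            // on whatever inode now occupies that path after delete/recreate.
            throw new FileNotFoundException("File does not exist: inodeId=" + inodeId);
        }
        return byId;
    }

    public static void main(String[] args) throws Exception {
        Map<Long, String> inodes = new HashMap<>();
        inodes.put(1001L, "/user/a/file");
        System.out.println(resolve(inodes, 1001L, "/user/a/file"));  // /user/a/file
        try {
            resolve(inodes, 9999L, "/user/a/file");  // inode deleted meanwhile
        } catch (FileNotFoundException e) {
            System.out.println("FNF");
        }
    }
}
```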
[jira] [Comment Edited] (HDFS-12656) Ozone: dozone: Use (proposed) base image from HADOOP-14898
[ https://issues.apache.org/jira/browse/HDFS-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203607#comment-16203607 ] Elek, Marton edited comment on HDFS-12656 at 10/13/17 2:06 PM: --- Patch is uploaded. To test use: {code} docker pull elek/hadoop-runner cd dev-tools/compose/ozone docker-compose up {code} One typo is also fixed in the configuration. was (Author: elek): Patch is uploaded. To test use: ``` docker pull elek/hadoop-runner cd dev-tools/compose/ozone docker-compose up ``` > Ozone: dozone: Use (proposed) base image from HADOOP-14898 > -- > > Key: HDFS-12656 > URL: https://issues.apache.org/jira/browse/HDFS-12656 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12656-HDFS-7240.001.patch > > > The original docker-compose definition of the dockerized ozon cluster uses a > more complex base image (flokkr/hadoop-runner) from the flokkr project > (github.com/flokkr/flokkr) > This patch is to replace this image with a simplified version, which also > includes the source of the script which converts the environment variables to > hadoop XML format. > The simplified version is exactly the same which is proposed to be used as > the baseimage of HADOOP-14898. The source is available from the > HADOOP-14898 issue and the image is uploaded to the dockerhub > (https://hub.docker.com/r/elek/hadoop-runner) > As it is the proposed base image for the official hadoop images, it will be > easier to switch to apache/hadop-runner (when it will be merged). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12656) Ozone: dozone: Use (proposed) base image from HADOOP-14898
[ https://issues.apache.org/jira/browse/HDFS-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12656: Status: Patch Available (was: Open) > Ozone: dozone: Use (proposed) base image from HADOOP-14898 > -- > > Key: HDFS-12656 > URL: https://issues.apache.org/jira/browse/HDFS-12656 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12656-HDFS-7240.001.patch > > > The original docker-compose definition of the dockerized ozon cluster uses a > more complex base image (flokkr/hadoop-runner) from the flokkr project > (github.com/flokkr/flokkr) > This patch is to replace this image with a simplified version, which also > includes the source of the script which converts the environment variables to > hadoop XML format. > The simplified version is exactly the same which is proposed to be used as > the baseimage of HADOOP-14898. The source is available from the > HADOOP-14898 issue and the image is uploaded to the dockerhub > (https://hub.docker.com/r/elek/hadoop-runner) > As it is the proposed base image for the official hadoop images, it will be > easier to switch to apache/hadop-runner (when it will be merged). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12656) Ozone: dozone: Use (proposed) base image from HADOOP-14898
[ https://issues.apache.org/jira/browse/HDFS-12656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12656: Attachment: HDFS-12656-HDFS-7240.001.patch Patch is uploaded. To test use: ``` docker pull elek/hadoop-runner cd dev-tools/compose/ozone docker-compose up ``` > Ozone: dozone: Use (proposed) base image from HADOOP-14898 > -- > > Key: HDFS-12656 > URL: https://issues.apache.org/jira/browse/HDFS-12656 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12656-HDFS-7240.001.patch > > > The original docker-compose definition of the dockerized ozon cluster uses a > more complex base image (flokkr/hadoop-runner) from the flokkr project > (github.com/flokkr/flokkr) > This patch is to replace this image with a simplified version, which also > includes the source of the script which converts the environment variables to > hadoop XML format. > The simplified version is exactly the same which is proposed to be used as > the baseimage of HADOOP-14898. The source is available from the > HADOOP-14898 issue and the image is uploaded to the dockerhub > (https://hub.docker.com/r/elek/hadoop-runner) > As it is the proposed base image for the official hadoop images, it will be > easier to switch to apache/hadop-runner (when it will be merged). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12656) Ozone: dozone: Use (proposed) base image from HADOOP-14898
Elek, Marton created HDFS-12656: --- Summary: Ozone: dozone: Use (proposed) base image from HADOOP-14898 Key: HDFS-12656 URL: https://issues.apache.org/jira/browse/HDFS-12656 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Elek, Marton Assignee: Elek, Marton The original docker-compose definition of the dockerized ozone cluster uses a more complex base image (flokkr/hadoop-runner) from the flokkr project (github.com/flokkr/flokkr). This patch replaces that image with a simplified version, which also includes the source of the script that converts the environment variables to Hadoop XML format. The simplified version is exactly the one proposed to be used as the base image of HADOOP-14898. The source is available from the HADOOP-14898 issue and the image is uploaded to Docker Hub (https://hub.docker.com/r/elek/hadoop-runner). As it is the proposed base image for the official Hadoop images, it will be easier to switch to apache/hadoop-runner (when it is merged).
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203585#comment-16203585 ] Daryn Sharp commented on HDFS-12638: [~shv], you worked on truncate, any further insights? > NameNode exits due to ReplicationMonitor thread received Runtime exception in > ReplicationWork#chooseTargets > --- > > Key: HDFS-12638 > URL: https://issues.apache.org/jira/browse/HDFS-12638 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.8.2 >Reporter: Jiandan Yang > > The active NameNode exited due to an NPE. I can confirm that the BlockCollection passed > in when creating ReplicationWork is null, but I do not know why > BlockCollection is null. Looking through the history, I found that > [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check for > whether BlockCollection is null. > NN logs are as follows: > {code:java} > 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: > ReplicationMonitor thread received Runtime exception. > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744) > at java.lang.Thread.run(Thread.java:834) > {code}
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203577#comment-16203577 ] Daryn Sharp commented on HDFS-12638: There are two likely scenarios: * The block is added to the blocks map with an unlinked inode that is not in the inode map. The only way to add a block to the map is via {{BlockManager#addBlockCollection}}. Truncate, like add block, does this, but it should not have been able to resolve the inode if it's unlinked, and I don't immediately see locking issues. * The lease manager encountered an error and removed the inode but left the blocks intact. Look for a log warning of "Removing lease with an invalid path", because the lease manager catches IOEs and just removes the inode anyway, which can leave the blocks map in an inconsistent state. Very bad.
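A minimal sketch of the defensive check that HDFS-9754 removed, using hypothetical stand-in types rather than the real HDFS classes: blocks whose BlockCollection has already been cleaned up are skipped instead of dereferenced, so the ReplicationMonitor thread survives an orphaned blocks-map entry.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ChooseTargetsSketch {
    // Illustrative stand-ins for the real HDFS types.
    static class Block {
        final long id;
        Block(long id) { this.id = id; }
    }

    static class BlockCollection {
        final String path;
        BlockCollection(String path) { this.path = path; }
    }

    // The blocks map may contain entries whose inode/collection was removed
    // (e.g. by a concurrent truncate + delete), leaving a null mapping.
    static List<Block> computeReplicationWork(Map<Block, BlockCollection> blocksMap) {
        List<Block> scheduled = new ArrayList<>();
        for (Map.Entry<Block, BlockCollection> e : blocksMap.entrySet()) {
            BlockCollection bc = e.getValue();
            if (bc == null) {
                // Without this guard, a chooseTargets-style call would throw
                // NullPointerException and kill the replication thread.
                continue;
            }
            scheduled.add(e.getKey());
        }
        return scheduled;
    }

    public static void main(String[] args) {
        Map<Block, BlockCollection> map = new HashMap<>();
        map.put(new Block(1), new BlockCollection("/user/admin/file"));
        map.put(new Block(2), null); // inode removed, block left behind
        System.out.println(computeReplicationWork(map).size()); // prints 1
    }
}
```

Whether the right fix is restoring such a guard or repairing the lease-manager/truncate interaction that orphans the block is exactly the open question in the discussion above.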
[jira] [Commented] (HDFS-12655) Ozone: use specific names for RPC.Server instances
[ https://issues.apache.org/jira/browse/HDFS-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203568#comment-16203568 ] Hadoop QA commented on HDFS-12655: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 20s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 22s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 491 unchanged - 3 fixed = 491 total (was 494) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 11s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 86m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12655 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892049/HDFS-12655-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8e13aa08773a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 1968333 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21683/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21683/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21683/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT
[jira] [Commented] (HDFS-12556) [SPS] : Block movement analysis should be done in read lock.
[ https://issues.apache.org/jira/browse/HDFS-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203550#comment-16203550 ] Hadoop QA commented on HDFS-12556: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 9s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} HDFS-10285 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 13m 19s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 48s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 23s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}170m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestReadStripedFileWithDecoding | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | |
[jira] [Commented] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203549#comment-16203549 ] Hadoop QA commented on HDFS-12637: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 48s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 139 new + 255 unchanged - 139 fixed = 394 total (was 394) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 23s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}164m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0de40f0 | | JIRA Issue | HDFS-12637 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12892030/HDFS-12637.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9ee8857698cd 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f4fb669 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21682/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21682/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21682/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Comment Edited] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203501#comment-16203501 ] Jiandan Yang edited comment on HDFS-12638 at 10/13/17 12:39 PM: - I found another block with the same problem, and the audit logs for the file to which the block belongs are as follows. {code:java} 2017-10-13 03:26:59,198 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=create src=/user/admin/xx dst=null perm=admin:hadoop:rw-r--r-- proto=rpc 2017-10-13 03:26:59,290 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=truncate src=/user/admin/xx dst=null perm=admin:hadoop:rw-r--r-- proto=rpc 2017-10-13 03:26:59,293 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=delete src=/user/admin/xx dst=null perm=null proto=rpc {code} was (Author: yangjiandan): I found another block with the same problem, and the audit logs for the file to which the block belongs are as follows.
{code:java} 2017-10-13 03:26:59,198 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=create src=/user/admin/ba13ed3a-898d-4b72-a873-1999da5f0a70 dst=null perm=admin:hadoop:rw-r--r-- proto=rpc 2017-10-13 03:26:59,290 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=truncate src=/user/admin/ba13ed3a-898d-4b72-a873-1999da5f0a70 dst=null perm=admin:hadoop:rw-r--r-- proto=rpc 2017-10-13 03:26:59,293 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=delete src=/user/admin/ba13ed3a-898d-4b72-a873-1999da5f0a70 dst=null perm=null proto=rpc {code}
[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203501#comment-16203501 ] Jiandan Yang commented on HDFS-12638: -- I found another block with the same problem, and the audit logs for the file to which the block belongs are as follows. {code:java} 2017-10-13 03:26:59,198 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=create src=/user/admin/ba13ed3a-898d-4b72-a873-1999da5f0a70 dst=null perm=admin:hadoop:rw-r--r-- proto=rpc 2017-10-13 03:26:59,290 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=truncate src=/user/admin/ba13ed3a-898d-4b72-a873-1999da5f0a70 dst=null perm=admin:hadoop:rw-r--r-- proto=rpc 2017-10-13 03:26:59,293 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:SIMPLE) ip=/xx.xxx.xx.xxx cmd=delete src=/user/admin/ba13ed3a-898d-4b72-a873-1999da5f0a70 dst=null perm=null proto=rpc {code}
[jira] [Updated] (HDFS-12655) Ozone: use specific names for RPC.Server instances
[ https://issues.apache.org/jira/browse/HDFS-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12655: Status: Patch Available (was: Open) > Ozone: use specific names for RPC.Server instances > -- > > Key: HDFS-12655 > URL: https://issues.apache.org/jira/browse/HDFS-12655 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12655-HDFS-7240.001.patch, rpcmetrics.png > > > My original motivation is using meaningful names on the SCM web UI. As you > can see on the attached screenshot, we display the metrics for all the RPC > servers (using Hadoop metrics published on JMX). Unfortunately, we can > display only the port number on the web UI, as there are no meaningful names > published on the JMX interface. > After some investigation I found that there is a serverName constructor > parameter in RPC.Server, but it is NOT used currently. > This patch will: > 1. Store the serverName in a field of RPC.Server. > 2. Improve how the serverName is calculated from the protocol class > names (it's typically an anonymous inner class, so I remove the unnecessary > $-s from the name). > 3. Add a new tag to the RpcMetrics based on the serverName field of the > RPC.Server; it will be displayed over JMX. > 4. Add unit tests checking the tag values and the default > classname->servername mapping. > ps: I need it for the Ozone SCM web UI, but let me know if it should be moved to > the HADOOP project/trunk.
[jira] [Updated] (HDFS-12655) Ozone: use specific names for RPC.Server instances
[ https://issues.apache.org/jira/browse/HDFS-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12655: Description: My original motivation is using meaningful names on the SCM web UI. As you can see on the attached screenshot, we display the metrics for all the RPC servers (using Hadoop metrics published on JMX). Unfortunately, we can display only the port number on the web UI, as there are no meaningful names published on the JMX interface. After some investigation I found that there is a serverName constructor parameter in RPC.Server, but it is NOT used currently. This patch will: 1. Store the serverName in a field of RPC.Server. 2. Improve how the serverName is calculated from the protocol class names (it's typically an anonymous inner class, so I remove the unnecessary $-s from the name). 3. Add a new tag to the RpcMetrics based on the serverName field of the RPC.Server; it will be displayed over JMX. 4. Add unit tests checking the tag values and the default classname->servername mapping. ps: I need it for the Ozone SCM web UI, but let me know if it should be moved to the HADOOP project/trunk. was: My original motivation is using meaningful names on the SCM web UI. As you can see on the attached screenshot, we display the metrics for all the RPC servers (using Hadoop metrics published on JMX). Unfortunately, we can display only the port number on the web UI, as there are no meaningful names published on the JMX interface. After some investigation I found that there is a serverName constructor parameter, but it is NOT used currently. This patch will: 1. Store the serverName in a field of RPC.Server. 2. Improve how the serverName is calculated from the protocol class names (it's typically an anonymous inner class, so I remove the unnecessary $-s from the name). 3. Add a new tag to the RpcMetrics (serverName) based on the serverName field of the RPC.Server; it will be displayed over JMX. 4. Add unit tests checking the tag values and the default classname->servername mapping. ps: I need it for the Ozone SCM web UI, but let me know if it should be moved to the HADOOP project/trunk.
[jira] [Updated] (HDFS-12655) Ozone: use specific names for RPC.Server instances
[ https://issues.apache.org/jira/browse/HDFS-12655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12655: Attachment: HDFS-12655-HDFS-7240.001.patch rpcmetrics.png
[jira] [Created] (HDFS-12655) Ozone: use specific names for RPC.Server instances
Elek, Marton created HDFS-12655: --- Summary: Ozone: use specific names for RPC.Server instances Key: HDFS-12655 URL: https://issues.apache.org/jira/browse/HDFS-12655 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Elek, Marton Assignee: Elek, Marton My original motivation is using meaningful names on the scm web ui. As you can see on the attached screenshot we display the metrics for all the RPC servers (using hadoop metrics published on jmx). Unfortunately we can display only the port number on the web ui as there are no meaningful names published on the jmx interface. After some investigation I found that there is a serverName constructor parameter for Rpc.Server but it is NOT used currently. This patch will: 1. Store the serverName in a field of RPC.Server 2. Improve how the serverName is calculated from the protocol class names (it's typically an anonymous inner class, so I remove the unnecessary $-s from the name.) 3. Add a new tag to the RpcMetrics (serverName) based on the serverName field of the Rpc.Server. It will be displayed over JMX. 4. Add unit tests for checking the tag values and the default classname->servername mapping. ps: I need it for Ozone SCM web ui, but let me know if it should be moved to HADOOP project/trunk. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12654) APPEND API call is different in HTTPFS and NameNode REST
Andras Czesznak created HDFS-12654: -- Summary: APPEND API call is different in HTTPFS and NameNode REST Key: HDFS-12654 URL: https://issues.apache.org/jira/browse/HDFS-12654 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs, httpfs, namenode Affects Versions: 3.0.0-beta1, 2.8.0, 2.7.0, 2.6.0 Reporter: Andras Czesznak The APPEND REST API call behaves differently in the NameNode REST and HTTPFS code bases. The NameNode version creates the target file that the new data is appended to if it does not exist at the time the call is issued. The HTTPFS version assumes the target file exists when APPEND is called and can only append the new data; it does not create the target file if it doesn't exist. The two implementations should be standardized; preferably, the HTTPFS version should be modified to execute an implicit CREATE if the target file does not exist. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
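The proposed unification, APPEND implicitly creating a missing target, can be illustrated with a small hypothetical sketch. Plain java.nio is used here only to show the semantics; the real HTTPFS fix would go through the Hadoop FileSystem API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendOrCreate {
    // APPEND with NameNode-REST-like semantics: create the target if it
    // does not exist yet, then append the new data to it.
    public static void append(Path target, byte[] data) throws IOException {
        Files.write(target, data,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

Calling this twice on a path that does not exist first creates the file, then appends, which is the behavior the issue asks HTTPFS to adopt.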
[jira] [Commented] (HDFS-11467) Support ErasureCoding section in OIV XML/ReverseXML
[ https://issues.apache.org/jira/browse/HDFS-11467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203376#comment-16203376 ] Wei-Chiu Chuang commented on HDFS-11467: Hi [~HuafengWang] thanks for initiating this work! Would you please also contribute tests for the new OIV sections? > Support ErasureCoding section in OIV XML/ReverseXML > --- > > Key: HDFS-11467 > URL: https://issues.apache.org/jira/browse/HDFS-11467 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: tools >Affects Versions: 3.0.0-alpha4 >Reporter: Wei-Chiu Chuang >Assignee: Huafeng Wang > Labels: hdfs-ec-3.0-must-do > Attachments: HDFS-11467.001.patch > > > As discussed in HDFS-7859, after ErasureCoding section is added into fsimage, > we would like to also support exporting this section into an XML back and > forth using the OIV tool. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203349#comment-16203349 ] Takanobu Asanuma commented on HDFS-12637: - Thanks for the comment, [~xiaochen]. Yes, that's right. But the random EC policy test is needed for a few reasons which are discussed in HDFS-7866 and HDFS-9962. I would like to quote Andrew's comment from HDFS-9962, with which I agree. bq. We randomized the policy tests to reduce the runtime of the test suite, under the belief that there wasn't much incremental benefit from running all of them every time. In summary, I'm working on this task as follows: * The default policy (RS-6-3) always gets tested. * Create a random EC policy test if the test is long-running (e.g. using a minicluster many times). * If the test is short-running, testing all policies with parameterization would be better. > Extend TestDistributedFileSystemWithECFile with a random EC policy > -- > > Key: HDFS-12637 > URL: https://issues.apache.org/jira/browse/HDFS-12637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12637.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
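The selection strategy above (always test the default policy, plus one randomly chosen non-default policy per run of a long-running test) could be sketched like this. The helper below is hypothetical, not code from the patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomEcPolicyPicker {
    // Pick one policy at random from the non-default policies, so that a
    // long-running test covers the default policy plus one other per run.
    public static String pickNonDefault(List<String> policies,
                                        String defaultPolicy, Random rnd) {
        List<String> others = new ArrayList<>();
        for (String p : policies) {
            if (!p.equals(defaultPolicy)) {
                others.add(p);
            }
        }
        return others.get(rnd.nextInt(others.size()));
    }
}
```

Over many Jenkins runs this spreads coverage across all policies while keeping each individual run's minicluster time bounded.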
[jira] [Updated] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12637: Attachment: HDFS-12637.1.patch Resubmitted the 1st patch since Jenkins didn't run. > Extend TestDistributedFileSystemWithECFile with a random EC policy > -- > > Key: HDFS-12637 > URL: https://issues.apache.org/jira/browse/HDFS-12637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > Attachments: HDFS-12637.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12637) Extend TestDistributedFileSystemWithECFile with a random EC policy
[ https://issues.apache.org/jira/browse/HDFS-12637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-12637: Attachment: (was: HDFS-12637.1.patch) > Extend TestDistributedFileSystemWithECFile with a random EC policy > -- > > Key: HDFS-12637 > URL: https://issues.apache.org/jira/browse/HDFS-12637 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: erasure-coding, test >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12556) [SPS] : Block movement analysis should be done in read lock.
[ https://issues.apache.org/jira/browse/HDFS-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203340#comment-16203340 ] Hadoop QA commented on HDFS-12556: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 1s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 30s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} HDFS-10285 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 36s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 44s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 29s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}209m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 | | | hadoop.hdfs.server.datanode.TestDataNodeUUID | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSClientSocketSize | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | |
[jira] [Comment Edited] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets
[ https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201540#comment-16201540 ] Jiandan Yang edited comment on HDFS-12638 at 10/13/17 9:48 AM: datanode recovery failed because the new block size is Long.MAX_VALUE {code:java} 2017-10-09 19:19:17,054 INFO [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@437346ab] org.apache.hadoop.hdfs.server.datanode.DataNode: NameNode at nn_hostname/xx.xxx.xx.xxx:8020 calls recoverBlock(BP-1721125339-xx.xxx.xx.xxx-1505883414013:blk_1084203820_11907141, targets=[DatanodeInfoWithStorage[xx.xxx.xx.aaa:50010,null,null], DatanodeInfoWithStorage[xx.xxx.xx.bbb:50010,null,null], DatanodeInfoWithStorage[xx.xxx.xx.ccc:50010,null,null]], newGenerationStamp=11907145, newBlock=blk_1084203824_11907145) 2017-10-09 19:19:17,055 INFO [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@437346ab] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: initReplicaRecovery: blk_1084203820_11907141, recoveryId=11907145, replica=FinalizedReplica, blk_1084203820_11907141, FINALIZED getNumBytes() = 7 getBytesOnDisk() = 7 getVisibleLength()= 7 getVolume() = /dump/10/dfs/data/current getBlockFile()= /dump/10/dfs/data/current/BP-1721125339-xx.xxx.xx.xxx-1505883414013/current/finalized/subdir31/subdir3/blk_1084203820 2017-10-09 19:19:17,055 INFO [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@437346ab] org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: initReplicaRecovery: changing replica state for blk_1084203820_11907141 from FINALIZED to RUR 2017-10-09 19:19:17,058 WARN [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@437346ab] org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to updateBlock (newblock=BP-1721125339-xx.xxx.xx.xxx-1505883414013:blk_1084203824_11907145, datanode=DatanodeInfoWithStorage[xx.xxx.xx.aaa:50010,null,null]) org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
rur.getNumBytes() < newlength = 9223372036854775807, rur=ReplicaUnderRecovery, blk_1084203820_11907141, RUR getNumBytes() = 7 getBytesOnDisk() = 7 getVisibleLength()= 7 getVolume() = /dump/9/dfs/data/current getBlockFile()= /dump/9/dfs/data/current/BP-1721125339-xx.xxx.xx.xxx-1505883414013/current/finalized/subdir31/subdir3/blk_1084203820 recoveryId=11907145 original=FinalizedReplica, blk_1084203820_11907141, FINALIZED getNumBytes() = 7 getBytesOnDisk() = 7 getVisibleLength()= 7 getVolume() = /dump/9/dfs/data/current getBlockFile()= /dump/9/dfs/data/current/BP-1721125339-xx.xxx.xx.xxx-1505883414013/current/finalized/subdir31/subdir3/blk_1084203820 at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.updateReplicaUnderRecovery(FsDatasetImpl.java:2736) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.updateReplicaUnderRecovery(FsDatasetImpl.java:2678) at org.apache.hadoop.hdfs.server.datanode.DataNode.updateReplicaUnderRecovery(DataNode.java:2776) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.updateReplicaUnderRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:78) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3107) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:846) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:789) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2457) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483) at org.apache.hadoop.ipc.Client.call(Client.java:1429) at 
org.apache.hadoop.ipc.Client.call(Client.java:1339) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy22.updateReplicaUnderRecovery(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolTranslatorPB.updateReplicaUnderRecovery(InterDatanodeProtocolTranslatorPB.java:112) at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$BlockRecord.updateReplicaUnderRecovery(BlockRecoveryWorker.java:77) at
[jira] [Updated] (HDFS-12555) HDFS federation should support configure secondary directory
[ https://issues.apache.org/jira/browse/HDFS-12555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] luoge123 updated HDFS-12555: Attachment: HDFS-12555.002.patch > HDFS federation should support configure secondary directory > - > > Key: HDFS-12555 > URL: https://issues.apache.org/jira/browse/HDFS-12555 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation > Environment: 2.6.0-cdh5.10.0 >Reporter: luoge123 > Fix For: 2.6.0 > > Attachments: HDFS-12555.001.patch, HDFS-12555.002.patch > > > HDFS federation supports multiple namenodes that horizontally scale the file > system namespace. As the amount of data grows, even with a single group of > namenodes managing a single directory, the namenode can still hit performance > bottlenecks. In order to reduce the pressure on the namenode, we can split out > a secondary directory and manage it with a new namenode. This is > transparent to users. > For example, if nn1 only manages the /user directory, when nn1 hits > performance bottlenecks we can split out the /user/hive directory and use nn2 > to manage it. > That means core-site.xml should support the following configuration. > >fs.viewfs.mounttable.nsX.link./user >hdfs://nn1:8020/user > > >fs.viewfs.mounttable.nsX.link./user/hive >hdfs://nn2:8020/user/hive > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
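The mount-table entries quoted above lost their XML tags in the email; assuming the standard core-site.xml property format, the intended configuration presumably looks like:

```xml
<!-- ViewFs mount table: /user stays on nn1, /user/hive is split out to nn2 -->
<property>
  <name>fs.viewfs.mounttable.nsX.link./user</name>
  <value>hdfs://nn1:8020/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.nsX.link./user/hive</name>
  <value>hdfs://nn2:8020/user/hive</value>
</property>
```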
[jira] [Commented] (HDFS-12558) Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM web ui
[ https://issues.apache.org/jira/browse/HDFS-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203195#comment-16203195 ] Hadoop QA commented on HDFS-12558: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 28s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 28m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12558 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12891995/HDFS-12558-HDFS-7240.002.patch | | Optional Tests | asflicense shadedclient | | uname | Linux 3f3703eaf5cc 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 1968333 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21680/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM > web ui > - > > Key: HDFS-12558 > URL: https://issues.apache.org/jira/browse/HDFS-12558 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12558-HDFS-7240.001.patch, > HDFS-12558-HDFS-7240.002.patch, after.png, before.png > > > In Ozone (SCM/KSM) web ui we have additional visualization if > rpc.metrics.percentiles.intervals are enabled. > But according to the feedback it's a little bit confusing what it is exactly. > I would like to improve it and clarify how it works. > 1. I will add a footnote explaining that these are not rolling windows but just a > display of the last fixed window. > 2. I would like to rearrange the layout. As the different windows are > independent, I would show them on different lines and group by the intervals > and not by RpcQueueTime/RpcProcessingTime. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11590) Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or not in the cache
[ https://issues.apache.org/jira/browse/HDFS-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16203189#comment-16203189 ] Nicolas Fraison commented on HDFS-11590: [~daryn] let me know if it's fine now. thanks > Nodemanagers have DDoS our namenode due to HDFS_DELEGATION_TOKEN expired or > not in the cache > > > Key: HDFS-11590 > URL: https://issues.apache.org/jira/browse/HDFS-11590 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.6.0 > Environment: Releases: > cloudera release cdh-5.5.0 > openjdk version "1.8.0_91" > linux centos6 servers > Cluster info: > Namenode and resourcemanager in HA with kerberos authentication > More than 1300 datanodes/nodemanagers >Reporter: Nicolas Fraison >Priority: Minor > Attachments: HDFS-11590.001.patch, HDFS-11590.002.patch, > HDFS-11590.patch > > > We have faced some huge slowdowns on our namenode due to all our nodemanagers > continuing to retry to renew a lease and reconnecting to the namenode every > second for 1 hour due to some HDFS_DELEGATION_TOKEN being expired or not > in the cache. > The number of time_wait connections on our namenode was stuck at the configured > maximum of 250k during this period due to the reconnections each time. 
> {code} > 2017-03-02 11:51:42,817 INFO > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization successful for appattempt_1488396860014_156103_01 > (auth:TOKEN) for protocol=interface > org.apache.hadoop.yarn.api.ContainerManagementProtocolPB > 2017-03-02 11:51:43,414 INFO > SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: > Authorization successful for appattempt_1488396860014_156120_01 > (auth:TOKEN) for protocol=interface > org.apache.hadoop.yarn.api.ContainerManagementProtocolPB > 2017-03-02 11:51:51,994 WARN > org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException > as:prediction (auth:SIMPLE) > cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > 2017-03-02 11:51:51,995 WARN org.apache.hadoop.ipc.Client: Exception > encountered while connecting to the server : > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > 2017-03-02 11:51:51,995 WARN > org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException > as:prediction (auth:SIMPLE) > cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > 2017-03-02 11:51:51,995 WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to > renew lease for [DFSClient_NONMAPREDUCE_1560141256_4187204] for 30 seconds. > Will retry shortly ... 
> token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) is expired > at org.apache.hadoop.ipc.Client.call(Client.java:1472) > at org.apache.hadoop.ipc.Client.call(Client.java:1403) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at com.sun.proxy.$Proxy20.renewLease(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571) > at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) > at com.sun.proxy.$Proxy21.renewLease(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:921) > at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423) > at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448) > at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71) > at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304) > at java.lang.Thread.run(Thread.java:745) > 2017-03-02 12:51:22,032 WARN > org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException > as:prediction (auth:SIMPLE) > cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken): > token (HDFS_DELEGATION_TOKEN token 111018676 for prediction) can't be found > in
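One common mitigation for the retry storm described above is to back off between renewal attempts instead of reconnecting every second. The helper below is a minimal, hypothetical sketch of such a delay schedule, not code from the attached patches:

```java
public class RetryBackoff {
    // Exponential backoff: the delay doubles per attempt, capped at capMillis,
    // so a failing renewer cannot hammer the namenode once per second for an hour.
    public static long delayMillis(int attempt, long baseMillis, long capMillis) {
        // Clamp the shift so the multiplication cannot overflow.
        int shift = Math.min(attempt, 20);
        long delay = baseMillis << shift;
        return Math.min(delay, capMillis);
    }
}
```

With a 1 s base and a 60 s cap, the first retries stay responsive while a persistently expired token settles at one reconnection per minute instead of one per second.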
[jira] [Updated] (HDFS-12558) Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM web ui
[ https://issues.apache.org/jira/browse/HDFS-12558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HDFS-12558: Attachment: HDFS-12558-HDFS-7240.002.patch Resubmitting for Jenkins. Same patch. > Ozone: Clarify the meaning of rpc.metrics.percentiles.intervals on KSM/SCM > web ui > - > > Key: HDFS-12558 > URL: https://issues.apache.org/jira/browse/HDFS-12558 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HDFS-12558-HDFS-7240.001.patch, > HDFS-12558-HDFS-7240.002.patch, after.png, before.png > > > In Ozone (SCM/KSM) web ui we have additional visualization if > rpc.metrics.percentiles.intervals are enabled. > But according to the feedback it's a little bit confusing what it is exactly. > I would like to improve it and clarify how it works. > 1. I will add a footnote explaining that these are not rolling windows but just a > display of the last fixed window. > 2. I would like to rearrange the layout. As the different windows are > independent, I would show them on different lines and group by the intervals > and not by RpcQueueTime/RpcProcessingTime. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12415) Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails
[ https://issues.apache.org/jira/browse/HDFS-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12415: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks [~vagarychen] and [~cheersyang] for the reviews. I have committed this to HDFS-7240 branch. > Ozone: TestXceiverClientManager and TestAllocateContainer occasionally fails > > > Key: HDFS-12415 > URL: https://issues.apache.org/jira/browse/HDFS-12415 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-7240 >Reporter: Weiwei Yang >Assignee: Mukul Kumar Singh > Fix For: HDFS-7240 > > Attachments: HDFS-12415-HDFS-7240.001.patch, > HDFS-12415-HDFS-7240.002.patch, HDFS-12415-HDFS-7240.003.patch, > HDFS-12415-HDFS-7240.004.patch, HDFS-12415-HDFS-7240.005.patch > > > TestXceiverClientManager seems to be occasionally failing in some jenkins > jobs, > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.ozone.scm.node.SCMNodeManager.getNodeStat(SCMNodeManager.java:828) > at > org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.hasEnoughSpace(SCMCommonPolicy.java:147) > at > org.apache.hadoop.ozone.scm.container.placement.algorithms.SCMCommonPolicy.lambda$chooseDatanodes$0(SCMCommonPolicy.java:125) > {noformat} > see more from [this > report|https://builds.apache.org/job/PreCommit-HDFS-Build/21065/testReport/] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org