[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS
[ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602463#comment-14602463 ] Zhe Zhang commented on HDFS-7285: - Thanks Walter! Will address in the next rev. Erasure Coding Support inside HDFS -- Key: HDFS-7285 URL: https://issues.apache.org/jira/browse/HDFS-7285 Project: Hadoop HDFS Issue Type: New Feature Reporter: Weihua Jiang Assignee: Zhe Zhang Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, HDFS-EC-Merge-PoC-20150624.patch, HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, fsimage-analysis-20150105.pdf Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing data reliability, compared to the existing HDFS 3-replica approach. For example, with a 10+4 Reed-Solomon coding, we can tolerate the loss of any 4 blocks with a storage overhead of only 40%. This makes EC a very attractive alternative for big data storage, particularly for cold data. Facebook had a related open source project called HDFS-RAID. It used to be one of the contributed packages in HDFS but was removed in Hadoop 2.0 for maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends on MapReduce to do encoding and decoding tasks; 2) it can only be used for cold files that are not intended to be appended anymore; 3) the pure-Java EC coding implementation is extremely slow in practical use. For these reasons, it might not be a good idea to simply bring HDFS-RAID back. We (Intel and Cloudera) are working on a design to build EC into HDFS with no external dependencies, making it self-contained and independently maintained. 
This design lays the EC feature on top of the storage type support and is designed to be compatible with existing HDFS features such as caching, snapshots, encryption, and high availability. It will also support different EC coding schemes, implementations, and policies for different deployment scenarios. By utilizing advanced libraries (e.g., the Intel ISA-L library), an implementation can greatly improve the performance of EC encoding/decoding and make the EC solution even more attractive. We will post the design document soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
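The overhead figures quoted in the description above can be checked with a few lines of plain Java. This is a minimal sketch of the arithmetic only; the class and method names are hypothetical, not HDFS code:

```java
// Illustrative overhead arithmetic for the schemes discussed above.
// Names are hypothetical; this is not part of HDFS.
public class EcOverhead {

    // A k+m Reed-Solomon scheme stores m parity units per k data units
    // and survives the loss of any m units.
    public static double rsOverheadPercent(int dataUnits, int parityUnits) {
        return 100.0 * parityUnits / dataUnits;
    }

    // n-way replication stores (n - 1) extra copies per block.
    public static double replicationOverheadPercent(int replicas) {
        return 100.0 * (replicas - 1);
    }

    public static void main(String[] args) {
        // RS(10,4): tolerates 4 lost blocks at 40% overhead.
        System.out.println("RS(10,4) overhead: " + rsOverheadPercent(10, 4) + "%");
        // HDFS default 3-replica: tolerates 2 lost copies at 200% overhead.
        System.out.println("3-replica overhead: " + replicationOverheadPercent(3) + "%");
    }
}
```

So for the same fault tolerance class, RS(10,4) cuts the extra storage from 200% to 40%, which is the cold-data argument made above.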
[jira] [Updated] (HDFS-8659) Block scanner INFO message is spamming logs
[ https://issues.apache.org/jira/browse/HDFS-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HDFS-8659: Attachment: HDFS-8659.002.patch Block scanner INFO message is spamming logs --- Key: HDFS-8659 URL: https://issues.apache.org/jira/browse/HDFS-8659 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.1 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Labels: supportability Attachments: HDFS-8659.001.patch, HDFS-8659.002.patch We are seeing the following message spam the DN log: {quote} 2015-06-16 08:51:10,566 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Not scanning suspicious block BP-943360218-10.106.148.16-1416571803827:blk_1083076388_9372245 on DS-2ec89056-afb0-459e-b4e0-ac5eaececda3, because the block scanner is disabled. {quote} Create this jira to change this and other relevant messages to debug level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
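The fix direction here is the standard log-level demotion pattern, sketched below with java.util.logging as a stand-in for Hadoop's logging API (the class and message are illustrative, not taken from the patch):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Stand-in demo: log the per-block message at a level that is off by
// default (FINE ~ DEBUG) so it no longer floods the DataNode log, but an
// operator can still raise the logger level to see it when diagnosing.
public class ScannerLogDemo {

    private static final Logger LOG = Logger.getLogger("BlockScanner");

    static void reportSkippedBlock(String block) {
        // Was effectively LOG.info(...) before; the guard also avoids
        // building the message string when the level is disabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Not scanning suspicious block " + block
                + ", because the block scanner is disabled.");
        }
    }

    public static void main(String[] args) {
        reportSkippedBlock("blk_1083076388_9372245"); // silent at the default INFO level
    }
}
```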
[jira] [Commented] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets
[ https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602478#comment-14602478 ] Hadoop QA commented on HDFS-8661: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 18m 41s | Pre-patch HDFS-7240 compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 4 new or modified test files. | | {color:green}+1{color} | javac | 7m 49s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 58s | There were no new javadoc warning messages. | | {color:red}-1{color} | release audit | 0m 16s | The applied patch generated 1 release audit warnings. | | {color:red}-1{color} | checkstyle | 2m 23s | The applied patch generated 7 new checkstyle issues (total was 400, now 404). | | {color:red}-1{color} | whitespace | 0m 2s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. | | {color:green}+1{color} | install | 1m 36s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 3m 26s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 23s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 167m 49s | Tests failed in hadoop-hdfs. 
| | | | 216m 2s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting | | Timed out tests | org.apache.hadoop.hdfs.server.mover.TestMover | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742017/HDFS-8661-HDFS-7240.02.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | HDFS-7240 / 845a710 | | Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/11494/artifact/patchprocess/patchReleaseAuditProblems.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/11494/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/11494/artifact/patchprocess/whitespace.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11494/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11494/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11494/console | This message was automatically generated. DataNode should filter the set of NameSpaceInfos passed to Datasets --- Key: HDFS-8661 URL: https://issues.apache.org/jira/browse/HDFS-8661 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode Affects Versions: HDFS-7240 Reporter: Arpit Agarwal Assignee: Arpit Agarwal Attachments: HDFS-8661-HDFS-7240.01.patch, HDFS-8661-HDFS-7240.02.patch {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset when adding new volumes. This list should be filtered by the correct NodeType(s) for each dataset. e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block pools and Ozone datasets would be notified of Ozone block pool(s). 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8659) Block scanner INFO message is spamming logs
[ https://issues.apache.org/jira/browse/HDFS-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602479#comment-14602479 ] Yongjun Zhang commented on HDFS-8659: - Thanks [~cmccabe] for the review, and [~brahmareddy] for a similar comment in email. I attached rev 002 to address the comment. I also found that rev 001 somehow missed an important change in BlockScanner (though I remember making that change last time); it is added back in rev 002. Block scanner INFO message is spamming logs --- Key: HDFS-8659 URL: https://issues.apache.org/jira/browse/HDFS-8659 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.1 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Labels: supportability Attachments: HDFS-8659.001.patch, HDFS-8659.002.patch We are seeing the following message spam the DN log: {quote} 2015-06-16 08:51:10,566 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Not scanning suspicious block BP-943360218-10.106.148.16-1416571803827:blk_1083076388_9372245 on DS-2ec89056-afb0-459e-b4e0-ac5eaececda3, because the block scanner is disabled. {quote} Create this jira to change this and other relevant messages to debug level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8670) Better to exclude decommissioned nodes for namenode NodeUsage JMX
[ https://issues.apache.org/jira/browse/HDFS-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602486#comment-14602486 ] J.Andreina commented on HDFS-8670: -- I would like to work on this issue. [~mingma], please reassign it to yourself if you have already started working on this. Better to exclude decommissioned nodes for namenode NodeUsage JMX - Key: HDFS-8670 URL: https://issues.apache.org/jira/browse/HDFS-8670 Project: Hadoop HDFS Issue Type: Bug Reporter: Ming Ma The namenode NodeUsage JMX reports the Max, Median, Min, and Standard Deviation of DataNode usage, and it currently includes decommissioned nodes in the calculation. However, given that the balancer doesn't work on decommissioned nodes, and nodes can sometimes stay in the decommissioned state for a long time, it might be better to exclude decommissioned nodes from the metrics calculation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
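The proposed change amounts to filtering decommissioned DataNodes out before computing the usage statistics. A sketch with a made-up Node type (the real code would operate on DatanodeDescriptor state; only Max is shown, the other statistics would be filtered the same way):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: exclude decommissioned nodes from the usage stats
// that back the NodeUsage JMX bean. Node is a stand-in type, not HDFS code.
public class NodeUsageDemo {

    public static class Node {
        final double usedPercent;
        final boolean decommissioned;
        public Node(double usedPercent, boolean decommissioned) {
            this.usedPercent = usedPercent;
            this.decommissioned = decommissioned;
        }
    }

    public static double maxUsage(List<Node> nodes) {
        return nodes.stream()
            .filter(n -> !n.decommissioned)   // the proposed exclusion
            .mapToDouble(n -> n.usedPercent)
            .max().orElse(0.0);
    }

    public static void main(String[] args) {
        List<Node> nodes = Arrays.asList(
            new Node(55.0, false),
            new Node(98.0, true),   // parked in decommissioned state; skews Max
            new Node(60.0, false));
        System.out.println("Max usage over live nodes: " + maxUsage(nodes) + "%");
    }
}
```

Without the filter, the long-decommissioned node at 98% would dominate Max even though the balancer can never fix it.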
[jira] [Assigned] (HDFS-8670) Better to exclude decommissioned nodes for namenode NodeUsage JMX
[ https://issues.apache.org/jira/browse/HDFS-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.Andreina reassigned HDFS-8670: Assignee: J.Andreina Better to exclude decommissioned nodes for namenode NodeUsage JMX - Key: HDFS-8670 URL: https://issues.apache.org/jira/browse/HDFS-8670 Project: Hadoop HDFS Issue Type: Bug Reporter: Ming Ma Assignee: J.Andreina The namenode NodeUsage JMX reports the Max, Median, Min, and Standard Deviation of DataNode usage, and it currently includes decommissioned nodes in the calculation. However, given that the balancer doesn't work on decommissioned nodes, and nodes can sometimes stay in the decommissioned state for a long time, it might be better to exclude decommissioned nodes from the metrics calculation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8659) Block scanner INFO message is spamming logs
[ https://issues.apache.org/jira/browse/HDFS-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602488#comment-14602488 ] Brahma Reddy Battula commented on HDFS-8659: [~yzhangal] thanks for updating the patch. Rev 002 LGTM, +1 (non-binding). Block scanner INFO message is spamming logs --- Key: HDFS-8659 URL: https://issues.apache.org/jira/browse/HDFS-8659 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.1 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Labels: supportability Attachments: HDFS-8659.001.patch, HDFS-8659.002.patch We are seeing the following message spam the DN log: {quote} 2015-06-16 08:51:10,566 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Not scanning suspicious block BP-943360218-10.106.148.16-1416571803827:blk_1083076388_9372245 on DS-2ec89056-afb0-459e-b4e0-ac5eaececda3, because the block scanner is disabled. {quote} Create this jira to change this and other relevant messages to debug level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8669) Erasure Coding: handle missing internal block locations in DFSStripedInputStream
[ https://issues.apache.org/jira/browse/HDFS-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602593#comment-14602593 ] Walter Su commented on HDFS-8669: - Patch looks good. It passed a simple local test. Some minor issues: 1. should be {{>}}: {code} if (alignedStripe.missingChunksNum > parityBlkNum) { //checkMissingBlocks() {code} 2. should check missing blocks after the first round of reads. You added checkMissingBlocks() at the end of readDataForDecoding(); how about adding it at the end of readParityChunks(int) as well? {code} void readStripe() throws IOException { for (int i = 0; i < dataBlkNum; i++) {...} + checkMissingBlocks(); // There are missing block locations at this stage. Thus we need to read {code} Erasure Coding: handle missing internal block locations in DFSStripedInputStream Key: HDFS-8669 URL: https://issues.apache.org/jira/browse/HDFS-8669 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Jing Zhao Assignee: Jing Zhao Attachments: HDFS-8669.000.patch Currently DFSStripedInputStream assumes we always have complete internal block location information, i.e., we can always get all the DataNodes for a striped block group. In a lot of scenarios the client cannot get complete block location info, e.g., some internal blocks are missing and the NameNode has not finished the recovery yet. We should add functionality to handle missing block locations in DFSStripedInputStream. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
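The invariant behind the comparison Walter points at can be sketched in isolation: a stripe remains decodable only while no more chunks are missing than there are parity blocks. This is a hedged, self-contained illustration; the constant and method names echo the snippet but are not the actual HDFS code:

```java
import java.io.IOException;

// Sketch of the decodability limit for a striped read, assuming RS(10,4).
// Not HDFS code; names mirror the snippet above for readability only.
public class StripeCheckDemo {

    public static final int PARITY_BLK_NUM = 4;

    // Decodable while missing chunks do not exceed the parity count;
    // one more missing chunk and the stripe is unrecoverable.
    public static boolean decodable(int missingChunksNum) {
        return missingChunksNum <= PARITY_BLK_NUM;
    }

    public static void checkMissingBlocks(int missingChunksNum) throws IOException {
        if (missingChunksNum > PARITY_BLK_NUM) {
            throw new IOException(missingChunksNum + " chunks missing; at most "
                + PARITY_BLK_NUM + " are recoverable");
        }
    }

    public static void main(String[] args) throws IOException {
        checkMissingBlocks(4); // exactly at the limit: still decodable
        System.out.println("decodable(5) = " + decodable(5));
    }
}
```

This is why the strict ">" matters: with ">=" the check would reject a stripe that is still exactly recoverable.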
[jira] [Commented] (HDFS-8627) NPE thrown if unable to fetch token from Namenode
[ https://issues.apache.org/jira/browse/HDFS-8627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603146#comment-14603146 ] Hadoop QA commented on HDFS-8627: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 17m 32s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. | | {color:green}+1{color} | javac | 7m 26s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 36s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 2m 17s | There were no new checkstyle issues. | | {color:red}-1{color} | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. | | {color:green}+1{color} | install | 1m 36s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 3m 15s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 14s | Pre-build of native portion | | {color:green}+1{color} | hdfs tests | 161m 42s | Tests passed in hadoop-hdfs. 
| | | | 207m 34s | | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742110/HDFS-8627.1.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 8ef07f7 | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/11499/artifact/patchprocess/whitespace.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11499/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11499/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11499/console | This message was automatically generated. NPE thrown if unable to fetch token from Namenode - Key: HDFS-8627 URL: https://issues.apache.org/jira/browse/HDFS-8627 Project: Hadoop HDFS Issue Type: Bug Reporter: J.Andreina Assignee: J.Andreina Attachments: HDFS-8627.1.patch DelegationTokenFetcher#saveDelegationToken missed checking whether the token is null. {code} Token<?> 
token = fs.getDelegationToken(renewer); Credentials cred = new Credentials(); cred.addToken(token.getKind(), token); {code} {noformat} XX:~/hadoop/namenode/bin ./hdfs fetchdt --renewer Rex /home/REX/file1 Exception in thread "main" java.lang.NullPointerException at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.saveDelegationToken(DelegationTokenFetcher.java:181) at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher$1.run(DelegationTokenFetcher.java:126) at java.security.AccessController.doPrivileged(AccessController.java:314) at javax.security.auth.Subject.doAs(Subject.java:572) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666) at org.apache.hadoop.hdfs.tools.DelegationTokenFetcher.main(DelegationTokenFetcher.java:114) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
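The fix direction is to guard the returned token before dereferencing it. This sketch stubs out the Token type (the real code uses org.apache.hadoop.security.token.Token) and is not the actual patch:

```java
import java.io.IOException;

// Sketch only: fail with a clear message instead of an NPE when the
// NameNode returns no delegation token. Token is a stub type here.
public class TokenNullCheckDemo {

    public static class Token {
        public String getKind() { return "HDFS_DELEGATION_TOKEN"; }
    }

    // A null token is possible, e.g. when security is disabled.
    public static boolean isUsable(Token token) {
        return token != null;
    }

    public static String saveDelegationToken(Token token) throws IOException {
        if (!isUsable(token)) {
            // The missing guard from the snippet above.
            throw new IOException("NameNode returned no delegation token");
        }
        return token.getKind(); // previously dereferenced unconditionally -> NPE
    }

    public static void main(String[] args) throws IOException {
        System.out.println(saveDelegationToken(new Token()));
        try {
            saveDelegationToken(null);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```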
[jira] [Updated] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets
[ https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-8661: Attachment: HDFS-8661-HDFS-7240.03.patch v3 patch fixes checkstyle issues. DataNode should filter the set of NameSpaceInfos passed to Datasets --- Key: HDFS-8661 URL: https://issues.apache.org/jira/browse/HDFS-8661 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode Affects Versions: HDFS-7240 Reporter: Arpit Agarwal Assignee: Arpit Agarwal Attachments: HDFS-8661-HDFS-7240.01.patch, HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset when adding new volumes. This list should be filtered by the correct NodeType(s) for each dataset. e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block pools and Ozone datasets would be notified of Ozone block pool(s). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8659) Block scanner INFO message is spamming logs
[ https://issues.apache.org/jira/browse/HDFS-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602717#comment-14602717 ] Hadoop QA commented on HDFS-8659: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 18m 26s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:green}+1{color} | javac | 7m 53s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 10m 0s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 2m 26s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 38s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 3m 24s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 21s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 162m 30s | Tests failed in hadoop-hdfs. 
| | | | 210m 39s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.hdfs.server.namenode.TestCacheDirectives | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742066/HDFS-8659.002.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 8ef07f7 | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11496/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11496/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11496/console | This message was automatically generated. Block scanner INFO message is spamming logs --- Key: HDFS-8659 URL: https://issues.apache.org/jira/browse/HDFS-8659 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.1 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Labels: supportability Attachments: HDFS-8659.001.patch, HDFS-8659.002.patch We are seeing the following message spam the DN log: {quote} 2015-06-16 08:51:10,566 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Not scanning suspicious block BP-943360218-10.106.148.16-1416571803827:blk_1083076388_9372245 on DS-2ec89056-afb0-459e-b4e0-ac5eaececda3, because the block scanner is disabled. {quote} Create this jira to change this and other relevant messages to debug level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8674) Improve performance of postponed block scans
Daryn Sharp created HDFS-8674: - Summary: Improve performance of postponed block scans Key: HDFS-8674 URL: https://issues.apache.org/jira/browse/HDFS-8674 Project: Hadoop HDFS Issue Type: Improvement Components: HDFS Affects Versions: 2.6.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical When a standby goes active, it marks all nodes as stale which will cause block invalidations for over-replicated blocks to be queued until full block reports are received from the nodes with the block. The replication monitor scans the queue with O(N) runtime. It picks a random offset and iterates through the set to randomize blocks scanned. The result is devastating when a cluster loses multiple nodes during a rolling upgrade. Re-replication occurs, the nodes come back, the excess block invalidations are postponed. Rescanning just 2k blocks out of millions of postponed blocks may take multiple seconds. During the scan, the write lock is held which stalls all other processing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
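The cost Daryn describes comes from the fact that "start at a random offset" in a hash-based set still means advancing an iterator one element at a time. A small self-contained demonstration of that (illustrative only; the real structure is the NameNode's postponed-misreplicated-blocks set):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

// Demonstrates why scanning a small batch from a random offset into a
// hash set is O(N): the iterator must walk every entry before the offset.
public class PostponedScanDemo {

    // Returns the number of iterator steps taken to reach `offset` and
    // then scan up to `limit` further entries.
    public static long stepsToScan(Set<Long> postponed, int offset, int limit) {
        long steps = 0;
        Iterator<Long> it = postponed.iterator();
        for (int i = 0; i < offset && it.hasNext(); i++) { it.next(); steps++; }
        for (int i = 0; i < limit && it.hasNext(); i++) { it.next(); steps++; }
        return steps;
    }

    public static void main(String[] args) {
        Set<Long> blocks = new HashSet<>();
        for (long b = 0; b < 100_000; b++) blocks.add(b);
        // Scanning just 2,000 blocks from offset 90,000 walks 92,000 entries.
        System.out.println(stepsToScan(blocks, 90_000, 2_000)); // prints 92000
    }
}
```

With millions of postponed blocks this walk happens under the namesystem write lock, which is the stall the issue describes.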
[jira] [Commented] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
[ https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603138#comment-14603138 ] Ming Ma commented on HDFS-8656: --- +1. Thanks Andrew. The unit test failures are unrelated. Preserve compatibility of ClientProtocol#rollingUpgrade after finalization -- Key: HDFS-8656 URL: https://issues.apache.org/jira/browse/HDFS-8656 Project: Hadoop HDFS Issue Type: Bug Components: rolling upgrades Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Critical Attachments: hdfs-8656.001.patch, hdfs-8656.002.patch, hdfs-8656.003.patch, hdfs-8656.004.patch HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after finalization, so the DNs can differentiate between rollback and a finalization. However, this breaks compatibility for the user facing APIs, which always expect a null after finalization. Let's fix this and edify it in unit tests. As an additional improvement, isFinalized and isStarted are part of the Java API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose these booleans so JMX users don't need to do the != 0 check that possibly exposes our implementation details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8676) Delayed rolling upgrade finalization can cause heartbeat expiration
Kihwal Lee created HDFS-8676: Summary: Delayed rolling upgrade finalization can cause heartbeat expiration Key: HDFS-8676 URL: https://issues.apache.org/jira/browse/HDFS-8676 Project: Hadoop HDFS Issue Type: Bug Reporter: Kihwal Lee Priority: Critical In big busy clusters where the deletion rate is also high, a lot of blocks can pile up in the datanode trash directories until an upgrade is finalized. When it is finally finalized, the deletion of trash is done in the service actor thread's context synchronously. This blocks the heartbeat and can cause heartbeat expiration. We have seen a namenode losing hundreds of nodes after a delayed upgrade finalization. The deletion of trash directories should be made asynchronous. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
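The proposed direction (making trash deletion asynchronous) can be sketched with a plain JDK executor; all names here are made up for illustration, this is not the eventual patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: hand the potentially huge trash-directory deletion to a
// background thread so the heartbeat path returns immediately instead
// of blocking until deletion completes.
public class AsyncTrashDemo {

    private static final ExecutorService CLEANER =
        Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "trash-cleaner");
            t.setDaemon(true); // don't keep the JVM alive for cleanup work
            return t;
        });

    // Returns a Future so callers that care (e.g. tests) can still wait.
    public static Future<?> onUpgradeFinalized(Runnable deleteTrash) {
        return CLEANER.submit(deleteTrash);
    }

    // Convenience for demos/tests: submit and block until done.
    public static boolean runAndWait(Runnable deleteTrash) {
        try {
            onUpgradeFinalized(deleteTrash).get();
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        runAndWait(() -> System.out.println("deleting trash directories in background"));
    }
}
```

In the sketch, the service actor thread would call onUpgradeFinalized() and carry on heartbeating; only the demo waits on the Future.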
[jira] [Commented] (HDFS-8675) IBRs from dead DNs go into infinite loop
[ https://issues.apache.org/jira/browse/HDFS-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603154#comment-14603154 ] Daryn Sharp commented on HDFS-8675: --- The solution isn't as simple as replacing the IOE with UnregisteredNodeException, since the DN will then commit suicide. Handling the exception will require buffering the IBR (or any other calls this may affect), re-registering, and resending the rejected message. The issue might be related to network problems, or to defects that stall BPOfferService, like the synchronous clearing of the trash after RU finalization. IBRs from dead DNs go into infinite loop Key: HDFS-8675 URL: https://issues.apache.org/jira/browse/HDFS-8675 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Daryn Sharp If the DN sends an IBR after the NN declares it dead, the NN returns an IOE of unregistered or dead. The DN catches the IOE, ignores it, and infinitely loops spamming the NN with retries. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8579) Update HDFS usage with missing options
[ https://issues.apache.org/jira/browse/HDFS-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603126#comment-14603126 ] Hadoop QA commented on HDFS-8579: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 15m 3s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:green}+1{color} | javac | 7m 40s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 46s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 22s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | shellcheck | 0m 6s | There were no new shellcheck (v0.3.3) issues. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 32s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | native | 3m 17s | Pre-build of native portion | | {color:green}+1{color} | hdfs tests | 159m 26s | Tests passed in hadoop-hdfs. 
| | | | 197m 47s | | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742113/HDFS-8579-trunk-1.patch | | Optional Tests | javadoc javac unit shellcheck | | git revision | trunk / 8ef07f7 | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11498/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11498/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11498/console | This message was automatically generated. Update HDFS usage with missing options -- Key: HDFS-8579 URL: https://issues.apache.org/jira/browse/HDFS-8579 Project: Hadoop HDFS Issue Type: Bug Reporter: J.Andreina Assignee: J.Andreina Priority: Minor Attachments: HDFS-8579-branch-2.7-1.patch, HDFS-8579-trunk-1.patch Update hdfs usage with missing options (fetchdt and debug) {noformat} 1 ./hdfs fetchdt fetchdt <opts> <token file> Options: --webservice <url> Url to contact NN on --renewer <name> Name of the delegation token renewer --cancel Cancel the delegation token --renew Renew the delegation token. Delegation token must have been fetched using the --renewer <name> option. --print Print the delegation token 2 ./hdfs debug Usage: hdfs debug <command> [arguments] verify [-meta <metadata-file>] [-block <block-file>] recoverLease [-path <path>] [-retries <num-retries>] {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-8675) IBRs from dead DNs go into infinite loop
Daryn Sharp created HDFS-8675: - Summary: IBRs from dead DNs go into infinite loop Key: HDFS-8675 URL: https://issues.apache.org/jira/browse/HDFS-8675 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Daryn Sharp If the DN sends an IBR after the NN declares it dead, the NN returns an IOE of unregistered or dead. The DN catches the IOE, ignores it, and infinitely loops spamming the NN with retries. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8673) HDFS reports file already exists if there is a file/dir name end with ._COPYING_
[ https://issues.apache.org/jira/browse/HDFS-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated HDFS-8673: -- Assignee: (was: Chen He) HDFS reports file already exists if there is a file/dir name end with ._COPYING_ Key: HDFS-8673 URL: https://issues.apache.org/jira/browse/HDFS-8673 Project: Hadoop HDFS Issue Type: Bug Components: fs Affects Versions: 2.7.0 Reporter: Chen He The CLI uses CommandWithDestination.java, which appends ._COPYING_ to the tail of the file name while the copy is in progress. This causes problems if a file/dir named *._COPYING_ already exists on HDFS.
For file:
-bash-4.1$ hadoop fs -put 5M /user/occ/
-bash-4.1$ hadoop fs -mv /user/occ/5M /user/occ/5M._COPYING_
-bash-4.1$ hadoop fs -ls /user/occ/
Found 1 items
-rw-r--r-- 1 occ supergroup 5242880 2015-06-26 05:16 /user/occ/5M._COPYING_
-bash-4.1$ hadoop fs -put 128K /user/occ/5M
-bash-4.1$ hadoop fs -ls /user/occ/
Found 1 items
-rw-r--r-- 1 occ supergroup 131072 2015-06-26 05:19 /user/occ/5M
For dir:
-bash-4.1$ hadoop fs -mkdir /user/occ/5M._COPYING_
-bash-4.1$ hadoop fs -ls /user/occ/
Found 1 items
drwxr-xr-x - occ supergroup 0 2015-06-26 05:24 /user/occ/5M._COPYING_
-bash-4.1$ hadoop fs -put 128K /user/occ/5M
put: /user/occ/5M._COPYING_ already exists as a directory
-bash-4.1$ hadoop fs -ls /user/occ/
(/user/occ/5M._COPYING_ is gone)
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8665) Fix replication check in DFSTestUtils#waitForReplication
[ https://issues.apache.org/jira/browse/HDFS-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602938#comment-14602938 ] Hudson commented on HDFS-8665: -- FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #229 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/229/]) HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication. (wang: rev ff0e5e572f5dcf7b49381cbe901360f6e171d423) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fix replication check in DFSTestUtils#waitForReplication Key: HDFS-8665 URL: https://issues.apache.org/jira/browse/HDFS-8665 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial Fix For: 2.8.0 Attachments: hdfs-8665.001.patch The check looks at the repl factor set on the file rather than reported # of replica locations. Let's do the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
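The fix described above — waiting on the reported number of replica locations rather than the replication factor stored on the file — can be sketched as a polling loop. This is an illustrative standalone version with hypothetical names, not the actual DFSTestUtil code.

```java
// Sketch of a wait-for-replication check that polls the *reported* replica
// count (supplied by a callback standing in for the NN's block locations)
// instead of trusting the file's configured replication factor.
import java.util.concurrent.TimeoutException;
import java.util.function.IntSupplier;

public class WaitForReplicationSketch {
    // Polls until reportedLocations yields at least `expected`, or times out.
    static void waitForReplication(IntSupplier reportedLocations,
                                   int expected, long timeoutMs)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (reportedLocations.getAsInt() < expected) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("replicas never reached " + expected);
            }
            Thread.sleep(10);
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] reported = {1};
        // Simulate a DataNode reporting the second replica shortly after start.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            reported[0] = 2;
        }).start();
        waitForReplication(() -> reported[0], 2, 2000);
        System.out.println("reached 2 reported replicas");
    }
}
```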
[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operations for WebImageViewer
[ https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602939#comment-14602939 ] Hudson commented on HDFS-8462: -- FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #229 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/229/]) HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for WebImageViewer. Contributed by Jagadesh Kiran N. (aajisaka: rev bc433908d35758ff0a7225cd6f5662959ef4d294) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForXAttr.java * hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java Implement GETXATTRS and LISTXATTRS operations for WebImageViewer Key: HDFS-8462 URL: https://issues.apache.org/jira/browse/HDFS-8462 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Jagadesh Kiran N Fix For: 2.8.0 Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, HDFS-8462-02.patch, HDFS-8462-03.patch, HDFS-8462-04.patch, HDFS-8462-05.patch, HDFS-8462-06.patch In Hadoop 2.7.0, WebImageViewer supports the following operations: * {{GETFILESTATUS}} * {{LISTSTATUS}} * {{GETACLSTATUS}} I'm thinking it would be better for administrators if {{GETXATTRS}} and {{LISTXATTRS}} are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8640) Make reserved RBW space visible through JMX
[ https://issues.apache.org/jira/browse/HDFS-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602940#comment-14602940 ] Hudson commented on HDFS-8640: -- FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #229 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/229/]) HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by kanaka kumar avvaru) (arp: rev 67a62da5c5f592b07d083440ced3666c7709b20d) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java Make reserved RBW space visible through JMX --- Key: HDFS-8640 URL: https://issues.apache.org/jira/browse/HDFS-8640 Project: Hadoop HDFS Issue Type: Improvement Reporter: kanaka kumar avvaru Assignee: kanaka kumar avvaru Fix For: 2.8.0 Attachments: HDFS-8640-00.patch At present there is no way to trace the reserved space for RBW in DataNode to identify any leaks like HDFS-8626 Idea is to add the value to {{VolumeInfo}} so that the {{DataNodeInfo}} JMX bean can be referred to know the present reserved space -- This message was sent by Atlassian JIRA (v6.3.4#6332)
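The general plumbing for surfacing such a counter over JMX can be sketched with a minimal standalone MXBean. The names below are hypothetical; the actual patch adds the value to the existing VolumeInfo reported by the DataNodeInfo bean rather than registering a new one.

```java
// Minimal sketch of exposing a live counter through JMX: register an MXBean
// whose getter reads the current value, so any JMX client (jconsole, the
// /jmx servlet) can observe it. Names are hypothetical.
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RbwJmxSketch {
    public interface VolumeInfoMXBean {
        long getReservedForRbwBytes();
    }

    public static class VolumeInfo implements VolumeInfoMXBean {
        final AtomicLong reserved = new AtomicLong();
        public long getReservedForRbwBytes() { return reserved.get(); }
    }

    public static void main(String[] args) throws Exception {
        VolumeInfo info = new VolumeInfo();
        info.reserved.set(4096);
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("sketch:type=VolumeInfo");
        server.registerMBean(info, name);
        // A JMX client would now see the attribute; here we read it back
        // through the MBean server to show the plumbing works end to end.
        long seen = (Long) server.getAttribute(name, "ReservedForRbwBytes");
        if (seen != 4096) throw new AssertionError();
        System.out.println("ReservedForRbwBytes=" + seen);
    }
}
```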
[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage
[ https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602941#comment-14602941 ] Hudson commented on HDFS-8546: -- FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #229 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/229/]) HDFS-8546. Use try with resources in DataStorage and Storage. (wang: rev 1403b84b122fb76ef2b085a728b5402c32499c1f) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java Use try with resources in DataStorage and Storage - Key: HDFS-8546 URL: https://issues.apache.org/jira/browse/HDFS-8546 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor Fix For: 2.8.0 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, HDFS-8546.003.patch, HDFS-8546.004.patch, HDFS-8546.005.patch, HDFS-8546.006.patch We have some old-style try/finally to close files in DataStorage and Storage, let's update them. Also a few small cleanups: * Actually check that tryLock returns a FileLock in isPreUpgradableLayout * Remove unused parameter from writeProperties * Add braces for one-line if statements per coding style -- This message was sent by Atlassian JIRA (v6.3.4#6332)
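The cleanup being applied can be illustrated by contrasting the two idioms. This uses StringWriter as a stand-in resource; the actual patch touches file and lock handling in DataStorage and Storage.

```java
// Contrast of the old try/finally close idiom with Java 7 try-with-resources.
// StringWriter is only a stand-in for the file resources the patch rewrites.
import java.io.IOException;
import java.io.StringWriter;

public class TwrSketch {
    public static void main(String[] args) throws IOException {
        // Old style: explicit finally-close, easy to get wrong when nesting
        // multiple resources or when close() itself can throw.
        StringWriter w1 = new StringWriter();
        try {
            w1.write("old style");
        } finally {
            w1.close();
        }

        // New style: the resource is closed automatically, even if the body
        // throws, and suppressed exceptions from close() are preserved.
        try (StringWriter w2 = new StringWriter()) {
            w2.write("try-with-resources");
            if (!w2.toString().equals("try-with-resources")) {
                throw new AssertionError();
            }
        }
        System.out.println("both writers closed");
    }
}
```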
[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage
[ https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602952#comment-14602952 ] Hudson commented on HDFS-8546: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #238 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/238/]) HDFS-8546. Use try with resources in DataStorage and Storage. (wang: rev 1403b84b122fb76ef2b085a728b5402c32499c1f) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java Use try with resources in DataStorage and Storage - Key: HDFS-8546 URL: https://issues.apache.org/jira/browse/HDFS-8546 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor Fix For: 2.8.0 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, HDFS-8546.003.patch, HDFS-8546.004.patch, HDFS-8546.005.patch, HDFS-8546.006.patch We have some old-style try/finally to close files in DataStorage and Storage, let's update them. Also a few small cleanups: * Actually check that tryLock returns a FileLock in isPreUpgradableLayout * Remove unused parameter from writeProperties * Add braces for one-line if statements per coding style -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operations for WebImageViewer
[ https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602950#comment-14602950 ] Hudson commented on HDFS-8462: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #238 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/238/]) HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for WebImageViewer. Contributed by Jagadesh Kiran N. (aajisaka: rev bc433908d35758ff0a7225cd6f5662959ef4d294) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForXAttr.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java * hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md Implement GETXATTRS and LISTXATTRS operations for WebImageViewer Key: HDFS-8462 URL: https://issues.apache.org/jira/browse/HDFS-8462 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Jagadesh Kiran N Fix For: 2.8.0 Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, HDFS-8462-02.patch, HDFS-8462-03.patch, HDFS-8462-04.patch, HDFS-8462-05.patch, HDFS-8462-06.patch In Hadoop 2.7.0, WebImageViewer supports the following operations: * {{GETFILESTATUS}} * {{LISTSTATUS}} * {{GETACLSTATUS}} I'm thinking it would be better for administrators if {{GETXATTRS}} and {{LISTXATTRS}} are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8640) Make reserved RBW space visible through JMX
[ https://issues.apache.org/jira/browse/HDFS-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602951#comment-14602951 ] Hudson commented on HDFS-8640: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #238 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/238/]) HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by kanaka kumar avvaru) (arp: rev 67a62da5c5f592b07d083440ced3666c7709b20d) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Make reserved RBW space visible through JMX --- Key: HDFS-8640 URL: https://issues.apache.org/jira/browse/HDFS-8640 Project: Hadoop HDFS Issue Type: Improvement Reporter: kanaka kumar avvaru Assignee: kanaka kumar avvaru Fix For: 2.8.0 Attachments: HDFS-8640-00.patch At present there is no way to trace the reserved space for RBW in DataNode to identify any leaks like HDFS-8626 Idea is to add the value to {{VolumeInfo}} so that the {{DataNodeInfo}} JMX bean can be referred to know the present reserved space -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8665) Fix replication check in DFSTestUtils#waitForReplication
[ https://issues.apache.org/jira/browse/HDFS-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602949#comment-14602949 ] Hudson commented on HDFS-8665: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #238 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/238/]) HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication. (wang: rev ff0e5e572f5dcf7b49381cbe901360f6e171d423) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java Fix replication check in DFSTestUtils#waitForReplication Key: HDFS-8665 URL: https://issues.apache.org/jira/browse/HDFS-8665 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial Fix For: 2.8.0 Attachments: hdfs-8665.001.patch The check looks at the repl factor set on the file rather than reported # of replica locations. Let's do the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage
[ https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602959#comment-14602959 ] Hudson commented on HDFS-8546: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #2168 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2168/]) HDFS-8546. Use try with resources in DataStorage and Storage. (wang: rev 1403b84b122fb76ef2b085a728b5402c32499c1f) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Use try with resources in DataStorage and Storage - Key: HDFS-8546 URL: https://issues.apache.org/jira/browse/HDFS-8546 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor Fix For: 2.8.0 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, HDFS-8546.003.patch, HDFS-8546.004.patch, HDFS-8546.005.patch, HDFS-8546.006.patch We have some old-style try/finally to close files in DataStorage and Storage, let's update them. Also a few small cleanups: * Actually check that tryLock returns a FileLock in isPreUpgradableLayout * Remove unused parameter from writeProperties * Add braces for one-line if statements per coding style -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8640) Make reserved RBW space visible through JMX
[ https://issues.apache.org/jira/browse/HDFS-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602958#comment-14602958 ] Hudson commented on HDFS-8640: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #2168 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2168/]) HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by kanaka kumar avvaru) (arp: rev 67a62da5c5f592b07d083440ced3666c7709b20d) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java Make reserved RBW space visible through JMX --- Key: HDFS-8640 URL: https://issues.apache.org/jira/browse/HDFS-8640 Project: Hadoop HDFS Issue Type: Improvement Reporter: kanaka kumar avvaru Assignee: kanaka kumar avvaru Fix For: 2.8.0 Attachments: HDFS-8640-00.patch At present there is no way to trace the reserved space for RBW in DataNode to identify any leaks like HDFS-8626 Idea is to add the value to {{VolumeInfo}} so that the {{DataNodeInfo}} JMX bean can be referred to know the present reserved space -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8640) Make reserved RBW space visible through JMX
[ https://issues.apache.org/jira/browse/HDFS-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602964#comment-14602964 ] Hudson commented on HDFS-8640: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2186 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2186/]) HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by kanaka kumar avvaru) (arp: rev 67a62da5c5f592b07d083440ced3666c7709b20d) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestRbwSpaceReservation.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java Make reserved RBW space visible through JMX --- Key: HDFS-8640 URL: https://issues.apache.org/jira/browse/HDFS-8640 Project: Hadoop HDFS Issue Type: Improvement Reporter: kanaka kumar avvaru Assignee: kanaka kumar avvaru Fix For: 2.8.0 Attachments: HDFS-8640-00.patch At present there is no way to trace the reserved space for RBW in DataNode to identify any leaks like HDFS-8626 Idea is to add the value to {{VolumeInfo}} so that the {{DataNodeInfo}} JMX bean can be referred to know the present reserved space -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8665) Fix replication check in DFSTestUtils#waitForReplication
[ https://issues.apache.org/jira/browse/HDFS-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602956#comment-14602956 ] Hudson commented on HDFS-8665: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #2168 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2168/]) HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication. (wang: rev ff0e5e572f5dcf7b49381cbe901360f6e171d423) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java Fix replication check in DFSTestUtils#waitForReplication Key: HDFS-8665 URL: https://issues.apache.org/jira/browse/HDFS-8665 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial Fix For: 2.8.0 Attachments: hdfs-8665.001.patch The check looks at the repl factor set on the file rather than reported # of replica locations. Let's do the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8546) Use try with resources in DataStorage and Storage
[ https://issues.apache.org/jira/browse/HDFS-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602965#comment-14602965 ] Hudson commented on HDFS-8546: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2186 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2186/]) HDFS-8546. Use try with resources in DataStorage and Storage. (wang: rev 1403b84b122fb76ef2b085a728b5402c32499c1f) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java Use try with resources in DataStorage and Storage - Key: HDFS-8546 URL: https://issues.apache.org/jira/browse/HDFS-8546 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Minor Fix For: 2.8.0 Attachments: HDFS-8546.001.patch, HDFS-8546.002.patch, HDFS-8546.003.patch, HDFS-8546.004.patch, HDFS-8546.005.patch, HDFS-8546.006.patch We have some old-style try/finally to close files in DataStorage and Storage, let's update them. Also a few small cleanups: * Actually check that tryLock returns a FileLock in isPreUpgradableLayout * Remove unused parameter from writeProperties * Add braces for one-line if statements per coding style -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operations for WebImageViewer
[ https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602963#comment-14602963 ] Hudson commented on HDFS-8462: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2186 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2186/]) HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for WebImageViewer. Contributed by Jagadesh Kiran N. (aajisaka: rev bc433908d35758ff0a7225cd6f5662959ef4d294) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForXAttr.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java * hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Implement GETXATTRS and LISTXATTRS operations for WebImageViewer Key: HDFS-8462 URL: https://issues.apache.org/jira/browse/HDFS-8462 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Jagadesh Kiran N Fix For: 2.8.0 Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, HDFS-8462-02.patch, HDFS-8462-03.patch, HDFS-8462-04.patch, HDFS-8462-05.patch, HDFS-8462-06.patch In Hadoop 2.7.0, WebImageViewer supports the following operations: * {{GETFILESTATUS}} * {{LISTSTATUS}} * {{GETACLSTATUS}} I'm thinking it would be better for administrators if {{GETXATTRS}} and {{LISTXATTRS}} are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8665) Fix replication check in DFSTestUtils#waitForReplication
[ https://issues.apache.org/jira/browse/HDFS-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602962#comment-14602962 ] Hudson commented on HDFS-8665: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2186 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2186/]) HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication. (wang: rev ff0e5e572f5dcf7b49381cbe901360f6e171d423) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fix replication check in DFSTestUtils#waitForReplication Key: HDFS-8665 URL: https://issues.apache.org/jira/browse/HDFS-8665 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Trivial Fix For: 2.8.0 Attachments: hdfs-8665.001.patch The check looks at the repl factor set on the file rather than reported # of replica locations. Let's do the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operations for WebImageViewer
[ https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14602957#comment-14602957 ] Hudson commented on HDFS-8462: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #2168 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2168/]) HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for WebImageViewer. Contributed by Jagadesh Kiran N. (aajisaka: rev bc433908d35758ff0a7225cd6f5662959ef4d294) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForXAttr.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java * hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsImageViewer.md Implement GETXATTRS and LISTXATTRS operations for WebImageViewer Key: HDFS-8462 URL: https://issues.apache.org/jira/browse/HDFS-8462 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Jagadesh Kiran N Fix For: 2.8.0 Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, HDFS-8462-02.patch, HDFS-8462-03.patch, HDFS-8462-04.patch, HDFS-8462-05.patch, HDFS-8462-06.patch In Hadoop 2.7.0, WebImageViewer supports the following operations: * {{GETFILESTATUS}} * {{LISTSTATUS}} * {{GETACLSTATUS}} I'm thinking it would be better for administrators if {{GETXATTRS}} and {{LISTXATTRS}} are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8651) Make hadoop-hdfs-project Native code -Wall-clean
[ https://issues.apache.org/jira/browse/HDFS-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-8651: --- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Committed to 2.8. Thanks, Alan. Make hadoop-hdfs-project Native code -Wall-clean Key: HDFS-8651 URL: https://issues.apache.org/jira/browse/HDFS-8651 Project: Hadoop HDFS Issue Type: Sub-task Components: native Affects Versions: 2.7.0 Reporter: Alan Burlison Assignee: Alan Burlison Fix For: 2.8.0 Attachments: HDFS-8651.001.patch As we specify -Wall as a default compilation flag, it would be helpful if the Native code was -Wall-clean -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8623) Refactor NameNode handling of invalid, corrupt, and under-recovery blocks
[ https://issues.apache.org/jira/browse/HDFS-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603328#comment-14603328 ] Zhe Zhang commented on HDFS-8623: - Thank Jing for the helpful reviews! Refactor NameNode handling of invalid, corrupt, and under-recovery blocks - Key: HDFS-8623 URL: https://issues.apache.org/jira/browse/HDFS-8623 Project: Hadoop HDFS Issue Type: New Feature Affects Versions: 2.7.0 Reporter: Zhe Zhang Assignee: Zhe Zhang Fix For: 2.8.0 Attachments: HDFS-8623.00.patch, HDFS-8623.01.patch, HDFS-8623.02.patch, HDFS-8623.03.patch, HDFS-8623.04.patch, HDFS-8623.05.patch, HDFS-8623.06.patch In order to support striped blocks in invalid, corrupt, and under-recovery blocks handling, HDFS-7907 introduces some refactors. This JIRA aims to merge these changes to trunk first to minimize and cleanup HDFS-7285 merge patch so that it only contains striping/EC logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8659) Block scanner INFO message is spamming logs
[ https://issues.apache.org/jira/browse/HDFS-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603370#comment-14603370 ] Yongjun Zhang commented on HDFS-8659: - Thanks [~brahmareddy]. Hi [~cmccabe], I ran the failed test TestCacheDirectives successfully at local machine. Would you please take a second look? thanks. Block scanner INFO message is spamming logs --- Key: HDFS-8659 URL: https://issues.apache.org/jira/browse/HDFS-8659 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 2.7.1 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Labels: supportability Attachments: HDFS-8659.001.patch, HDFS-8659.002.patch We are seeing the following message spam the DN log: {quote} 2015-06-16 08:51:10,566 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Not scanning suspicious block BP-943360218-10.106.148.16-1416571803827:blk_1083076388_9372245 on DS-2ec89056-afb0-459e-b4e0-ac5eaececda3, because the block scanner is disabled. {quote} Create this jira to change this and other relevant messages to debug level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
[ https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-8656: -- Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Pushed to trunk and branch-2, thanks Ming for reviewing! Preserve compatibility of ClientProtocol#rollingUpgrade after finalization -- Key: HDFS-8656 URL: https://issues.apache.org/jira/browse/HDFS-8656 Project: Hadoop HDFS Issue Type: Bug Components: rolling upgrades Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Critical Fix For: 2.8.0 Attachments: hdfs-8656.001.patch, hdfs-8656.002.patch, hdfs-8656.003.patch, hdfs-8656.004.patch HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after finalization, so the DNs can differentiate between rollback and a finalization. However, this breaks compatibility for the user facing APIs, which always expect a null after finalization. Let's fix this and edify it in unit tests. As an additional improvement, isFinalized and isStarted are part of the Java API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose these booleans so JMX users don't need to do the != 0 check that possibly exposes our implementation details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8623) Refactor NameNode handling of invalid, corrupt, and under-recovery blocks
[ https://issues.apache.org/jira/browse/HDFS-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-8623: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Thanks for the confirmation, Zhe! +1 for the latest patch. I've committed it to trunk and branch-2. Refactor NameNode handling of invalid, corrupt, and under-recovery blocks - Key: HDFS-8623 URL: https://issues.apache.org/jira/browse/HDFS-8623 Project: Hadoop HDFS Issue Type: New Feature Affects Versions: 2.7.0 Reporter: Zhe Zhang Assignee: Zhe Zhang Fix For: 2.8.0 Attachments: HDFS-8623.00.patch, HDFS-8623.01.patch, HDFS-8623.02.patch, HDFS-8623.03.patch, HDFS-8623.04.patch, HDFS-8623.05.patch, HDFS-8623.06.patch In order to support striped blocks in invalid, corrupt, and under-recovery blocks handling, HDFS-7907 introduces some refactors. This JIRA aims to merge these changes to trunk first to minimize and cleanup HDFS-7285 merge patch so that it only contains striping/EC logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8664) Allow wildcards in dfs.datanode.data.dir
[ https://issues.apache.org/jira/browse/HDFS-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603236#comment-14603236 ] Patrick White commented on HDFS-8664: - interesting, lemme go over those tests. Allow wildcards in dfs.datanode.data.dir Key: HDFS-8664 URL: https://issues.apache.org/jira/browse/HDFS-8664 Project: Hadoop HDFS Issue Type: New Feature Components: datanode, HDFS Affects Versions: 3.0.0 Reporter: Patrick White Assignee: Patrick White Attachments: HDFS-8664.001.patch We have many disks per machine (12+) that don't always have the same numbering when they come back from provisioning, but they're always in the same tree following the same pattern. It would greatly reduce our config complexity to be able to specify a wildcard for all the data directories. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
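The change under review proposes glob support in the data-directory list. A hypothetical configuration is sketched below; the exact wildcard syntax depends on what HDFS-8664 finally lands, so treat this as illustrative only:

```xml
<!-- hdfs-site.xml: hypothetical wildcard form; the accepted syntax is
     whatever the final HDFS-8664 patch defines, not necessarily this glob. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- instead of enumerating /data/disk1 ... /data/disk12 -->
  <value>/data/disk*</value>
</property>
```

This would let all machines share one config even when disk numbering differs after provisioning.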
[jira] [Commented] (HDFS-8651) Make hadoop-hdfs-project Native code -Wall-clean
[ https://issues.apache.org/jira/browse/HDFS-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603265#comment-14603265 ] Hudson commented on HDFS-8651: -- FAILURE: Integrated in Hadoop-trunk-Commit #8073 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8073/]) HDFS-8651. Make hadoop-hdfs-project Native code -Wall-clean (Alan Burlison via Colin P. McCabe) (cmccabe: rev 1b764a01fd8010cf9660eb378977a1b2b81f330a) * hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_open.c * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Make hadoop-hdfs-project Native code -Wall-clean Key: HDFS-8651 URL: https://issues.apache.org/jira/browse/HDFS-8651 Project: Hadoop HDFS Issue Type: Sub-task Components: native Affects Versions: 2.7.0 Reporter: Alan Burlison Assignee: Alan Burlison Fix For: 2.8.0 Attachments: HDFS-8651.001.patch As we specify -Wall as a default compilation flag, it would be helpful if the Native code was -Wall-clean -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8674) Improve performance of postponed block scans
[ https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HDFS-8674: -- Attachment: HDFS-8674.patch No additional tests are needed; this is a simple performance optimization that existing tests cover. Improve performance of postponed block scans Key: HDFS-8674 URL: https://issues.apache.org/jira/browse/HDFS-8674 Project: Hadoop HDFS Issue Type: Improvement Components: HDFS Affects Versions: 2.6.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Attachments: HDFS-8674.patch When a standby goes active, it marks all nodes as stale which will cause block invalidations for over-replicated blocks to be queued until full block reports are received from the nodes with the block. The replication monitor scans the queue with O(N) runtime. It picks a random offset and iterates through the set to randomize blocks scanned. The result is devastating when a cluster loses multiple nodes during a rolling upgrade. Re-replication occurs, the nodes come back, the excess block invalidations are postponed. Rescanning just 2k blocks out of millions of postponed blocks may take multiple seconds. During the scan, the write lock is held which stalls all other processing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8674) Improve performance of postponed block scans
[ https://issues.apache.org/jira/browse/HDFS-8674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HDFS-8674: -- Status: Patch Available (was: Open) Improve performance of postponed block scans Key: HDFS-8674 URL: https://issues.apache.org/jira/browse/HDFS-8674 Project: Hadoop HDFS Issue Type: Improvement Components: HDFS Affects Versions: 2.6.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Attachments: HDFS-8674.patch When a standby goes active, it marks all nodes as stale which will cause block invalidations for over-replicated blocks to be queued until full block reports are received from the nodes with the block. The replication monitor scans the queue with O(N) runtime. It picks a random offset and iterates through the set to randomize blocks scanned. The result is devastating when a cluster loses multiple nodes during a rolling upgrade. Re-replication occurs, the nodes come back, the excess block invalidations are postponed. Rescanning just 2k blocks out of millions of postponed blocks may take multiple seconds. During the scan, the write lock is held which stalls all other processing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
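The O(N) behavior described in this issue can be sketched as follows. This is not the actual NameNode code, just a minimal illustration of why a random-offset scan over a Set is expensive: Set iterators cannot seek, so reaching the offset means walking past every earlier element, all while the write lock is held.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class PostponedScanSketch {
    // Scan up to 'limit' blocks starting at a random 'offset' into the set.
    // The skip loop is the O(N) cost the JIRA describes.
    static List<Long> scanFromRandomOffset(Set<Long> postponed, int offset, int limit) {
        List<Long> scanned = new ArrayList<>();
        Iterator<Long> it = postponed.iterator();
        for (int i = 0; i < offset && it.hasNext(); i++) {
            it.next(); // wasted work proportional to the offset
        }
        while (it.hasNext() && scanned.size() < limit) {
            scanned.add(it.next());
        }
        return scanned;
    }

    public static void main(String[] args) {
        Set<Long> postponed = new LinkedHashSet<>();
        for (long b = 0; b < 100_000; b++) {
            postponed.add(b);
        }
        // Collecting just 2k blocks can still touch half the set.
        List<Long> batch = scanFromRandomOffset(postponed, 50_000, 2_000);
        System.out.println(batch.size());
    }
}
```

A cheaper design (presumably what the patch targets) would retain a cursor or iterator position across scans instead of re-skipping from the start each time.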
[jira] [Commented] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets
[ https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603567#comment-14603567 ] Hadoop QA commented on HDFS-8661: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 17m 47s | Pre-patch HDFS-7240 compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 4 new or modified test files. | | {color:green}+1{color} | javac | 7m 28s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 36s | There were no new javadoc warning messages. | | {color:red}-1{color} | release audit | 0m 18s | The applied patch generated 1 release audit warnings. | | {color:red}-1{color} | checkstyle | 2m 16s | The applied patch generated 3 new checkstyle issues (total was 399, now 399). | | {color:red}-1{color} | whitespace | 0m 2s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. | | {color:green}+1{color} | install | 1m 31s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 3m 18s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 14s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 164m 31s | Tests failed in hadoop-hdfs. 
| | | | 210m 40s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate | | | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12742158/HDFS-8661-HDFS-7240.03.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | HDFS-7240 / 845a710 | | Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/11501/artifact/patchprocess/patchReleaseAuditProblems.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/11501/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/11501/artifact/patchprocess/whitespace.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11501/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11501/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11501/console | This message was automatically generated. DataNode should filter the set of NameSpaceInfos passed to Datasets --- Key: HDFS-8661 URL: https://issues.apache.org/jira/browse/HDFS-8661 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode Affects Versions: HDFS-7240 Reporter: Arpit Agarwal Assignee: Arpit Agarwal Attachments: HDFS-8661-HDFS-7240.01.patch, HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset when adding new volumes. This list should be filtered by the correct NodeType(s) for each dataset. e.g. 
in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block pools and Ozone datasets would be notified of Ozone block pool(s). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
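The filtering described above can be sketched roughly as below. NodeType, the namespace map, and filterForDataset are illustrative stand-ins, not the actual HDFS-8661 API.

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NamespaceFilterSketch {
    // Hypothetical tag for which kind of dataset serves a namespace.
    enum NodeType { HDFS, OZONE }

    // Pass each dataset only the namespaces whose NodeType it handles.
    static Map<String, NodeType> filterForDataset(
            Map<String, NodeType> nsInfos, EnumSet<NodeType> supported) {
        return nsInfos.entrySet().stream()
            .filter(e -> supported.contains(e.getValue()))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, NodeType> nsInfos = new HashMap<>();
        nsInfos.put("BP-namenode-1", NodeType.HDFS);
        nsInfos.put("ozone-pool-1", NodeType.OZONE);
        // An FsDataset would only see the NN block pool:
        System.out.println(
            filterForDataset(nsInfos, EnumSet.of(NodeType.HDFS)).keySet());
    }
}
```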
[jira] [Updated] (HDFS-8679) Move DatasetSpi to new package
[ https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-8679: Description: The DatasetSpi and VolumeSpi interfaces are currently in {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}. (was: The DataetSpi and VolumeSpi interfaces are currently in {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.) Move DatasetSpi to new package -- Key: HDFS-8679 URL: https://issues.apache.org/jira/browse/HDFS-8679 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode Reporter: Arpit Agarwal Assignee: Arpit Agarwal The DatasetSpi and VolumeSpi interfaces are currently in {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets
[ https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603573#comment-14603573 ] Arpit Agarwal edited comment on HDFS-8661 at 6/26/15 9:13 PM: -- - whitespace, audit warnings and unit test failures are unrelated to the patch. - The checkstyle warning about the TODO will be fixed in the branch before merging to trunk. was (Author: arpitagarwal): - whitespace, checkstyle, audit warnings are unrelated to the patch. - The checkstyle warning about the TODO will be fixed in the branch before merging to trunk. DataNode should filter the set of NameSpaceInfos passed to Datasets --- Key: HDFS-8661 URL: https://issues.apache.org/jira/browse/HDFS-8661 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode Affects Versions: HDFS-7240 Reporter: Arpit Agarwal Assignee: Arpit Agarwal Attachments: HDFS-8661-HDFS-7240.01.patch, HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset when adding new volumes. This list should be filtered by the correct NodeType(s) for each dataset. e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block pools and Ozone datasets would be notified of Ozone block pool(s). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8654) OzoneHandler : Add ACL support
[ https://issues.apache.org/jira/browse/HDFS-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-8654: Status: Patch Available (was: Open) OzoneHandler : Add ACL support -- Key: HDFS-8654 URL: https://issues.apache.org/jira/browse/HDFS-8654 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Anu Engineer Assignee: Anu Engineer Attachments: hdfs-8654-HDFS-7240.001.patch Add ACL support which is needed by Ozone Buckets -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8656) Preserve compatibility of ClientProtocol#rollingUpgrade after finalization
[ https://issues.apache.org/jira/browse/HDFS-8656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14603421#comment-14603421 ] Hudson commented on HDFS-8656: -- FAILURE: Integrated in Hadoop-trunk-Commit #8074 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8074/]) HDFS-8656. Preserve compatibility of ClientProtocol#rollingUpgrade after finalization. (wang: rev 60b858bfa65e0feb665e1a84784a3d45e9091c66) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Preserve compatibility of ClientProtocol#rollingUpgrade after finalization -- Key: HDFS-8656 URL: https://issues.apache.org/jira/browse/HDFS-8656 Project: Hadoop HDFS Issue Type: Bug Components: rolling upgrades Affects Versions: 2.8.0 Reporter: Andrew Wang Assignee: Andrew Wang Priority: Critical Fix For: 2.8.0 Attachments: hdfs-8656.001.patch, hdfs-8656.002.patch, hdfs-8656.003.patch, hdfs-8656.004.patch HDFS-7645 changed rollingUpgradeInfo to still return an RUInfo after finalization, so the DNs can differentiate between a rollback and a finalization. However, this breaks compatibility for the user-facing APIs, which always expect a null after finalization. Let's fix this and verify it in unit tests. As an additional improvement, isFinalized and isStarted are part of the Java API, but not in the JMX output of RollingUpgradeInfo. It'd be nice to expose these booleans so JMX users don't need to do the != 0 check that possibly exposes our implementation details.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
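The compatibility contract described in this issue can be sketched as below. RollingUpgradeState and its fields are hypothetical stand-ins, not the actual ClientProtocol types; the point is only that user-facing callers see null once the upgrade is finalized, while internal state may still carry the finalization flags.

```java
public class RollingUpgradeCompatSketch {
    // Hypothetical stand-in for the upgrade info object.
    static class RollingUpgradeState {
        final boolean isStarted;
        final boolean isFinalized;
        RollingUpgradeState(boolean started, boolean finalized) {
            this.isStarted = started;
            this.isFinalized = finalized;
        }
    }

    // What a backward-compatible user-facing API returns:
    // old clients expect null after finalization.
    static RollingUpgradeState clientVisible(RollingUpgradeState internal) {
        if (internal == null || internal.isFinalized) {
            return null;
        }
        return internal;
    }

    public static void main(String[] args) {
        RollingUpgradeState inProgress = new RollingUpgradeState(true, false);
        RollingUpgradeState finalized = new RollingUpgradeState(true, true);
        System.out.println(clientVisible(inProgress) != null); // still visible
        System.out.println(clientVisible(finalized) == null);  // hidden after finalization
    }
}
```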
[jira] [Updated] (HDFS-8356) Document missing properties in hdfs-default.xml
[ https://issues.apache.org/jira/browse/HDFS-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HDFS-8356: - Attachment: HDFS-8356.002.patch - Remove newlines from empty values to allow more unit tests to pass - Add appropriate default values for *.startup properties - Still lots of properties to document and add default values Document missing properties in hdfs-default.xml --- Key: HDFS-8356 URL: https://issues.apache.org/jira/browse/HDFS-8356 Project: Hadoop HDFS Issue Type: Bug Components: documentation, HDFS, test Affects Versions: 2.7.0 Reporter: Ray Chiang Assignee: Ray Chiang Labels: supportability, test Attachments: HDFS-8356.001.patch, HDFS-8356.002.patch The following properties are currently not defined in hdfs-default.xml. These properties should either be A) documented in hdfs-default.xml OR B) listed as an exception (with comments, e.g. for internal use) in the TestHdfsConfigFields unit test -- This message was sent by Atlassian JIRA (v6.3.4#6332)
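For reference, each documented entry in hdfs-default.xml follows the shape below. The property shown is purely illustrative, not one of the specific undocumented properties this JIRA tracks:

```xml
<!-- Illustrative hdfs-default.xml entry (hypothetical property name). -->
<property>
  <name>dfs.example.property</name>
  <value>default-value</value>
  <description>
    What the property controls, its default, and any valid-range notes,
    so that TestHdfsConfigFields can verify it is documented.
  </description>
</property>
```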
[jira] [Created] (HDFS-8677) Introduce StorageContainerDatasetSpi
Arpit Agarwal created HDFS-8677: --- Summary: Introduce StorageContainerDatasetSpi Key: HDFS-8677 URL: https://issues.apache.org/jira/browse/HDFS-8677 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Arpit Agarwal Assignee: Arpit Agarwal StorageContainerDatasetSpi will be a new interface for Ozone containers, just as FsDatasetSpi is an interface for manipulating HDFS block files. The interface will have support for both key-value containers for storing Ozone metadata and blobs for storing user data. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7390) Provide JMX metrics per storage type
[ https://issues.apache.org/jira/browse/HDFS-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benoy Antony updated HDFS-7390: --- Attachment: HDFS-7390-008.patch Attaching the patch which addresses [~arpitagarwal]'s comments. Provide JMX metrics per storage type Key: HDFS-7390 URL: https://issues.apache.org/jira/browse/HDFS-7390 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: 2.5.2 Reporter: Benoy Antony Assignee: Benoy Antony Labels: BB2015-05-TBR Attachments: HDFS-7390-003.patch, HDFS-7390-004.patch, HDFS-7390-005.patch, HDFS-7390-006.patch, HDFS-7390-007.patch, HDFS-7390-008.patch, HDFS-7390.patch, HDFS-7390.patch HDFS-2832 added heterogeneous support. In a cluster with different storage types, it is useful to have metrics per storage type. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8670) Better to exclude decommissioned nodes for namenode NodeUsage JMX
[ https://issues.apache.org/jira/browse/HDFS-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.Andreina updated HDFS-8670: - Status: Patch Available (was: Open) Better to exclude decommissioned nodes for namenode NodeUsage JMX - Key: HDFS-8670 URL: https://issues.apache.org/jira/browse/HDFS-8670 Project: Hadoop HDFS Issue Type: Bug Reporter: Ming Ma Assignee: J.Andreina Attachments: HDFS-8670.1.patch The namenode NodeUsage JMX has Max, Median, Min and Standard Deviation of DataNode usage; it currently includes decommissioned nodes in the calculation. However, given that the balancer doesn't work on decommissioned nodes, and nodes can sometimes stay decommissioned for a long time, it might be better to exclude decommissioned nodes from the metrics calculation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
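The proposed change can be sketched as below: compute the NodeUsage statistics only over non-decommissioned DataNodes. DataNodeUsage is a hypothetical stand-in for the NameNode's per-node usage data, not the actual class.

```java
import java.util.Arrays;
import java.util.List;

public class NodeUsageSketch {
    // Hypothetical per-node usage record.
    static class DataNodeUsage {
        final double usedPercent;
        final boolean decommissioned;
        DataNodeUsage(double usedPercent, boolean decommissioned) {
            this.usedPercent = usedPercent;
            this.decommissioned = decommissioned;
        }
    }

    // Returns {max, min, median} of usage, skipping decommissioned nodes
    // (the exclusion this JIRA proposes).
    static double[] maxMinMedian(List<DataNodeUsage> nodes) {
        double[] usages = nodes.stream()
            .filter(n -> !n.decommissioned)
            .mapToDouble(n -> n.usedPercent)
            .sorted()
            .toArray();
        double median = usages[usages.length / 2];
        return new double[] { usages[usages.length - 1], usages[0], median };
    }

    public static void main(String[] args) {
        List<DataNodeUsage> nodes = Arrays.asList(
            new DataNodeUsage(40.0, false),
            new DataNodeUsage(60.0, false),
            new DataNodeUsage(95.0, true)); // ignored: decommissioned
        System.out.println(Arrays.toString(maxMinMedian(nodes)));
    }
}
```

Without the filter, the long-lived 95% decommissioned node would skew Max and the standard deviation even though the balancer can never act on it.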