[jira] [Commented] (HDFS-6955) DN should reserve disk space for a full block when creating tmp files
[ https://issues.apache.org/jira/browse/HDFS-6955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730558#comment-14730558 ] Vinayakumar B commented on HDFS-6955: - Thanks [~kanaka] for taking up the issue. The patch looks almost good. Some nits: 1. I think releasing the reservation in {{cleanupBlock()}} is not required. By the time this method is called in {{BlockReceiver}}, the reserved space will already have been released, so this would release it a second time. 2. Since the reservation now covers both RBW and temp files, renaming {{TestRbwSpaceReservation.java}} to something like {{TestSpaceReservation.java}} would be a good idea IMO. If you feel the existing name is fine, then no issues. [~arpitagarwal], do you also want to take a look at the patch? > DN should reserve disk space for a full block when creating tmp files > - > > Key: HDFS-6955 > URL: https://issues.apache.org/jira/browse/HDFS-6955 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.5.0 >Reporter: Arpit Agarwal >Assignee: Kanaka Kumar Avvaru > Attachments: HDFS-6955-01.patch, HDFS-6955-02.patch, > HDFS-6955-03.patch, HDFS-6955-04.patch > > > HDFS-6898 is introducing disk space reservation for RBW files to avoid > running out of disk space midway through block creation. > This Jira is to introduce similar reservation for tmp files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
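The reserve-on-create / release-exactly-once accounting discussed above can be sketched as a toy model. This is a hypothetical illustration under invented names (`VolumeReservation`, `reserve`, `release`), not the HDFS-6955 patch or the actual `FsVolumeImpl` code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of per-volume space reservation for block files
// (RBW and tmp alike). All names are invented for illustration.
class VolumeReservation {
    private final long capacity;
    private final AtomicLong reserved = new AtomicLong(0);

    VolumeReservation(long capacity) { this.capacity = capacity; }

    // Reserve a full block's worth of space when the replica file is created.
    boolean reserve(long blockSize) {
        long cur;
        do {
            cur = reserved.get();
            if (cur + blockSize > capacity) {
                return false; // not enough room for a full block
            }
        } while (!reserved.compareAndSet(cur, cur + blockSize));
        return true;
    }

    // Must be called exactly once, when the replica is finalized or cleaned
    // up. Releasing again from a second path (the cleanupBlock() nit above)
    // would drive the counter below the true usage.
    void release(long blockSize) {
        reserved.addAndGet(-blockSize);
    }

    long reservedBytes() { return reserved.get(); }
}
```

The double-release concern is visible in the model: calling `release` twice for the same replica makes `reservedBytes()` under-count, letting later writers over-reserve the volume.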
[jira] [Commented] (HDFS-8946) Improve choosing datanode storage for block placement
[ https://issues.apache.org/jira/browse/HDFS-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730727#comment-14730727 ] Surendra Singh Lilhore commented on HDFS-8946: -- *One suggestion:* Some log statements were removed in this jira. 1. {code} if (requiredSize > remaining - scheduledSize) { logNodeIsNotChosen(storage, "the node does not have enough " + storage.getStorageType() + " space" + " (required=" + requiredSize + ", scheduled=" + scheduledSize + ", remaining=" + remaining + ")"); return false; } {code} 2. {code} logNodeIsNotChosen(storage, "storage is read-only"); {code} 3. {code} logNodeIsNotChosen(storage, "storage has failed"); {code} I think these logs are very important for debugging: from the debug log we can tell why a DN was not selected for block replication. Can we add them back? At least the first log should be there. In HDFS-9023 I want to make some log improvements for when the NN is not able to identify DNs for replication. > Improve choosing datanode storage for block placement > - > > Key: HDFS-8946 > URL: https://issues.apache.org/jira/browse/HDFS-8946 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yi Liu >Assignee: Yi Liu > Fix For: 2.8.0 > > Attachments: HDFS-8946.001.patch, HDFS-8946.002.patch, > HDFS-8946.003.patch > > > This JIRA is to: > Improve choosing datanode storage for block placement: > In {{BlockPlacementPolicyDefault}} ({{chooseLocalStorage}}, > {{chooseRandom}}), we have the following logic to choose a datanode storage to > place a block. > For a given storage type, we iterate over the storages of the datanode. But for the > datanode, it only cares about the storage type. In the loop, we check > according to storage type and return the first storage if the storages of the > type on the datanode fit the requirement. 
So we can remove the iteration over > storages and just do a single check to find a good storage of the given type. This is > more efficient when the storages of that type on the datanode don't fit the > requirement, since we don't need to loop over all storages and repeat the same check. > Besides, there is no need to shuffle the storages, since we only need to check > according to the storage type on the datanode once. > This also improves the logic and makes it clearer. > {code} > if (excludedNodes.add(localMachine) // was not in the excluded list > && isGoodDatanode(localDatanode, maxNodesPerRack, false, > results, avoidStaleNodes)) { > for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes > .entrySet().iterator(); iter.hasNext(); ) { > Map.Entry<StorageType, Integer> entry = iter.next(); > for (DatanodeStorageInfo localStorage : DFSUtil.shuffle( > localDatanode.getStorageInfos())) { > StorageType type = entry.getKey(); > if (addIfIsGoodTarget(localStorage, excludedNodes, blocksize, > results, type) >= 0) { > int num = entry.getValue(); > ... > {code} > (current logic above) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
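The simplification described above, checking a datanode once per requested storage type instead of shuffling and scanning every storage for each type, can be modeled with a small sketch. This is a hypothetical simplification with invented names, not the actual `BlockPlacementPolicyDefault` code:

```java
import java.util.List;

// Hypothetical model: one pass, no shuffle. For placement we only care
// whether the node can host the block on a storage of the wanted type,
// so a single scan that stops at the first usable match is enough.
class StorageChooser {
    enum Type { DISK, SSD }

    static class Storage {
        final Type type; final long remaining; final boolean readOnly;
        Storage(Type t, long r, boolean ro) { type = t; remaining = r; readOnly = ro; }
    }

    // Return the first usable storage of the given type, or null if the
    // node has none; an early null also short-circuits further checks.
    static Storage chooseByType(List<Storage> storages, Type wanted, long required) {
        for (Storage s : storages) {
            if (s.type == wanted && !s.readOnly && s.remaining >= required) {
                return s;
            }
        }
        return null;
    }
}
```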
[jira] [Commented] (HDFS-8960) DFS client says "no more good datanodes being available to try" on a single drive failure
[ https://issues.apache.org/jira/browse/HDFS-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730699#comment-14730699 ] Hudson commented on HDFS-8960: -- FAILURE: Integrated in HBase-TRUNK #6780 (See [https://builds.apache.org/job/HBase-TRUNK/6780/]) HBASE-14317 Stuck FSHLog: bad disk (HDFS-8960) and can't roll WAL; addendum2 -- found a fix testing the branch-1 patch (stack: rev ec4d719f1927576d3de321c2e380e4c4acd099db) * hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java > DFS client says "no more good datanodes being available to try" on a single > drive failure > - > > Key: HDFS-8960 > URL: https://issues.apache.org/jira/browse/HDFS-8960 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Affects Versions: 2.7.1 > Environment: openjdk version "1.8.0_45-internal" > OpenJDK Runtime Environment (build 1.8.0_45-internal-b14) > OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode) >Reporter: Benoit Sigoure > Attachments: blk_1073817519_77099.log, r12s13-datanode.log, > r12s16-datanode.log > > > Since we upgraded to 2.7.1 we regularly see single-drive failures cause > widespread problems at the HBase level (with the default 3x replication > target). > Here's an example. This HBase RegionServer is r12s16 (172.24.32.16) and is > writing its WAL to [172.24.32.16:10110, 172.24.32.8:10110, > 172.24.32.13:10110] as can be seen by the following occasional messages: > {code} > 2015-08-23 06:28:40,272 INFO [sync.3] wal.FSHLog: Slow sync cost: 123 ms, > current pipeline: [172.24.32.16:10110, 172.24.32.8:10110, 172.24.32.13:10110] > {code} > A bit later, the second node in the pipeline above is going to experience an > HDD failure. 
> {code} > 2015-08-23 07:21:58,720 WARN [DataStreamer for file > /hbase/WALs/r12s16.sjc.aristanetworks.com,9104,1439917659071/r12s16.sjc.aristanetworks.com%2C9104%2C1439917659071.default.1440314434998 > block BP-1466258523-172.24.32.1-1437768622582:blk_1073817519_77099] > hdfs.DFSClient: Error Recovery for block > BP-1466258523-172.24.32.1-1437768622582:blk_1073817519_77099 in pipeline > 172.24.32.16:10110, 172.24.32.13:10110, 172.24.32.8:10110: bad datanode > 172.24.32.8:10110 > {code} > And then HBase will go like "omg I can't write to my WAL, let me commit > suicide". > {code} > 2015-08-23 07:22:26,060 FATAL > [regionserver/r12s16.sjc.aristanetworks.com/172.24.32.16:9104.append-pool1-t1] > wal.FSHLog: Could not append. Requesting close of wal > java.io.IOException: Failed to replace a bad datanode on the existing > pipeline due to no more good datanodes being available to try. (Nodes: > current=[172.24.32.16:10110, 172.24.32.13:10110], > original=[172.24.32.16:10110, 172.24.32.13:10110]). The current failed > datanode replacement policy is DEFAULT, and a client may configure this via > 'dfs.client.block.write.replace-datanode-on-failure.policy' in its > configuration. > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:969) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1035) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1184) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:933) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487) > {code} > Whereas this should be mostly a non-event as the DFS client should just drop > the bad replica from the write pipeline. > This is a small cluster but has 16 DNs so the failed DN in the pipeline > should be easily replaced. 
I didn't set > {{dfs.client.block.write.replace-datanode-on-failure.policy}} (so it's still > {{DEFAULT}}) and didn't set > {{dfs.client.block.write.replace-datanode-on-failure.enable}} (so it's still > {{true}}). > I don't see anything noteworthy in the NN log around the time of the failure, > it just seems like the DFS client gave up or threw an exception back to HBase > that it wasn't throwing before or something else, and that made this single > drive failure lethal. > We've occasionally been "unlucky" enough to have a single-drive failure cause > multiple RegionServers to commit suicide because they had their WALs on that > drive. > We upgraded from 2.7.0 about a month ago, and I'm not sure whether we were > seeing this with 2.7 or not – prior to that we were running in a quite > different environment, but this is a fairly new deployment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
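For reference, the two client-side settings quoted above live in hdfs-site.xml. The values shown are the defaults the reporter says were in effect; {{ALWAYS}} and {{NEVER}} are the other accepted policy values:

```xml
<!-- hdfs-site.xml (client side). DEFAULT replaces a failed datanode only
     under certain conditions (e.g. replication >= 3 and pipeline shrunk
     enough); NEVER always continues with the remaining nodes, ALWAYS
     always tries to add a replacement. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
```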
[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots
[ https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730629#comment-14730629 ] Jagadesh Kiran N commented on HDFS-8986: Hi [~ggop], IMO snapshots are not treated as normal directories; they require admin privileges. Hence there is no need to tell a normal user that this is a snapshot directory. Please let me know if you have a specific use case for this, or share any inputs if you have already thought it through. > Add option to -du to calculate directory space usage excluding snapshots > > > Key: HDFS-8986 > URL: https://issues.apache.org/jira/browse/HDFS-8986 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Reporter: Gautam Gopalakrishnan >Assignee: Jagadesh Kiran N > > When running {{hadoop fs -du}} on a snapshotted directory (or one of its > children), the report includes space consumed by blocks that are only present > in the snapshots. This is confusing for end users. > {noformat} > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 799.7 M 2.3 G /tmp/parent > 799.7 M 2.3 G /tmp/parent/sub1 > $ hdfs dfs -createSnapshot /tmp/parent snap1 > Created snapshot /tmp/parent/.snapshot/snap1 > $ hadoop fs -rm -skipTrash /tmp/parent/sub1/* > ... > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 799.7 M 2.3 G /tmp/parent > 799.7 M 2.3 G /tmp/parent/sub1 > $ hdfs dfs -deleteSnapshot /tmp/parent snap1 > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 0 0 /tmp/parent > 0 0 /tmp/parent/sub1 > {noformat} > It would be helpful if we had a flag, say -X, to exclude any snapshot related > disk usage in the output -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9023) When NN is not able to identify DN for replication, reason behind it can be logged
Surendra Singh Lilhore created HDFS-9023: Summary: When NN is not able to identify DN for replication, reason behind it can be logged Key: HDFS-9023 URL: https://issues.apache.org/jira/browse/HDFS-9023 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs-client, namenode Affects Versions: 2.7.1 Reporter: Surendra Singh Lilhore Assignee: Surendra Singh Lilhore Priority: Critical When the NN is not able to identify a DN for replication, the reason can be logged (at least the critical information about why DNs were not chosen, e.g. the disk is full). At present the user is expected to enable debug logging. For example, the reason for the error below looks to be that all 7 DNs are busy with data writes, but no hint is given in the log message at the client or NN side. {noformat} File /tmp/logs/spark/logs/application_1437051383180_0610/xyz-195_26009.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 7 datanode(s) running and no node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1553) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9024) Deprecate TotalFiles metric
[ https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HDFS-9024: Status: Patch Available (was: Open) > Deprecate TotalFiles metric > --- > > Key: HDFS-9024 > URL: https://issues.apache.org/jira/browse/HDFS-9024 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA > Labels: metrics > Attachments: HDFS-9024.001.patch > > > There are two metrics (TotalFiles and FilesTotal) which are the same. In > HDFS-5165, we decided to remove TotalFiles but we need to deprecate the > metric before removing it. This issue is to deprecate the metric. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-5165) FSNameSystem TotalFiles and FilesTotal metrics are the same
[ https://issues.apache.org/jira/browse/HDFS-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730582#comment-14730582 ] Akira AJISAKA commented on HDFS-5165: - Filed HDFS-9024. > FSNameSystem TotalFiles and FilesTotal metrics are the same > --- > > Key: HDFS-5165 > URL: https://issues.apache.org/jira/browse/HDFS-5165 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.1.0-beta >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA >Priority: Minor > Labels: BB2015-05-TBR, metrics, newbie > Attachments: HDFS-5165.2.patch, HDFS-5165.patch > > > Both FSNameSystem TotalFiles and FilesTotal metrics mean total files/dirs in > the cluster. One of these metrics should be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9024) Deprecate TotalFiles metric
[ https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HDFS-9024: Attachment: HDFS-9024.001.patch > Deprecate TotalFiles metric > --- > > Key: HDFS-9024 > URL: https://issues.apache.org/jira/browse/HDFS-9024 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA > Labels: metrics > Attachments: HDFS-9024.001.patch > > > There are two metrics (TotalFiles and FilesTotal) which are the same. In > HDFS-5165, we decided to remove TotalFiles but we need to deprecate the > metric before removing it. This issue is to deprecate the metric. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8698) Add "-direct" flag option for fs copy so that user can choose not to create "._COPYING_" file
[ https://issues.apache.org/jira/browse/HDFS-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730697#comment-14730697 ] Hadoop QA commented on HDFS-8698: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 16m 39s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 2 new or modified test files. | | {color:green}+1{color} | javac | 7m 41s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 50s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 1m 9s | The applied patch generated 5 new checkstyle issues (total was 60, now 64). | | {color:red}-1{color} | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. | | {color:green}+1{color} | install | 1m 28s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 1m 50s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | common tests | 22m 49s | Tests passed in hadoop-common. 
| | | | 62m 27s | | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754181/HDFS-8698.2.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 40d222e | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12306/artifact/patchprocess/diffcheckstylehadoop-common.txt | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/12306/artifact/patchprocess/whitespace.txt | | hadoop-common test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12306/artifact/patchprocess/testrun_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12306/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12306/console | This message was automatically generated. > Add "-direct" flag option for fs copy so that user can choose not to create > "._COPYING_" file > - > > Key: HDFS-8698 > URL: https://issues.apache.org/jira/browse/HDFS-8698 > Project: Hadoop HDFS > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.0 >Reporter: Chen He >Assignee: J.Andreina > Attachments: HDFS-8698.1.patch, HDFS-8698.2.patch > > > Because CLI is using CommandWithDestination.java which add "._COPYING_" to > the tail of file name when it does the copy. For blobstore like S3 and Swift, > to create "._COPYING_" file and rename it is expensive. "-direct" flag can > allow user to avoiding the "._COPYING_" file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9023) When NN is not able to identify DN for replication, reason behind it can be logged
[ https://issues.apache.org/jira/browse/HDFS-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730719#comment-14730719 ] Surendra Singh Lilhore commented on HDFS-9023: -- In the log we can give the client some extra info, like {{READ_ONLY=10, NO_SPACE=5, FAILED=4}}, or {{All required storage types are unavailable.}} > When NN is not able to identify DN for replication, reason behind it can be > logged > -- > > Key: HDFS-9023 > URL: https://issues.apache.org/jira/browse/HDFS-9023 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs-client, namenode >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Critical > > When NN is not able to identify DN for replication, reason behind it can be > logged (at least critical information why DNs not chosen like disk is full). > At present it is expected to enable debug log. > For example the reason for below error looks like all 7 DNs are busy for data > writes. But at client or NN side no hint is given in the log message. > {noformat} > File /tmp/logs/spark/logs/application_1437051383180_0610/xyz-195_26009.tmp > could only be replicated to 0 nodes instead of minReplication (=1). There > are 7 datanode(s) running and no node(s) are excluded in this operation. > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1553) > > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
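Aggregating per-storage rejection reasons into one compact line, as suggested, could look roughly like this. The enum values and class names are invented for illustration; this is a sketch of the idea, not the HDFS-9023 patch:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch: count why each candidate storage was rejected and
// emit one summary line, instead of requiring debug logging to be enabled
// to see per-node rejection messages.
class RejectionSummary {
    enum Reason { READ_ONLY, NO_SPACE, FAILED }

    private final Map<Reason, Integer> counts = new EnumMap<>(Reason.class);

    void record(Reason r) {
        counts.merge(r, 1, Integer::sum);
    }

    // Produces e.g. "READ_ONLY=10, NO_SPACE=5, FAILED=4" -- compact enough
    // to attach to the "could only be replicated to 0 nodes" message.
    String summarize() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Reason, Integer> e : counts.entrySet()) {
            if (sb.length() > 0) sb.append(", ");
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }
}
```

An `EnumMap` keeps the output order stable (enum declaration order), so repeated failures produce directly comparable log lines.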
[jira] [Commented] (HDFS-5165) FSNameSystem TotalFiles and FilesTotal metrics are the same
[ https://issues.apache.org/jira/browse/HDFS-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730577#comment-14730577 ] Akira AJISAKA commented on HDFS-5165: - Thanks [~ozawa] for looking into this. Rethinking this, we need to deprecate TotalFiles before removing it. I'll file a separate jira for doing this. > FSNameSystem TotalFiles and FilesTotal metrics are the same > --- > > Key: HDFS-5165 > URL: https://issues.apache.org/jira/browse/HDFS-5165 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.1.0-beta >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA >Priority: Minor > Labels: BB2015-05-TBR, metrics, newbie > Attachments: HDFS-5165.2.patch, HDFS-5165.patch > > > Both FSNameSystem TotalFiles and FilesTotal metrics mean total files/dirs in > the cluster. One of these metrics should be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9024) Deprecate TotalFiles metric
Akira AJISAKA created HDFS-9024: --- Summary: Deprecate TotalFiles metric Key: HDFS-9024 URL: https://issues.apache.org/jira/browse/HDFS-9024 Project: Hadoop HDFS Issue Type: Improvement Reporter: Akira AJISAKA Assignee: Akira AJISAKA There are two metrics (TotalFiles and FilesTotal) which are the same. In HDFS-5165, we decided to remove TotalFiles but we need to deprecate the metric before removing it. This issue is to deprecate the metric. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8698) Add "-direct" flag option for fs copy so that user can choose not to create "._COPYING_" file
[ https://issues.apache.org/jira/browse/HDFS-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.Andreina updated HDFS-8698: - Attachment: HDFS-8698.2.patch Updated the patch. Please review. > Add "-direct" flag option for fs copy so that user can choose not to create > "._COPYING_" file > - > > Key: HDFS-8698 > URL: https://issues.apache.org/jira/browse/HDFS-8698 > Project: Hadoop HDFS > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.0 >Reporter: Chen He >Assignee: J.Andreina > Attachments: HDFS-8698.1.patch, HDFS-8698.2.patch > > > Because CLI is using CommandWithDestination.java which add "._COPYING_" to > the tail of file name when it does the copy. For blobstore like S3 and Swift, > to create "._COPYING_" file and rename it is expensive. "-direct" flag can > allow user to avoiding the "._COPYING_" file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730603#comment-14730603 ] Hadoop QA commented on HDFS-8384: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 17m 21s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:green}+1{color} | javac | 7m 41s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 9m 55s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. | | {color:red}-1{color} | checkstyle | 1m 20s | The applied patch generated 1 new checkstyle issues (total was 273, now 273). | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 28s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 2m 28s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 10s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 176m 58s | Tests failed in hadoop-hdfs. 
| | | | 221m 20s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.hdfs.qjournal.client.TestQuorumJournalManager | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754045/HDFS-8384.000.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / c83d13c | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12304/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12304/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12304/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12304/console | This message was automatically generated. > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Attachments: HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actually problem is that the image could be corrupted by bugs like > HDFS-7587. 
We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8581) count cmd calculate wrong when huge files exist in one folder
[ https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730659#comment-14730659 ] J.Andreina commented on HDFS-8581: -- Test case failures are not related to this patch. Please review. > count cmd calculate wrong when huge files exist in one folder > - > > Key: HDFS-8581 > URL: https://issues.apache.org/jira/browse/HDFS-8581 > Project: Hadoop HDFS > Issue Type: Bug > Components: HDFS >Reporter: tongshiquan >Assignee: J.Andreina >Priority: Minor > Attachments: HDFS-8581.1.patch, HDFS-8581.2.patch, HDFS-8581.3.patch > > > If one directory such as "/result" exists about 20 files, then when > execute "hdfs dfs -count /", the result will go wrong. For all directories > whose name after "/result", file num will not be included. > My cluster see as below, "/result_1433858936" is the directory exist huge > files, and files in "/sparkJobHistory", "/tmp", "/user" are not included > vm-221:/export1/BigData/current # hdfs dfs -ls / > 15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled. > Found 9 items > -rw-r--r-- 3 hdfs supergroup 0 2015-06-08 12:10 > /PRE_CREATE_DIR.SUCCESS > drwxr-x--- - flume hadoop 0 2015-06-08 12:08 /flume > drwx-- - hbase hadoop 0 2015-06-10 15:25 /hbase > drwxr-xr-x - hdfs supergroup 0 2015-06-10 17:19 /hyt > drwxrwxrwx - mapred hadoop 0 2015-06-08 12:08 /mr-history > drwxr-xr-x - hdfs supergroup 0 2015-06-09 22:10 > /result_1433858936 > drwxrwxrwx - spark supergroup 0 2015-06-10 19:15 /sparkJobHistory > drwxrwxrwx - hdfs hadoop 0 2015-06-08 12:14 /tmp > drwxrwxrwx - hdfs hadoop 0 2015-06-09 21:57 /user > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count / > 15/06/11 11:00:24 INFO hdfs.PeerCache: SocketCache disabled. > 1043 171536 1756375688 / > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /PRE_CREATE_DIR.SUCCESS > 15/06/11 11:00:30 INFO hdfs.PeerCache: SocketCache disabled. 
>01 0 /PRE_CREATE_DIR.SUCCESS > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /flume > 15/06/11 11:00:41 INFO hdfs.PeerCache: SocketCache disabled. >10 0 /flume > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /hbase > 15/06/11 11:00:49 INFO hdfs.PeerCache: SocketCache disabled. > 36 18 14807 /hbase > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /hyt > 15/06/11 11:01:09 INFO hdfs.PeerCache: SocketCache disabled. >10 0 /hyt > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /mr-history > 15/06/11 11:01:18 INFO hdfs.PeerCache: SocketCache disabled. >30 0 /mr-history > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /result_1433858936 > 15/06/11 11:01:29 INFO hdfs.PeerCache: SocketCache disabled. > 1001 171517 1756360881 /result_1433858936 > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /sparkJobHistory > 15/06/11 11:01:41 INFO hdfs.PeerCache: SocketCache disabled. >13 21785 /sparkJobHistory > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /tmp > 15/06/11 11:01:48 INFO hdfs.PeerCache: SocketCache disabled. > 176 35958 /tmp > vm-221:/export1/BigData/current # > vm-221:/export1/BigData/current # hdfs dfs -count /user > 15/06/11 11:01:55 INFO hdfs.PeerCache: SocketCache disabled. > 121 19077 /user -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots
[ https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730814#comment-14730814 ] Jagadesh Kiran N commented on HDFS-8986: Thanks for your clarification [~ggop]. [~szetszwo] & [~jingzhao], please share your views on this improvement. > Add option to -du to calculate directory space usage excluding snapshots > > > Key: HDFS-8986 > URL: https://issues.apache.org/jira/browse/HDFS-8986 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Reporter: Gautam Gopalakrishnan >Assignee: Jagadesh Kiran N > > When running {{hadoop fs -du}} on a snapshotted directory (or one of its > children), the report includes space consumed by blocks that are only present > in the snapshots. This is confusing for end users. > {noformat} > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 799.7 M 2.3 G /tmp/parent > 799.7 M 2.3 G /tmp/parent/sub1 > $ hdfs dfs -createSnapshot /tmp/parent snap1 > Created snapshot /tmp/parent/.snapshot/snap1 > $ hadoop fs -rm -skipTrash /tmp/parent/sub1/* > ... > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 799.7 M 2.3 G /tmp/parent > 799.7 M 2.3 G /tmp/parent/sub1 > $ hdfs dfs -deleteSnapshot /tmp/parent snap1 > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 0 0 /tmp/parent > 0 0 /tmp/parent/sub1 > {noformat} > It would be helpful if we had a flag, say -X, to exclude any snapshot related > disk usage in the output -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC
[ https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730826#comment-14730826 ] Tsz Wo Nicholas Sze commented on HDFS-9011: --- Patch looks good in general. Just some questions: should we enforce block report index order, i.e. context.getCurRpc() == indexInLastBlockReport + 1? Also, do we need to handle an out-of-order block report index? One of the RPCs may be dropped and re-sent later, so the block report RPCs may arrive out of order. > Support splitting BlockReport of a storage into multiple RPC > > > Key: HDFS-9011 > URL: https://issues.apache.org/jira/browse/HDFS-9011 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jing Zhao >Assignee: Jing Zhao > Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch > > > Currently if a DataNode has too many blocks (more than 1m by default), it > sends multiple RPC to the NameNode for the block report, each RPC contains > report for a single storage. However, in practice we've seen sometimes even a > single storage can contains large amount of blocks and the report even > exceeds the max RPC data length. It may be helpful to support sending > multiple RPC for the block report of a storage. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
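The ordering question above can be illustrated with a small sketch: the receiver accepts an RPC only when its index is the next one expected for the current report, and rejects anything out of order (e.g. a dropped-and-resent RPC arriving late). This is hypothetical logic with invented names, not the HDFS-9011 patch:

```java
// Hypothetical sketch of enforcing block-report RPC order per report
// context: each RPC carries (reportId, curRpc). Only the next expected
// index is processed; out-of-order arrivals are refused so the sender
// must retransmit in order.
class BlockReportTracker {
    private long currentReportId = -1;
    private int nextExpectedIndex = 0;

    // Returns true if this RPC may be processed now.
    boolean accept(long reportId, int curRpc) {
        if (reportId != currentReportId) {
            // A new report supersedes any half-finished one.
            currentReportId = reportId;
            nextExpectedIndex = 0;
        }
        if (curRpc != nextExpectedIndex) {
            return false; // out of order: reject rather than apply stale state
        }
        nextExpectedIndex++;
        return true;
    }
}
```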
[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots
[ https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730738#comment-14730738 ] Gautam Gopalakrishnan commented on HDFS-8986: - [~jagadesh.kiran] I was not referring to the {{.snapshot}} directory itself. Admins can turn snapshots on and off on a directory (or a parent), and a normal user's use of the {{-du}} command will vary depending on how many snapshots hold older blocks. To an end user the use of snapshots should be transparent; in this case it is not. Let's say home directories are under {{/user}} and you as the hdfs admin enable snapshots on {{/user}}. Jane Doe could figure out how much space she uses by running a {{-du}} on {{/user/jdoe}}. Now with snapshots enabled, this measurement is no longer useful. End users shouldn't care whether snapshots are on or off; the {{-du}} command should work as it always has. Alternatively, users should have a tool that allows them to measure usage within a directory excluding snapshots. Does this help? > Add option to -du to calculate directory space usage excluding snapshots > > > Key: HDFS-8986 > URL: https://issues.apache.org/jira/browse/HDFS-8986 > Project: Hadoop HDFS > Issue Type: Improvement > Components: snapshots >Reporter: Gautam Gopalakrishnan >Assignee: Jagadesh Kiran N > > When running {{hadoop fs -du}} on a snapshotted directory (or one of its > children), the report includes space consumed by blocks that are only present > in the snapshots. This is confusing for end users. > {noformat} > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 799.7 M 2.3 G /tmp/parent > 799.7 M 2.3 G /tmp/parent/sub1 > $ hdfs dfs -createSnapshot /tmp/parent snap1 > Created snapshot /tmp/parent/.snapshot/snap1 > $ hadoop fs -rm -skipTrash /tmp/parent/sub1/* > ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 799.7 M 2.3 G /tmp/parent > 799.7 M 2.3 G /tmp/parent/sub1 > $ hdfs dfs -deleteSnapshot /tmp/parent snap1 > $ hadoop fs -du -h -s /tmp/parent /tmp/parent/* > 0 0 /tmp/parent > 0 0 /tmp/parent/sub1 > {noformat} > It would be helpful if we had a flag, say -X, to exclude any snapshot related > disk usage in the output -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9024) Deprecate TotalFiles metric
[ https://issues.apache.org/jira/browse/HDFS-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14730894#comment-14730894 ] Hadoop QA commented on HDFS-9024: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 23m 3s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:green}+1{color} | javac | 7m 51s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 10m 7s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | site | 3m 2s | Site still builds. | | {color:red}-1{color} | checkstyle | 2m 26s | The applied patch generated 2 new checkstyle issues (total was 344, now 345). | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 28s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 4m 23s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | common tests | 23m 42s | Tests passed in hadoop-common. | | {color:red}-1{color} | hdfs tests | 188m 46s | Tests failed in hadoop-hdfs. 
| | | | 265m 50s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager | | | hadoop.hdfs.server.namenode.TestFileTruncate | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754173/HDFS-9024.001.patch | | Optional Tests | site javadoc javac unit findbugs checkstyle | | git revision | trunk / 40d222e | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12305/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt | | hadoop-common test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12305/artifact/patchprocess/testrun_hadoop-common.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12305/artifact/patchprocess/testrun_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12305/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12305/console | This message was automatically generated. > Deprecate TotalFiles metric > --- > > Key: HDFS-9024 > URL: https://issues.apache.org/jira/browse/HDFS-9024 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Akira AJISAKA >Assignee: Akira AJISAKA > Labels: metrics > Attachments: HDFS-9024.001.patch > > > There are two metrics (TotalFiles and FilesTotal) which are the same. In > HDFS-5165, we decided to remove TotalFiles but we need to deprecate the > metric before removing it. This issue is to deprecate the metric. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HDFS-9025: Attachment: HDFS-9025.patch fix minor problems. > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HDFS-9025: Status: Patch Available (was: Open) > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9025) fix compilation issues on arch linux
Owen O'Malley created HDFS-9025: --- Summary: fix compilation issues on arch linux Key: HDFS-9025 URL: https://issues.apache.org/jira/browse/HDFS-9025 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Owen O'Malley There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley reassigned HDFS-9025: --- Assignee: Owen O'Malley > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731116#comment-14731116 ] Haohui Mai commented on HDFS-9025: -- Thanks Owen for the patch. Instead of linking against pthread directly, it is better to use {{${CMAKE_THREAD_LIBS_INIT}}}. I uploaded a patch reflecting this trivial change. > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
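The {{${CMAKE_THREAD_LIBS_INIT}}} suggestion above amounts to letting CMake's FindThreads module supply the platform's thread library instead of hardcoding pthread. A minimal sketch, with an illustrative target name (the real targets in the HDFS-8707 build may differ):

```cmake
# Ask CMake's FindThreads module for the platform's thread library
# rather than hardcoding -lpthread, which is not portable across
# platforms and toolchains.
find_package(Threads REQUIRED)

add_executable(example_tool example.cc)   # illustrative target name
target_link_libraries(example_tool ${CMAKE_THREAD_LIBS_INIT})
```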
[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
[ https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9010: Attachment: HDFS-9010.004.patch I think Jenkins is not stable. The v4 patch is rebased on the {{trunk}} branch to trigger Jenkins again for the unit tests. > Replace NameNode.DEFAULT_PORT with > HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key > > > Key: HDFS-9010 > URL: https://issues.apache.org/jira/browse/HDFS-9010 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, > HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch > > > The {{NameNode.DEFAULT_PORT}} static attribute is stale as we use the > {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value. > This jira tracks the effort of replacing the {{NameNode.DEFAULT_PORT}} with > {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark > the {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it totally. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731128#comment-14731128 ] Haohui Mai commented on HDFS-9012: -- +1 > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. > We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9026) Support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HDFS-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nemanja Matkovic updated HDFS-9026: --- Status: Patch Available (was: Open) > Support for include/exclude lists on IPv6 setup > --- > > Key: HDFS-9026 > URL: https://issues.apache.org/jira/browse/HDFS-9026 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode > Environment: This affects only IPv6 cluster setup >Reporter: Nemanja Matkovic >Assignee: Nemanja Matkovic > Labels: ipv6 > Original Estimate: 168h > Remaining Estimate: 168h > > This is a tracking item for having e2e IPv6 support in HDFS. > Nate did great ground work in HDFS-8078 but for having the whole feature working > e2e this is one of the items missing. > Basically today NN won't be able to parse IPv6 addresses if they are present > in the include or exclude list. > Patch has a dependency (and has been tested on IPv6 only cluster) on top of > HDFS-8078.14.patch > This should be committed to HADOOP-11890 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731160#comment-14731160 ] Hudson commented on HDFS-9012: -- FAILURE: Integrated in Hadoop-trunk-Commit #8404 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8404/]) HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu. (wheat9: rev d16c4eee186492608ffeb1c2e83f437000cc64f6) * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. 
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731069#comment-14731069 ] Hadoop QA commented on HDFS-9025: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754226/HDFS-9025.patch | | Optional Tests | javac unit | | git revision | trunk / 6eaca2e | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12307/console | This message was automatically generated. > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8383) Tolerate multiple failures in DFSStripedOutputStream
[ https://issues.apache.org/jira/browse/HDFS-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Walter Su updated HDFS-8383: Attachment: HDFS-8383.01.patch Thanks Zhe for your very good advice. I moved {{BlockRecoveryTrigger}} inside {{Coordinator}} as you said. I have a feeling {{Coordinator}} can do other jobs, so I made {{Coordinator}} a service. I added the javadocs you posted and changed them a little bit. Hope you are ok with that. bq. Why not incrementing numScheduled if it's already positive? I made it a {{boolean}}. The thing is, assume the horizontal axis represents time, {noformat} streamer#1 failed --> scheduled --> recovery happening (would bump GS to 1002) streamer#2 failed --> scheduled --> waiting streamer#3 failed --> scheduled --> waiting {noformat} the failures of #2 and #3 can be processed together and we bump GS to 1003. We only need to schedule once. > Tolerate multiple failures in DFSStripedOutputStream > > > Key: HDFS-8383 > URL: https://issues.apache.org/jira/browse/HDFS-8383 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo Nicholas Sze >Assignee: Walter Su > Attachments: HDFS-8383.00.patch, HDFS-8383.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
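The schedule-once behavior described above (multiple streamer failures collapsing into a single pending recovery) can be sketched with a boolean flag roughly like this; the class and method names are assumptions for illustration, not the actual HDFS-8383 patch:

```java
/**
 * Illustrative sketch of the "schedule once" idea: while a block
 * recovery is pending, further streamer failures do not schedule
 * additional recoveries; they are processed together by the pending
 * one, which bumps the generation stamp once for all of them.
 */
public class RecoveryTrigger {
    private boolean recoveryScheduled = false;

    /** Returns true only when this failure schedules a new recovery. */
    public synchronized boolean onStreamerFailure() {
        if (recoveryScheduled) {
            return false; // already pending; handled together later
        }
        recoveryScheduled = true;
        return true;
    }

    /**
     * Called when the pending recovery starts processing failures;
     * failures arriving after this point need a new recovery round.
     */
    public synchronized void recoveryStarted() {
        recoveryScheduled = false;
    }
}
```

This is why a boolean suffices where a counter would over-schedule: all failures that arrive while one recovery is pending are absorbed into that single round.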
[jira] [Commented] (HDFS-9025) Fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731213#comment-14731213 ] Haohui Mai commented on HDFS-9025: -- The v2 patch pokes maven to get jenkins running. > Fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, > HDFS-9025.HDFS-8707.002.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9025) Fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-9025: - Attachment: HDFS-9025.HDFS-8707.002.patch > Fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, > HDFS-9025.HDFS-8707.002.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-8984: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I've committed the patch to trunk and branch-2. Thanks for the reviews. > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch > > > Currently {{FSNamesystem}} controls whether the replication queue should be > populated based on whether the NN is in safe mode or whether it is an active > NN. > Replication is a concept on the block management layer. It is more natural to > place the functionality in the {{BlockManager}} class. > This jira proposes to move these methods to the {{BlockManager}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-8981: -- Resolution: Fixed Hadoop Flags: Incompatible change,Reviewed Fix Version/s: 3.0.0 Release Note: (was: getSoftwareVersion method would replace original getVersion method, which returns the version string. The new getVersion method would return both version string and revision string) Status: Resolved (was: Patch Available) I have committed it to trunk. Thanks [~l201514] for the contribution and [~sjlee0] and [~wheat9] for the review. > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Fix For: 3.0.0 > > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > to be consistent with namenode jmx, datanode jmx should also output revision > number -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9022: Attachment: HDFS-9022.001.patch > Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client > -- > > Key: HDFS-9022 > URL: https://issues.apache.org/jira/browse/HDFS-9022 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch > > > The static helper methods in NameNodes are used in {{hdfs-client}} module. > For example, it's used by the {{DFSClient}} and {{NameNodeProxies}} classes > which are being moved to {{hadoop-hdfs-client}} module. Meanwhile, we should > keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module. > This jira tracks the effort of moving the following static helper methods out > of {{NameNode}} and thus {{hadoop-hdfs}} module. A good place to put these > methods is the {{DFSUtilClient}} class: > {code} > public static InetSocketAddress getAddress(String address); > public static InetSocketAddress getAddress(Configuration conf); > public static InetSocketAddress getAddress(URI filesystemURI); > public static URI getUri(InetSocketAddress namenode); > {code} > Be cautious not to bring new checkstyle warnings. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7929) inotify unable fetch pre-upgrade edit log segments once upgrade starts
[ https://issues.apache.org/jira/browse/HDFS-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731193#comment-14731193 ] Zhe Zhang commented on HDFS-7929: - Good question Sangjin. HDFS-8846 is a test-only change and I think it's OK to leave it to the next release. > inotify unable fetch pre-upgrade edit log segments once upgrade starts > -- > > Key: HDFS-7929 > URL: https://issues.apache.org/jira/browse/HDFS-7929 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Labels: 2.6.1-candidate > Fix For: 2.7.0, 2.6.1 > > Attachments: HDFS-7929-000.patch, HDFS-7929-001.patch, > HDFS-7929-002.patch, HDFS-7929-003.patch > > > inotify is often used to periodically poll HDFS events. However, once an HDFS > upgrade has started, edit logs are moved to /previous on the NN, which is not > accessible. Moreover, once the upgrade is finalized /previous is currently > lost forever. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8128) hadoop-hdfs-client dependency convergence error
[ https://issues.apache.org/jira/browse/HDFS-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731241#comment-14731241 ] Haohui Mai commented on HDFS-8128: -- I'm unsure whether this is the right thing to do as hadoop-hdfs-client depends on hadoop-annotations. > hadoop-hdfs-client dependency convergence error > --- > > Key: HDFS-8128 > URL: https://issues.apache.org/jira/browse/HDFS-8128 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Tsz Wo Nicholas Sze >Assignee: Haohui Mai > > Found the following in > https://builds.apache.org/job/PreCommit-HDFS-Build/10258/consoleFull > {noformat} > [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence > failed with message: > Failed while enforcing releasability the error(s) are [ > Dependency convergence error for > org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT paths to dependency are: > +-org.apache.hadoop:hadoop-hdfs-client:3.0.0-SNAPSHOT > +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT > +-org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT > and > +-org.apache.hadoop:hadoop-hdfs-client:3.0.0-SNAPSHOT > +-org.apache.hadoop:hadoop-annotations:3.0.0-20150410.234534-6484 > ] > [INFO] > > [INFO] BUILD FAILURE > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731300#comment-14731300 ] Hudson commented on HDFS-8384: -- FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #353 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/353/]) HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao. (jing9: rev 8928729c80af0a154524e06fb13ed9b191986a78) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, > HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actual problem is that the image could be corrupted by bugs like > HDFS-7587. We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731303#comment-14731303 ] Hudson commented on HDFS-9012: -- FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #353 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/353/]) HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu. (wheat9: rev d16c4eee186492608ffeb1c2e83f437000cc64f6) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. 
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731301#comment-14731301 ] Hudson commented on HDFS-8981: -- FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #353 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/353/]) HDFS-8981. Adding revision to data node jmx getVersion() method. (Siqi Li via mingma) (mingma: rev 30db1adac31b07b34ce8e8d426cc139fb8cfad02) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Fix For: 3.0.0 > > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > to be consistent with namenode jmx, datanode jmx should also output revision > number -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731302#comment-14731302 ] Hudson commented on HDFS-8984: -- FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #353 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/353/]) HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai. (wheat9: rev 715b9c649982bff91d1f9eae656ba3b82178e1a3) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > 
Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch > > > Currently {{FSNamesystem}} controls whether the replication queue should be > populated based on whether the NN is in safe mode or whether it is an active > NN. > Replication is a concept on the block management layer. It is more natural to > place the functionality in the {{BlockManager}} class. > This jira proposes to move these methods to the {{BlockManager}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-9026) Support for include/exclude lists on IPv6 setup
Nemanja Matkovic created HDFS-9026: -- Summary: Support for include/exclude lists on IPv6 setup Key: HDFS-9026 URL: https://issues.apache.org/jira/browse/HDFS-9026 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Environment: This affects only IPv6 cluster setup Reporter: Nemanja Matkovic Assignee: Nemanja Matkovic This is a tracking item for having e2e IPv6 support in HDFS. Nate did great groundwork in HDFS-8078, but this is one of the items still missing for the whole feature to work e2e. Basically, today the NN won't be able to parse IPv6 addresses if they are present in the include or exclude list. The patch has a dependency on HDFS-8078.14.patch (and has been tested on an IPv6-only cluster). This should be committed to the HADOOP-11890 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
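The parsing gap described above comes down to recognizing bracketed IPv6 literals in host-list entries. As a hedged illustration only (this is not the HDFS-9026 patch; the class and method names are made up), a host:port splitter that tolerates IPv6 might look like:

```java
// Hypothetical sketch, not the HDFS-9026 patch: split a host-list entry
// into host and port, accepting bracketed IPv6 literals alongside
// IPv4 addresses and hostnames.
public final class HostEntryParser {
  // Returns {host, port}; port is "" when the entry carries none.
  public static String[] parse(String entry) {
    if (entry.startsWith("[")) {
      // Bracketed IPv6 literal, e.g. "[::1]:9866" or "[fe80::1]"
      int close = entry.indexOf(']');
      String host = entry.substring(1, close);
      String rest = entry.substring(close + 1);
      return new String[] { host, rest.startsWith(":") ? rest.substring(1) : "" };
    }
    int colon = entry.lastIndexOf(':');
    if (colon >= 0 && entry.indexOf(':') == colon) {
      // Exactly one colon: hostname/IPv4 plus port, e.g. "10.0.0.1:9866"
      return new String[] { entry.substring(0, colon), entry.substring(colon + 1) };
    }
    // No colon (bare hostname) or several colons (bare IPv6 literal):
    // treat the whole entry as the host rather than misreading a port.
    return new String[] { entry, "" };
  }
}
```

The key point of the sketch is the last branch: a naive split on the last colon would chop the final group off a bare IPv6 literal and treat it as a port.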
[jira] [Updated] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-9025: - Attachment: HDFS-9025.HDFS-8707.001.patch > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9025) Fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-9025: - Summary: Fix compilation issues on arch linux (was: fix compilation issues on arch linux) > Fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9026) Support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HDFS-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731175#comment-14731175 ] Hadoop QA commented on HDFS-9026: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 15m 48s | Findbugs (version ) appears to be broken on trunk. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. | | {color:red}-1{color} | javac | 1m 40s | The patch appears to cause the build to fail. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754236/HDFS-9026-1.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / d16c4ee | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12310/console | This message was automatically generated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-8384: Fix Version/s: 2.8.0 > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, > HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actual problem is that the image could be corrupted by bugs like > HDFS-7587. We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
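The stack trace quoted above comes from a strict Preconditions.checkState firing while the NN collects under-construction files during image saving. As a rough, hedged sketch of the tolerant behavior the issue asks for (the class and method names here are illustrative stand-ins, not the actual LeaseManager code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative stand-in, not the real LeaseManager: collect leased paths
// that are genuinely under construction, skipping inconsistent entries
// instead of aborting startup with an IllegalStateException.
public final class LeaseScanSketch {
  // leasedPaths maps path -> "is this file actually under construction?"
  public static List<String> collectUnderConstruction(Map<String, Boolean> leasedPaths) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, Boolean> e : leasedPaths.entrySet()) {
      if (!e.getValue()) {
        // Old behavior: a checkState(...) here throws and the NN cannot
        // start. Tolerant behavior: warn and skip the bad entry so the
        // NN finishes startup and the problematic file can be deleted.
        continue;
      }
      result.add(e.getKey());
    }
    return result;
  }
}
```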
[jira] [Updated] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-8384: Attachment: HDFS-8384-branch-2.7.patch HDFS-8384-branch-2.6.patch I've committed the patch to trunk and branch-2. The patches for 2.6.1 and 2.7.2 are also uploaded. [~szetszwo], could you please also take a look at them? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731288#comment-14731288 ] Hudson commented on HDFS-9012: -- FAILURE: Integrated in Hadoop-Yarn-trunk #1084 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1084/]) HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu. (wheat9: rev d16c4eee186492608ffeb1c2e83f437000cc64f6) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. 
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not yet used > in the {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9026) Support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HDFS-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-9026: --- Labels: ipv6 (was: ) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
[ https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731154#comment-14731154 ] Mingliang Liu commented on HDFS-9010: - OK. Glad to know. Thanks [~wheat9] > Replace NameNode.DEFAULT_PORT with > HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key > > > Key: HDFS-9010 > URL: https://issues.apache.org/jira/browse/HDFS-9010 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, > HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch > > > The {{NameNode.DEFAULT_PORT}} static attribute is stale as we use > {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value. > This jira tracks the effort of replacing the {{NameNode.DEFAULT_PORT}} with > {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark > the {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it totally. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731224#comment-14731224 ] Jing Zhao commented on HDFS-8384: - Thanks for the review, Nicholas! Looks like the failed tests are unrelated. I will commit the patch shortly. I will also post a patch for 2.7.2/2.6.1 (before HDFS-6757). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siqi Li updated HDFS-8981: -- Release Note: The getSoftwareVersion method would replace the original getVersion method, which returns only the version string. The new getVersion method would return both the version string and the revision string. > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > To be consistent with the namenode jmx, the datanode jmx should also output the revision > number. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
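The release note above describes a split between a version-only accessor and a combined version-plus-revision one. A minimal hedged sketch of that shape (the method names mirror the release note, but VERSION, REVISION, and the exact output format are illustrative stand-ins, not values from a real build):

```java
// Hedged sketch of the getVersion()/getSoftwareVersion() split described
// in the HDFS-8981 release note. VERSION and REVISION are assumed,
// hard-coded stand-ins; a real datanode would read them from build info.
public final class VersionInfoSketch {
  static final String VERSION = "2.8.0";      // stand-in value
  static final String REVISION = "30db1ad";   // stand-in value

  // Keeps the old behavior: version string only.
  public static String getSoftwareVersion() {
    return VERSION;
  }

  // New behavior: version plus revision. The "version, rREVISION" form
  // is an assumption made for this sketch, not the confirmed format.
  public static String getVersion() {
    return VERSION + ", r" + REVISION;
  }
}
```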
[jira] [Updated] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-8384: Target Version/s: 2.6.1, 2.7.2 (was: 2.8.0) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9025) Fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bob Hansen updated HDFS-9025: - Attachment: HDFS-9025.HDFS-8707.003.patch Updated the patch to catch when the dev is building from libhdfspp/, not native/. This can save several hours, as it turns out. > Fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, > HDFS-9025.HDFS-8707.002.patch, HDFS-9025.HDFS-8707.003.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731325#comment-14731325 ] Hudson commented on HDFS-8384: -- FAILURE: Integrated in Hadoop-trunk-Commit #8405 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8405/]) HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao. (jing9: rev 8928729c80af0a154524e06fb13ed9b191986a78) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731348#comment-14731348 ] Hadoop QA commented on HDFS-8384: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754250/HDFS-8384-branch-2.7.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | branch-2 / 67bce1e | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12315/console | This message was automatically generated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9025) Fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731359#comment-14731359 ] Hadoop QA commented on HDFS-9025: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 15m 20s | Pre-patch HDFS-8707 compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:red}-1{color} | javac | 1m 26s | The patch appears to cause the build to fail. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754254/HDFS-9025.HDFS-8707.003.patch | | Optional Tests | javadoc javac unit | | git revision | HDFS-8707 / 2a98ab5 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12314/console | This message was automatically generated. > Fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, > HDFS-9025.HDFS-8707.002.patch, HDFS-9025.HDFS-8707.003.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
[ https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9027: Component/s: (was: build) > Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method > --- > > Key: HDFS-9027 > URL: https://issues.apache.org/jira/browse/HDFS-9027 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mingliang Liu >Assignee: Mingliang Liu > > In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} > class checks whether the HDFS file is lazy persist. It does two things: > 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which > builds an array of {{BlockStoragePolicy}} internally > 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by > policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}} > Code sample as follows: > {code} > private static final BlockStoragePolicySuite blockStoragePolicySuite = > BlockStoragePolicySuite.createDefaultSuite(); > static boolean isLazyPersist(HdfsFileStatus stat) { > final BlockStoragePolicy p = blockStoragePolicySuite.getPolicy( > HdfsConstants.MEMORY_STORAGE_POLICY_NAME); > return p != null && stat.getStoragePolicy() == p.getId(); > } > {code} > This has two side effects: > 1. It takes time to iterate the pre-built block storage policy array to > find the _same_ policy every time, even though only its id matters (we > only need to compare the file status policy id with the lazy persist policy id) > 2. The {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former > should be moved to {{hadoop-hdfs-client}} module, while the latter can stay > in {{hadoop-hdfs}} module. 
> Actually, we have the block storage policy IDs, which can be used to compare > with HDFS file status' policy id, as following: > {code} > static boolean isLazyPersist(HdfsFileStatus stat) { > return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID; > } > {code} > This way, we only need to move the block storage policies' IDs from > {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} > ({{hadoop-hdfs-client}} module). > Another reason we should move those block storage policy IDs is that the > block storage policy names were moved to {{HdfsConstants}} already. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
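The proposed simplification above can be sketched as a standalone snippet. Note this is a hedged illustration: the policy id value 15 is assumed here for demonstration, and in the real code the constant would live in {{HdfsConstants}} and the id would come from {{HdfsFileStatus.getStoragePolicy()}}.

```java
// Sketch of the HDFS-9027 simplification with a stand-in constant.
// The id value is assumed for illustration only.
public final class LazyPersistCheck {
  static final byte MEMORY_STORAGE_POLICY_ID = 15;  // assumed value

  // Before: look LAZY_PERSIST up by name in a BlockStoragePolicySuite and
  // compare ids. After: compare the file's policy id to the constant
  // directly, avoiding the per-call scan of the policy array.
  public static boolean isLazyPersist(byte fileStoragePolicyId) {
    return fileStoragePolicyId == MEMORY_STORAGE_POLICY_ID;
  }
}
```

Besides being cheaper per call, the direct comparison removes the {{DataStreamer}} dependency on the server-side suite class, which is the module-split motivation the description gives.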
[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731457#comment-14731457 ] Hudson commented on HDFS-8984: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #2274 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2274/]) HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai. (wheat9: rev 715b9c649982bff91d1f9eae656ba3b82178e1a3) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > Attachments: 
HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731456#comment-14731456 ] Hudson commented on HDFS-8981: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #2274 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2274/]) HDFS-8981. Adding revision to data node jmx getVersion() method. (Siqi Li via mingma) (mingma: rev 30db1adac31b07b34ce8e8d426cc139fb8cfad02) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Fix For: 3.0.0 > > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > to be consistent with namenode jmx, datanode jmx should also output revision > number -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731473#comment-14731473 ] Hudson commented on HDFS-8981: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #1085 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1085/]) HDFS-8981. Adding revision to data node jmx getVersion() method. (Siqi Li via mingma) (mingma: rev 30db1adac31b07b34ce8e8d426cc139fb8cfad02) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Fix For: 3.0.0 > > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > to be consistent with namenode jmx, datanode jmx should also output revision > number -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731472#comment-14731472 ] Hudson commented on HDFS-8384: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #1085 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1085/]) HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao. (jing9: rev 8928729c80af0a154524e06fb13ed9b191986a78) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731474#comment-14731474 ] Hudson commented on HDFS-8984: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #1085 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/1085/]) HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai. (wheat9: rev 715b9c649982bff91d1f9eae656ba3b82178e1a3) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > Attachments: 
HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch > > > Currently {{FSNamesystem}} controls whether the replication queue should be > populated based on whether the NN is in safe mode or whether it is an active > NN. > Replication is a concept on the block management layer. It is more natural to > place the functionality in the {{BlockManager}} class. > This jira proposes to move these methods to the {{BlockManager}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731326#comment-14731326 ] Hudson commented on HDFS-8981: -- FAILURE: Integrated in Hadoop-trunk-Commit #8405 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8405/]) HDFS-8981. Adding revision to data node jmx getVersion() method. (Siqi Li via mingma) (mingma: rev 30db1adac31b07b34ce8e8d426cc139fb8cfad02) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Fix For: 3.0.0 > > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > to be consistent with namenode jmx, datanode jmx should also output revision > number -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731327#comment-14731327 ] Hudson commented on HDFS-8984: -- FAILURE: Integrated in Hadoop-trunk-Commit #8405 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/8405/]) HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai. (wheat9: rev 715b9c649982bff91d1f9eae656ba3b82178e1a3) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > Attachments: 
HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch > > > Currently {{FSNamesystem}} controls whether the replication queue should be > populated based on whether the NN is in safe mode or whether it is an active > NN. > Replication is a concept on the block management layer. It is more natural to > place the functionality in the {{BlockManager}} class. > This jira proposes to move these methods to the {{BlockManager}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731386#comment-14731386 ] Hudson commented on HDFS-8384: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #347 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/347/]) HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao. (jing9: rev 8928729c80af0a154524e06fb13ed9b191986a78) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, > HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actual problem is that the image could be corrupted by bugs like > HDFS-7587. We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731387#comment-14731387 ] Hudson commented on HDFS-8984: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #347 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/347/]) HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai. (wheat9: rev 715b9c649982bff91d1f9eae656ba3b82178e1a3) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > 
Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch > > > Currently {{FSNamesystem}} controls whether the replication queue should be > populated based on whether the NN is in safe mode or whether it is an active > NN. > Replication is a concept on the block management layer. It is more natural to > place the functionality in the {{BlockManager}} class. > This jira proposes to move these methods to the {{BlockManager}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731388#comment-14731388 ] Hudson commented on HDFS-9012: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #347 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/347/]) HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu. (wheat9: rev d16c4eee186492608ffeb1c2e83f437000cc64f6) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. 
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731410#comment-14731410 ] Hudson commented on HDFS-8384: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2296 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2296/]) HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao. (jing9: rev 8928729c80af0a154524e06fb13ed9b191986a78) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, > HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actual problem is that the image could be corrupted by bugs like > HDFS-7587. We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8984) Move replication queues related methods in FSNamesystem to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731411#comment-14731411 ] Hudson commented on HDFS-8984: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2296 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2296/]) HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai. (wheat9: rev 715b9c649982bff91d1f9eae656ba3b82178e1a3) * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeMode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Namesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java > Move replication queues related methods in FSNamesystem to BlockManager > --- > > Key: HDFS-8984 > URL: https://issues.apache.org/jira/browse/HDFS-8984 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Fix For: 2.8.0 > > 
Attachments: HDFS-8984.000.patch, HDFS-8984.001.patch, > HDFS-8984.002.patch, HDFS-8984.003.patch, HDFS-8984.004.patch > > > Currently {{FSNamesystem}} controls whether the replication queue should be > populated based on whether the NN is in safe mode or whether it is an active > NN. > Replication is a concept on the block management layer. It is more natural to > place the functionality in the {{BlockManager}} class. > This jira proposes to move these methods to the {{BlockManager}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731412#comment-14731412 ] Hudson commented on HDFS-9012: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2296 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2296/]) HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu. (wheat9: rev d16c4eee186492608ffeb1c2e83f437000cc64f6) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. 
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC
[ https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-9011: Attachment: HDFS-9011.002.patch Thanks for the review, Nicholas! Updated the patch to fix the failed unit tests. Regarding the block report order: currently the DN sends the block report RPCs sequentially in a loop, so it does not appear possible for the NN to receive out-of-order reports. {code} for (int r = 0; r < reports.size(); r++) { StorageBlockReport singleReport[] = { reports.get(r) }; DatanodeCommand cmd = bpNamenode.blockReport( bpRegistration, bpos.getBlockPoolId(), singleReport, new BlockReportContext(reports.size(), r, reportId, fullBrLeaseId)); numReportsSent++; numRPCs++; if (cmd != null) { cmds.add(cmd); } } {code} > Support splitting BlockReport of a storage into multiple RPC > > > Key: HDFS-9011 > URL: https://issues.apache.org/jira/browse/HDFS-9011 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jing Zhao >Assignee: Jing Zhao > Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, > HDFS-9011.002.patch > > > Currently if a DataNode has too many blocks (more than 1m by default), it > sends multiple RPCs to the NameNode for the block report, each RPC containing > the report for a single storage. However, in practice we've seen that sometimes even a > single storage can contain a large amount of blocks and the report even > exceeds the max RPC data length. It may be helpful to support sending > multiple RPCs for the block report of a storage. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
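The ordering argument above rests on the {{(curRpc, totalRpcs)}} pair carried in {{BlockReportContext}}. A minimal, hypothetical sketch of how a receiver could use that pair to detect a missing or out-of-order piece of a split report (class and method names here are illustrative, not the actual NameNode API):

```java
// Hypothetical tracker for a split block report: accepts per-RPC indices
// only in sequence, mirroring the (curRpc, totalRpcs) fields that the DN
// loop above puts into each BlockReportContext.
public class ReportOrderTracker {
    private int expected = 0;  // index of the next RPC we expect
    private final int total;   // total RPCs announced for this report

    public ReportOrderTracker(int total) {
        this.total = total;
    }

    /** Returns true iff rpcIndex is the next one in sequence. */
    public boolean accept(int rpcIndex) {
        if (rpcIndex != expected) {
            return false;      // out of order or a duplicate
        }
        expected++;
        return true;
    }

    /** True once every RPC of the split report has arrived in order. */
    public boolean complete() {
        return expected == total;
    }
}
```

Because the DN issues the RPCs one at a time in a single loop, such a tracker would only ever see indices 0, 1, 2, ... in order.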
[jira] [Commented] (HDFS-9025) Fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731317#comment-14731317 ] Hadoop QA commented on HDFS-9025: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 15m 32s | Pre-patch HDFS-8707 compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:red}-1{color} | javac | 1m 23s | The patch appears to cause the build to fail. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754254/HDFS-9025.HDFS-8707.003.patch | | Optional Tests | javadoc javac unit | | git revision | HDFS-8707 / 2a98ab5 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12312/console | This message was automatically generated. > Fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, > HDFS-9025.HDFS-8707.002.patch, HDFS-9025.HDFS-8707.003.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8967) Create a BlockManagerLock class to represent the lock used in the BlockManager
[ https://issues.apache.org/jira/browse/HDFS-8967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731369#comment-14731369 ] Jing Zhao commented on HDFS-8967: - I think the patch should work since we only provide a wrapper for the FSNamesystem lock. But I'm not sure the lock semantics are correct: for some functions in BlockManager we may still have to hold the NS lock based on the current implementation. So maybe we should continue this work in a feature branch. > Create a BlockManagerLock class to represent the lock used in the BlockManager > -- > > Key: HDFS-8967 > URL: https://issues.apache.org/jira/browse/HDFS-8967 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Haohui Mai >Assignee: Haohui Mai > Attachments: HDFS-8967.000.patch, HDFS-8967.001.patch, > HDFS-8967.002.patch > > > This jira proposes to create a {{BlockManagerLock}} class to represent the > lock used in {{BlockManager}}. > Currently it directly points to the {{FSNamesystem}} lock, thus there are no > functionality changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
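The "wrapper" idea under discussion can be sketched as follows: a block-manager-level lock class that, for now, simply delegates to the namesystem's existing read-write lock, so behavior is unchanged today but the delegate can later be swapped for a dedicated lock. The class and constructor shape below are illustrative assumptions, not the patch's actual API:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative BlockManagerLock-style wrapper: every lock operation is
// forwarded to the shared namesystem lock, so there is no functional
// change; only the indirection point is introduced.
public class DelegatingManagerLock {
    private final ReentrantReadWriteLock nsLock;

    public DelegatingManagerLock(ReentrantReadWriteLock nsLock) {
        this.nsLock = nsLock;
    }

    public Lock readLock() {
        return nsLock.readLock();   // delegate, do not create a new lock
    }

    public Lock writeLock() {
        return nsLock.writeLock();
    }
}
```

Since both locks are the same object underneath, code that takes the wrapper's write lock still excludes code holding the NS lock, which is exactly why the patch is behavior-preserving.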
[jira] [Created] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
Mingliang Liu created HDFS-9027: --- Summary: Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method Key: HDFS-9027 URL: https://issues.apache.org/jira/browse/HDFS-9027 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Mingliang Liu Assignee: Mingliang Liu In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} class checks whether the HDFS file is lazy persist. It does two things: 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which builds an array of {{BlockStoragePolicy}} internally 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}} Code sample as follows: {code} private static final BlockStoragePolicySuite blockStoragePolicySuite = BlockStoragePolicySuite.createDefaultSuite(); static boolean isLazyPersist(HdfsFileStatus stat) { final BlockStoragePolicy p = blockStoragePolicySuite.getPolicy( HdfsConstants.MEMORY_STORAGE_POLICY_NAME); return p != null && stat.getStoragePolicy() == p.getId(); } {code} This has two side effects: 1. It takes time to iterate the pre-built block storage policy array in order to find the _same_ policy every time, when only its id matters (as we need to compare the file status policy id with the lazy persist policy id) 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former should be moved to {{hadoop-hdfs-client}} module, while the latter can stay in {{hadoop-hdfs}} module. Actually, we have the block storage policy IDs, which can be used to compare with the HDFS file status' policy id, as follows: {code} static boolean isLazyPersist(HdfsFileStatus stat) { return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID; } {code} This way, we only need to move the block storage policies' IDs from {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} ({{hadoop-hdfs-client}} module). 
Another reason we should move those block storage policy IDs is that the block storage policy names were moved to {{HdfsConstants}} already. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
[ https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9027: Status: Patch Available (was: Open) > Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method > --- > > Key: HDFS-9027 > URL: https://issues.apache.org/jira/browse/HDFS-9027 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9027.000.patch > > > In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} > class checks whether the HDFS file is lazy persist. It does two things: > 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which > builds an array of {{BlockStoragePolicy}} internally > 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by > policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}} > Code sample as follows: > {code} > private static final BlockStoragePolicySuite blockStoragePolicySuite = > BlockStoragePolicySuite.createDefaultSuite(); > static boolean isLazyPersist(HdfsFileStatus stat) { > final BlockStoragePolicy p = blockStoragePolicySuite.getPolicy( > HdfsConstants.MEMORY_STORAGE_POLICY_NAME); > return p != null && stat.getStoragePolicy() == p.getId(); > } > {code} > This has two side effects: > 1. Takes time to iterate the pre-built block storage policy array in order to > find the _same_ policy every time whose id matters only (as we need to > compare the file status policy id with lazy persist policy id) > 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former > should be moved to {{hadoop-hdfs-client}} module, while the latter can stay > in {{hadoop-hdfs}} module. 
> Actually, we have the block storage policy IDs, which can be used to compare > with HDFS file status' policy id, as follows: > {code} > static boolean isLazyPersist(HdfsFileStatus stat) { > return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID; > } > {code} > This way, we only need to move the block storage policies' IDs from > {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} > ({{hadoop-hdfs-client}} module). > Another reason we should move those block storage policy IDs is that the > block storage policy names were moved to {{HdfsConstants}} already. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-8981: -- Release Note: The getSoftwareVersion method replaces the original getVersion method, which returned only the version string. The new getVersion method returns both the version string and the revision string. > Adding revision to data node jmx getVersion() method > > > Key: HDFS-8981 > URL: https://issues.apache.org/jira/browse/HDFS-8981 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Siqi Li >Assignee: Siqi Li >Priority: Minor > Fix For: 3.0.0 > > Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, > HDFS-8981.v3.patch, HDFS-8981.v4.patch > > > to be consistent with namenode jmx, datanode jmx should also output revision > number -- This message was sent by Atlassian JIRA (v6.3.4#6332)
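The release note's split between the two getters can be sketched as follows. This is a simplified stand-in for the DataNode MXBean, under the assumption that getSoftwareVersion() keeps the bare version string while getVersion() appends the revision; the class name and the exact output format are illustrative:

```java
// Illustrative bean mirroring the HDFS-8981 release note: the old
// getVersion() behavior moves to getSoftwareVersion(), and getVersion()
// now reports version plus source revision.
public class VersionBean {
    private final String version;
    private final String revision;

    public VersionBean(String version, String revision) {
        this.version = version;
        this.revision = revision;
    }

    /** Bare version string, as the old getVersion() returned. */
    public String getSoftwareVersion() {
        return version;
    }

    /** Version plus revision; the separator format is an assumption. */
    public String getVersion() {
        return version + ", r" + revision;
    }
}
```

JMX clients that only need the version string would switch to getSoftwareVersion(), which is why the note calls this out as a behavior change.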
[jira] [Commented] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
[ https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731337#comment-14731337 ] Hadoop QA commented on HDFS-9022: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 20m 57s | Pre-patch trunk compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 13 new or modified test files. | | {color:green}+1{color} | javac | 7m 54s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 10m 12s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 3m 22s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 3s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 41s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 6m 6s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 13s | Pre-build of native portion | | {color:red}-1{color} | mapreduce tests | 80m 51s | Tests failed in hadoop-mapreduce-client-jobclient. | | {color:red}-1{color} | hdfs tests | 0m 31s | Tests failed in hadoop-hdfs. | | {color:green}+1{color} | hdfs tests | 0m 32s | Tests passed in hadoop-hdfs-client. | | {color:red}-1{color} | hdfs tests | 1m 7s | Tests failed in hadoop-hdfs-nfs. 
| | | | 137m 28s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.mapreduce.v2.TestMRJobs | | | hadoop.mapreduce.security.TestMRCredentials | | | hadoop.hdfs.nfs.nfs3.TestWrites | | | hadoop.hdfs.nfs.TestMountd | | | hadoop.hdfs.nfs.nfs3.TestExportsTable | | | hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3 | | | hadoop.hdfs.nfs.nfs3.TestReaddir | | | hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege | | | hadoop.hdfs.nfs.nfs3.TestNfs3HttpServer | | Timed out tests | org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution | | | org.apache.hadoop.mapreduce.TestMapReduceLazyOutput | | Failed build | hadoop-hdfs | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754230/HDFS-9022.001.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / 6eaca2e | | hadoop-mapreduce-client-jobclient test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12309/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12309/artifact/patchprocess/testrun_hadoop-hdfs.txt | | hadoop-hdfs-client test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12309/artifact/patchprocess/testrun_hadoop-hdfs-client.txt | | hadoop-hdfs-nfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12309/artifact/patchprocess/testrun_hadoop-hdfs-nfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12309/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12309/console | This message was automatically generated. 
> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client > -- > > Key: HDFS-9022 > URL: https://issues.apache.org/jira/browse/HDFS-9022 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client, namenode >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch > > > The static helper methods in NameNodes are used in {{hdfs-client}} module. > For example, it's used by the {{DFSClient}} and {{NameNodeProxies}} classes > which are being moved to {{hadoop-hdfs-client}} module. Meanwhile, we should > keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module. > This jira tracks the effort of moving the following static helper methods out > of {{NameNode}} and thus {{hadoop-hdfs}} module. A good place to put these > methods is the {{DFSUtilClient}} class: > {code} > public static InetSocketAddress getAddress(String address); > public static InetSocketAddress getAddress(Configuration conf); > public static InetSocketAddress getAddress(URI filesystemURI); > public static URI getUri(InetSocketAddress namenode); >
[jira] [Updated] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
[ https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-9027: Attachment: HDFS-9027.000.patch > Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method > --- > > Key: HDFS-9027 > URL: https://issues.apache.org/jira/browse/HDFS-9027 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9027.000.patch > > > In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} > class checks whether the HDFS file is lazy persist. It does two things: > 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which > builds an array of {{BlockStoragePolicy}} internally > 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by > policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}} > Code sample as follows: > {code} > private static final BlockStoragePolicySuite blockStoragePolicySuite = > BlockStoragePolicySuite.createDefaultSuite(); > static boolean isLazyPersist(HdfsFileStatus stat) { > final BlockStoragePolicy p = blockStoragePolicySuite.getPolicy( > HdfsConstants.MEMORY_STORAGE_POLICY_NAME); > return p != null && stat.getStoragePolicy() == p.getId(); > } > {code} > This has two side effects: > 1. Takes time to iterate the pre-built block storage policy array in order to > find the _same_ policy every time, whose id alone matters (as we need to > compare the file status policy id with the lazy persist policy id) > 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. The former > should be moved to {{hadoop-hdfs-client}} module, while the latter can stay > in {{hadoop-hdfs}} module. 
> Actually, we have the block storage policy IDs, which can be used to compare > with the HDFS file status' policy id, as follows: > {code} > static boolean isLazyPersist(HdfsFileStatus stat) { > return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID; > } > {code} > This way, we only need to move the block storage policies' IDs from > {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} > ({{hadoop-hdfs-client}} module). > Another reason we should move those block storage policy IDs is that the > block storage policy names were moved to {{HdfsConstants}} already. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
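The refactor described above boils down to replacing a name-based lookup in the policy suite with a direct id comparison. A minimal self-contained sketch of the two approaches follows; the Policy class and the constant values (15 is the trunk value for LAZY_PERSIST at the time) are simplified stand-ins, not the real Hadoop API.

```java
import java.util.Arrays;
import java.util.List;

public class LazyPersistCheck {
    // Illustrative stand-ins for HdfsConstants values.
    static final byte MEMORY_STORAGE_POLICY_ID = 15;
    static final String MEMORY_STORAGE_POLICY_NAME = "LAZY_PERSIST";

    static class Policy {
        final byte id;
        final String name;
        Policy(byte id, String name) { this.id = id; this.name = name; }
    }

    // Old approach: scan the pre-built suite by name, then compare ids.
    static boolean isLazyPersistViaSuite(List<Policy> suite, byte statPolicyId) {
        for (Policy p : suite) {
            if (MEMORY_STORAGE_POLICY_NAME.equals(p.name)) {
                return statPolicyId == p.id;
            }
        }
        return false;
    }

    // Refactored approach: compare the file's policy id with the constant directly.
    static boolean isLazyPersist(byte statPolicyId) {
        return statPolicyId == MEMORY_STORAGE_POLICY_ID;
    }

    public static void main(String[] args) {
        List<Policy> suite = Arrays.asList(
            new Policy((byte) 12, "ALL_SSD"),
            new Policy(MEMORY_STORAGE_POLICY_ID, MEMORY_STORAGE_POLICY_NAME));
        // Both paths agree; the refactored one needs no suite and no iteration.
        System.out.println(isLazyPersistViaSuite(suite, (byte) 15));
        System.out.println(isLazyPersist((byte) 15));
    }
}
```

The direct comparison also removes the DataStreamer's import of BlockStoragePolicySuite, which is what unblocks the module move.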
[jira] [Commented] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731458#comment-14731458 ] Hudson commented on HDFS-9012: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #2274 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2274/]) HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu. (wheat9: rev d16c4eee186492608ffeb1c2e83f437000cc64f6) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java * hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. 
> We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731455#comment-14731455 ] Hudson commented on HDFS-8384: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #2274 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2274/]) HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao. (jing9: rev 8928729c80af0a154524e06fb13ed9b191986a78) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, > HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actual problem is that the image could be corrupted by bugs like > HDFS-7587. We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
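The tolerant behavior the issue asks for can be sketched as follows: instead of a Preconditions.checkState that aborts the image save, skip and log any leased file that is not under construction. This is a simplified stand-in (the map and method names are hypothetical), not the actual LeaseManager patch.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LeaseScan {
    // Hypothetical stand-in: path -> whether the leased inode is under construction.
    static List<String> filesUnderConstruction(Map<String, Boolean> leasedFiles) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : leasedFiles.entrySet()) {
            if (!e.getValue()) {
                // Old behavior: Preconditions.checkState threw IllegalStateException
                // here, aborting the image save. Tolerant behavior: warn and skip,
                // so the NN can start and the problematic file can be deleted later.
                System.err.println("Ignoring leased file not under construction: " + e.getKey());
                continue;
            }
            result.add(e.getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Boolean> leases = new LinkedHashMap<>();
        leases.put("/a/open-file", true);
        leases.put("/b/corrupt-lease", false); // would previously have crashed the NN
        System.out.println(filesUnderConstruction(leases)); // [/a/open-file]
    }
}
```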
[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC
[ https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731799#comment-14731799 ] Tsz Wo Nicholas Sze commented on HDFS-9011: --- It seems there is a bug: for each partial report RPC, the NN calls reportDiff(..), but reportDiff(..) assumes a full block report. I think the diff is incorrect for a partial report. In particular, the toRemove set may contain some blocks reported by other RPCs. > Support splitting BlockReport of a storage into multiple RPC > > > Key: HDFS-9011 > URL: https://issues.apache.org/jira/browse/HDFS-9011 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Jing Zhao >Assignee: Jing Zhao > Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, > HDFS-9011.002.patch > > > Currently, if a DataNode has too many blocks (more than 1m by default), it > sends multiple RPCs to the NameNode for the block report, each RPC containing the > report for a single storage. However, in practice we've seen that sometimes even a > single storage can contain a large amount of blocks and the report even > exceeds the max RPC data length. It may be helpful to support sending > multiple RPCs for the block report of a storage. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
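The bug Nicholas describes can be illustrated with plain sets. This sketch is a simplification of reportDiff's toRemove computation, not the actual NameNode code: it shows why diffing stored blocks against a single partial report incorrectly marks blocks that arrive in a later RPC for removal.

```java
import java.util.HashSet;
import java.util.Set;

public class PartialReportDiff {
    // Simplified reportDiff: anything stored but absent from the report is "toRemove".
    // Correct only when `reported` is the storage's FULL block report.
    static Set<String> toRemove(Set<String> stored, Set<String> reported) {
        Set<String> diff = new HashSet<>(stored);
        diff.removeAll(reported);
        return diff;
    }

    public static void main(String[] args) {
        Set<String> stored = Set.of("blk_1", "blk_2", "blk_3");
        // The storage's report is split across two RPCs.
        Set<String> rpc1 = Set.of("blk_1");
        Set<String> rpc2 = Set.of("blk_2", "blk_3");

        // Diffing against rpc1 alone marks blk_2 and blk_3 for removal,
        // even though they are reported in rpc2 -- the bug in question.
        System.out.println(toRemove(stored, rpc1));

        // A correct diff must accumulate all partial reports first.
        Set<String> full = new HashSet<>(rpc1);
        full.addAll(rpc2);
        System.out.println(toRemove(stored, full)); // empty
    }
}
```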
[jira] [Commented] (HDFS-8384) Allow NN to startup if there are files having a lease but are not under construction
[ https://issues.apache.org/jira/browse/HDFS-8384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731796#comment-14731796 ] Tsz Wo Nicholas Sze commented on HDFS-8384: --- +1 the 2.6 and 2.7 patches look good. > Allow NN to startup if there are files having a lease but are not under > construction > > > Key: HDFS-8384 > URL: https://issues.apache.org/jira/browse/HDFS-8384 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Tsz Wo Nicholas Sze >Assignee: Jing Zhao >Priority: Minor > Fix For: 2.8.0 > > Attachments: HDFS-8384-branch-2.6.patch, HDFS-8384-branch-2.7.patch, > HDFS-8384.000.patch > > > When there are files having a lease but are not under construction, NN will > fail to start up with > {code} > 15/05/12 00:36:31 ERROR namenode.FSImage: Unable to save image for > /hadoop/hdfs/namenode > java.lang.IllegalStateException > at > com.google.common.base.Preconditions.checkState(Preconditions.java:129) > at > org.apache.hadoop.hdfs.server.namenode.LeaseManager.getINodesUnderConstruction(LeaseManager.java:412) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesUnderConstruction(FSNamesystem.java:7124) > ... > {code} > The actual problem is that the image could be corrupted by bugs like > HDFS-7587. We should have an option/conf to allow NN to start up so that the > problematic files could possibly be deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9019) sticky bit permission denied error not informative enough
[ https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-9019: -- Priority: Minor (was: Major) Hadoop Flags: Reviewed Issue Type: Improvement (was: Bug) +1 the new patch looks good. Thanks. > sticky bit permission denied error not informative enough > - > > Key: HDFS-9019 > URL: https://issues.apache.org/jira/browse/HDFS-9019 > Project: Hadoop HDFS > Issue Type: Improvement > Components: security >Affects Versions: 2.6.0, 2.7.0, 2.7.1 >Reporter: Thejas M Nair >Assignee: Xiaoyu Yao >Priority: Minor > Labels: easyfix, newbie > Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch > > > The check for sticky bit permission in FSPermissionChecker.java prints only > the child file name and the current owner. > It does not print the owner of the file and the parent directory. It would > help to have that printed as well for ease of debugging permission issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9026) Support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HDFS-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nemanja Matkovic updated HDFS-9026: --- Attachment: HDFS-9026-1.patch Patch for this issue, stacked on top of the above-mentioned HDFS-8078 > Support for include/exclude lists on IPv6 setup > --- > > Key: HDFS-9026 > URL: https://issues.apache.org/jira/browse/HDFS-9026 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode > Environment: This affects only IPv6 cluster setup >Reporter: Nemanja Matkovic >Assignee: Nemanja Matkovic > Labels: ipv6 > Attachments: HDFS-9026-1.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > This is a tracking item for having e2e IPv6 support in HDFS. > Nate did great groundwork in HDFS-8078, but for having the whole feature working > e2e this is one of the items missing. > Basically, today the NN won't be able to parse IPv6 addresses if they are present > in the include or exclude list. > The patch has a dependency (and has been tested on an IPv6-only cluster) on top of > HDFS-8078.14.patch > This should be committed to the HADOOP-11890 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
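The parsing problem the issue describes comes from colon-delimited host entries: a naive entry.split(":") is ambiguous for bare IPv6 literals like 2001:db8::1. A common resolution is the bracketed form [addr]:port. The following sketch is a hypothetical illustration of that idea, not the HDFS-9026 patch.

```java
public class HostFileEntry {
    // Parse "host", "host:port", "[ipv6]:port", or a bare IPv6 literal.
    // Returns { host, port } with port == "" when absent.
    static String[] parse(String entry) {
        if (entry.startsWith("[")) {                     // bracketed IPv6
            int close = entry.indexOf(']');
            String host = entry.substring(1, close);
            String rest = entry.substring(close + 1);
            String port = rest.startsWith(":") ? rest.substring(1) : "";
            return new String[] { host, port };
        }
        int firstColon = entry.indexOf(':');
        if (firstColon >= 0 && entry.indexOf(':', firstColon + 1) >= 0) {
            return new String[] { entry, "" };           // bare IPv6, no port
        }
        if (firstColon >= 0) {                           // hostname:port or IPv4:port
            return new String[] { entry.substring(0, firstColon),
                                  entry.substring(firstColon + 1) };
        }
        return new String[] { entry, "" };               // bare hostname
    }

    public static void main(String[] args) {
        System.out.println(String.join("|", parse("dn1.example.com:9866")));
        System.out.println(String.join("|", parse("[2001:db8::1]:9866")));
        System.out.println(String.join("|", parse("2001:db8::1")));
    }
}
```

The two-colon check is what distinguishes a bare IPv6 literal from host:port; without it, every unbracketed IPv6 entry in the include/exclude file would be split at the wrong colon.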
[jira] [Updated] (HDFS-9012) Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module
[ https://issues.apache.org/jira/browse/HDFS-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-9012: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the contribution. > Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client > module > > > Key: HDFS-9012 > URL: https://issues.apache.org/jira/browse/HDFS-9012 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HDFS-9012.000.patch, HDFS-9012.001.patch, > HDFS-9012.002.patch > > > The {{package org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck}} > class is used in client module classes (e.g. > {{DataStreamer$ResponseProcessor}} in {{DFSInputStream}} and > {{DFSOutputStream}}). This jira tracks the effort of moving this class to > {{hadoop-hdfs-client}} module. > We should keep the static attribute {{OOB_TIMEOUT}} and helper method > {{getOOBTimeout}} in the {{hadoop-hdfs}} module as they're not used (by now) > in {{hadoop-hdfs-client}} module. Meanwhile, we don't create the > {{HdfsConfiguration}} statically if we can pass the correct {{conf}} object. > The checkstyle warnings can be addressed in > [HDFS-8979|https://issues.apache.org/jira/browse/HDFS-8979]. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9025) fix compilation issues on arch linux
[ https://issues.apache.org/jira/browse/HDFS-9025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731143#comment-14731143 ] Hadoop QA commented on HDFS-9025: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | pre-patch | 5m 29s | Pre-patch HDFS-8707 compilation is healthy. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | | {color:red}-1{color} | javac | 1m 23s | The patch appears to cause the build to fail. | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754231/HDFS-9025.HDFS-8707.001.patch | | Optional Tests | javac unit | | git revision | HDFS-8707 / 2a98ab5 | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12308/console | This message was automatically generated. > fix compilation issues on arch linux > > > Key: HDFS-9025 > URL: https://issues.apache.org/jira/browse/HDFS-9025 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs-client >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HDFS-9025.HDFS-8707.001.patch, HDFS-9025.patch > > > There are several compilation issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
[ https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731145#comment-14731145 ] Haohui Mai commented on HDFS-9010: -- Please do not submit the same patch multiple times. You can retrigger jenkins by canceling the patch and resubmitting it. > Replace NameNode.DEFAULT_PORT with > HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key > > > Key: HDFS-9010 > URL: https://issues.apache.org/jira/browse/HDFS-9010 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, > HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch > > > The {{NameNode.DEFAULT_PORT}} static attribute is stale as we use the > {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value. > This jira tracks the effort of replacing the {{NameNode.DEFAULT_PORT}} with > {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark > {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9026) Support for include/exclude lists on IPv6 setup
[ https://issues.apache.org/jira/browse/HDFS-9026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731147#comment-14731147 ] Nemanja Matkovic commented on HDFS-9026: Tagging [~nkedel] and [~eclark] as we're working together on the uber jira > Support for include/exclude lists on IPv6 setup > --- > > Key: HDFS-9026 > URL: https://issues.apache.org/jira/browse/HDFS-9026 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode > Environment: This affects only IPv6 cluster setup >Reporter: Nemanja Matkovic >Assignee: Nemanja Matkovic > Labels: ipv6 > Attachments: HDFS-9026-1.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > This is a tracking item for having e2e IPv6 support in HDFS. > Nate did great groundwork in HDFS-8078, but for having the whole feature working > e2e this is one of the items missing. > Basically, today the NN won't be able to parse IPv6 addresses if they are present > in the include or exclude list. > The patch has a dependency (and has been tested on an IPv6-only cluster) on top of > HDFS-8078.14.patch > This should be committed to the HADOOP-11890 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9018) Update the pom to add junit dependency and move TestXAttr to client project
[ https://issues.apache.org/jira/browse/HDFS-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731148#comment-14731148 ] Haohui Mai commented on HDFS-9018: -- Yes. Pulling in some client-based related unit tests would be very helpful. > Update the pom to add junit dependency and move TestXAttr to client project > --- > > Key: HDFS-9018 > URL: https://issues.apache.org/jira/browse/HDFS-9018 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: build >Reporter: Kanaka Kumar Avvaru >Assignee: Kanaka Kumar Avvaru > Attachments: HDFS-9018.patch > > > Update the pom to add junit dependency and move > {{org.apache.hadoop.fs.TestXAttr}} to client project to start with test > movement -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-9019) sticky bit permission denied error not informative enough
[ https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-9019: - Attachment: HDFS-9019.001.patch Thanks [~szetszwo] for reviewing. Patch v001 addresses your comments with unit tests. Sample exception message from unit test output: {code} 2015-09-04 11:16:49,111 [IPC Server handler 6 on 57099] INFO ipc.Server (Server.java:run(2244)) - IPC Server handler 6 on 57099, call org.apache.hadoop.hdfs.protocol.ClientProtocol.delete from 127.0.0.1:57122 Call#65 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit: user=rose, path="/tennant/contemporary/foo":theDoctor:supergroup:-rw-r--r--, parent="/tennant/contemporary":xyao:supergroup:drwxrwxrwt {code} > sticky bit permission denied error not informative enough > - > > Key: HDFS-9019 > URL: https://issues.apache.org/jira/browse/HDFS-9019 > Project: Hadoop HDFS > Issue Type: Bug > Components: security >Affects Versions: 2.6.0, 2.7.0, 2.7.1 >Reporter: Thejas M Nair >Assignee: Xiaoyu Yao > Labels: easyfix, newbie > Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch > > > The check for sticky bit permission in FSPermissionChecker.java prints only > the child file name and the current owner. > It does not print the owner of the file and the parent directory. It would > help to have that printed as well for ease of debugging permission issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
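The improvement shown in the sample exception above is to include both the child's owner/permissions and the parent directory's in the denial message. A minimal sketch of such a message builder follows; the method signature and the "owner:group:perm" attribute strings are hypothetical simplifications, not the FSPermissionChecker API.

```java
public class StickyBitError {
    // Hypothetical message builder mirroring the sample output above.
    // attrs strings are "owner:group:permissions" for the path and its parent.
    static String deniedMessage(String user, String path, String pathAttrs,
                                String parent, String parentAttrs) {
        return String.format(
            "Permission denied by sticky bit: user=%s, path=\"%s\":%s, parent=\"%s\":%s",
            user, path, pathAttrs, parent, parentAttrs);
    }

    public static void main(String[] args) {
        System.out.println(deniedMessage(
            "rose",
            "/tennant/contemporary/foo", "theDoctor:supergroup:-rw-r--r--",
            "/tennant/contemporary", "xyao:supergroup:drwxrwxrwt"));
    }
}
```

With the parent's owner in the message, an operator can immediately see that the sticky bit on the parent directory (drwxrwxrwt) is what blocked the delete, without re-running the command with extra debugging.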
[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
[ https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731496#comment-14731496 ] Hadoop QA commented on HDFS-9010: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 18m 1s | Findbugs (version 3.0.0) appears to be broken on trunk. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 6 new or modified test files. | | {color:green}+1{color} | javac | 9m 39s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 11m 44s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 26s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 1m 15s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 1s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 46s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 39s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 3m 47s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 39s | Pre-build of native portion | | {color:red}-1{color} | mapreduce tests | 76m 51s | Tests failed in hadoop-mapreduce-client-jobclient. | | {color:red}-1{color} | hdfs tests | 116m 34s | Tests failed in hadoop-hdfs. 
| | | | 244m 27s | | \\ \\ || Reason || Tests || | Failed unit tests | hadoop.mapreduce.lib.chain.TestChainErrors | | | hadoop.mapreduce.lib.input.TestLineRecordReaderJobs | | | hadoop.mapreduce.lib.map.TestMultithreadedMapper | | | hadoop.mapreduce.security.TestMRCredentials | | | hadoop.mapreduce.lib.output.TestMRCJCFileOutputCommitter | | | hadoop.mapreduce.lib.output.TestMRMultipleOutputs | | | hadoop.mapreduce.lib.output.TestMRSequenceFileAsBinaryOutputFormat | | | hadoop.conf.TestNoDefaultsJobConf | | | hadoop.mapreduce.v2.TestUberAM | | | hadoop.mapreduce.lib.join.TestJoinDatamerge | | | hadoop.mapreduce.lib.input.TestMultipleInputs | | | hadoop.mapreduce.TestMapReduce | | | hadoop.mapreduce.lib.aggregate.TestMapReduceAggregates | | | hadoop.mapreduce.lib.input.TestMRSequenceFileAsTextInputFormat | | | hadoop.mapreduce.TestMapReduceLazyOutput | | | hadoop.mapreduce.lib.output.TestJobOutputCommitter | | | hadoop.mapreduce.v2.TestMROldApiJobs | | | hadoop.mapreduce.lib.jobcontrol.TestMapReduceJobControlWithMocks | | | hadoop.mapreduce.lib.db.TestDataDrivenDBInputFormat | | | hadoop.mapreduce.TestMRJobClient | | | hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities | | | hadoop.mapreduce.lib.chain.TestMapReduceChain | | | hadoop.mapreduce.v2.TestMRJobsWithHistoryService | | | hadoop.mapreduce.TestNewCombinerGrouping | | | hadoop.mapreduce.lib.jobcontrol.TestMapReduceJobControl | | | hadoop.mapreduce.lib.input.TestMRCJCFileInputFormat | | | hadoop.mapreduce.TestLocalRunner | | | hadoop.mapreduce.lib.input.TestNLineInputFormat | | | hadoop.mapreduce.lib.input.TestMRSequenceFileAsBinaryInputFormat | | | hadoop.mapreduce.lib.partition.TestMRKeyFieldBasedComparator | | | hadoop.mapreduce.TestChild | | | hadoop.mapreduce.lib.chain.TestSingleElementChain | | | hadoop.mapreduce.security.TestBinaryTokenFile | | | hadoop.mapreduce.TestLargeSort | | | hadoop.mapreduce.lib.input.TestCombineFileInputFormat | | | hadoop.mapreduce.security.TestJHSSecurity | | | 
hadoop.mapreduce.lib.fieldsel.TestMRFieldSelection | | | hadoop.mapreduce.v2.TestMRAppWithCombiner | | | hadoop.mapreduce.lib.input.TestMRSequenceFileInputFilter | | | hadoop.mapreduce.TestMROutputFormat | | | hadoop.mapreduce.TestMapperReducerCleanup | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement | | | hadoop.hdfs.web.TestWebHDFSOAuth2 | | Timed out tests | org.apache.hadoop.mapreduce.v2.TestMRJobsWithProfiler | | | org.apache.hadoop.mapreduce.v2.TestMRJobs | | | org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle | | | org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | org.apache.hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider | | | org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog | | | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754235/HDFS-9010.004.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / d16c4ee | | hadoop-mapreduce-client-jobclient test log |
[jira] [Commented] (HDFS-9027) Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method
[ https://issues.apache.org/jira/browse/HDFS-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731500#comment-14731500 ] Hadoop QA commented on HDFS-9027: - \\ \\ | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | pre-patch | 17m 53s | Findbugs (version 3.0.0) appears to be broken on trunk. | | {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. | | {color:green}+1{color} | tests included | 0m 1s | The patch appears to include 1 new or modified test files. | | {color:green}+1{color} | javac | 7m 52s | There were no new javac warning messages. | | {color:green}+1{color} | javadoc | 10m 3s | There were no new javadoc warning messages. | | {color:green}+1{color} | release audit | 0m 24s | The applied patch does not increase the total number of release audit warnings. | | {color:green}+1{color} | checkstyle | 2m 14s | There were no new checkstyle issues. | | {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. | | {color:green}+1{color} | install | 1m 36s | mvn install still works. | | {color:green}+1{color} | eclipse:eclipse | 0m 35s | The patch built with eclipse:eclipse. | | {color:green}+1{color} | findbugs | 4m 25s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. | | {color:green}+1{color} | native | 3m 16s | Pre-build of native portion | | {color:red}-1{color} | hdfs tests | 0m 22s | Tests failed in hadoop-hdfs. | | {color:green}+1{color} | hdfs tests | 0m 27s | Tests passed in hadoop-hdfs-client. 
| | | | 49m 11s | | \\ \\ || Reason || Tests || | Failed build | hadoop-hdfs | \\ \\ || Subsystem || Report/Notes || | Patch URL | http://issues.apache.org/jira/secure/attachment/12754265/HDFS-9027.000.patch | | Optional Tests | javadoc javac unit findbugs checkstyle | | git revision | trunk / e1feaf6 | | hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12317/artifact/patchprocess/testrun_hadoop-hdfs.txt | | hadoop-hdfs-client test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12317/artifact/patchprocess/testrun_hadoop-hdfs-client.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12317/testReport/ | | Java | 1.7.0_55 | | uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12317/console | This message was automatically generated. > Refactor o.a.h.hdfs.DataStreamer$isLazyPersist() method > --- > > Key: HDFS-9027 > URL: https://issues.apache.org/jira/browse/HDFS-9027 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HDFS-9027.000.patch > > > In method {{isLazyPersist()}}, the {{org.apache.hadoop.hdfs.DataStreamer}} > class checks whether the HDFS file is lazy persist. It does two things: > 1. Create a class-wide _static_ {{BlockStoragePolicySuite}} object, which > builds an array of {{BlockStoragePolicy}} internally > 2. Get a block storage policy object from the {{blockStoragePolicySuite}} by > policy name {{HdfsConstants.MEMORY_STORAGE_POLICY_NAME}} > This has two side effects: > 1. Takes time to iterate the pre-built block storage policy array in order to > find the _same_ policy every time whose id matters only (as we need to > compare the file status policy id with lazy persist policy id) > 2. {{DataStreamer}} class imports {{BlockStoragePolicySuite}}. 
The former > should be moved to {{hadoop-hdfs-client}} module, while the latter can stay > in {{hadoop-hdfs}} module. > Actually, we have the block storage policy IDs, which can be used to compare > with the HDFS file status' policy id, as follows: > {code} > static boolean isLazyPersist(HdfsFileStatus stat) { > return stat.getStoragePolicy() == HdfsConstants.MEMORY_STORAGE_POLICY_ID; > } > {code} > This way, we only need to move the block storage policies' IDs from > {{HdfsServerConstant}} ({{hadoop-hdfs}} module) to {{HdfsConstants}} > ({{hadoop-hdfs-client}} module). > Another reason we should move those block storage policy IDs is that the > block storage policy names were moved to {{HdfsConstants}} already. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8981) Adding revision to data node jmx getVersion() method
[ https://issues.apache.org/jira/browse/HDFS-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731628#comment-14731628 ]
Hudson commented on HDFS-8981:
--
FAILURE: Integrated in Hadoop-Mapreduce-trunk #2297 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2297/])
HDFS-8981. Adding revision to data node jmx getVersion() method. (Siqi Li via mingma) (mingma: rev 30db1adac31b07b34ce8e8d426cc139fb8cfad02)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Adding revision to data node jmx getVersion() method
>
> Key: HDFS-8981
> URL: https://issues.apache.org/jira/browse/HDFS-8981
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Siqi Li
> Assignee: Siqi Li
> Priority: Minor
> Fix For: 3.0.0
> Attachments: HDFS-8981.v1.patch, HDFS-8981.v2.patch, HDFS-8981.v3.patch, HDFS-8981.v4.patch
>
> to be consistent with namenode jmx, datanode jmx should also output revision number
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-8383) Tolerate multiple failures in DFSStripedOutputStream
[ https://issues.apache.org/jira/browse/HDFS-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731630#comment-14731630 ]
Jing Zhao commented on HDFS-8383:
-
Thanks for working on this, Walter! Could you please elaborate on how the current patch handles multiple failures? It would be helpful if you could describe which failure scenarios can be tolerated and how they are handled. For example, can we handle the scenario where a streamer cannot successfully create a new block outputstream (and bump the GS) during the recovery? Quickly checking the patch, I did not see where a new recovery is scheduled in this case.
> Tolerate multiple failures in DFSStripedOutputStream
>
> Key: HDFS-8383
> URL: https://issues.apache.org/jira/browse/HDFS-8383
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Walter Su
> Attachments: HDFS-8383.00.patch, HDFS-8383.01.patch
>
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC
[ https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731635#comment-14731635 ]
Hadoop QA commented on HDFS-9011:
-
\\ \\
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch | 18m 34s | Findbugs (version ) appears to be broken on trunk. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 10 new or modified test files. |
| {color:green}+1{color} | javac | 10m 1s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 12m 18s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 1m 0s | There were no new checkstyle issues. |
| {color:red}-1{color} | whitespace | 0m 4s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 37s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 59s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 42s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 97m 37s | Tests failed in hadoop-hdfs. |
| | | | 149m 4s | |
\\ \\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.tools.TestDFSHAAdminMiniCluster |
| | hadoop.hdfs.tools.TestDFSAdminWithHA |
| | hadoop.hdfs.TestQuota |
| | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
| | hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover |
| Timed out tests | org.apache.hadoop.hdfs.TestDFSStartupVersions |
| | org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
\\ \\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12754275/HDFS-9011.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e1feaf6 |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/12318/artifact/patchprocess/whitespace.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12318/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12318/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12318/console |
This message was automatically generated.
> Support splitting BlockReport of a storage into multiple RPC
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Jing Zhao
> Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, HDFS-9011.002.patch
>
> Currently, if a DataNode has too many blocks (more than 1m by default), it sends multiple RPCs to the NameNode for the block report, each RPC containing the report for a single storage. However, in practice we've seen that sometimes even a single storage can contain a large number of blocks, and the report can even exceed the max RPC data length. It may be helpful to support sending multiple RPCs for the block report of a single storage.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
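The splitting described in this report can be sketched as plain list chunking. This is a minimal illustration, not the actual HDFS block report code: the generic list stands in for a storage's block list, and the per-RPC threshold is an assumed parameter (in HDFS the limit is driven by the max RPC data length).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split one storage's block list into multiple
// RPC-sized chunks so that no single block report RPC exceeds a size
// threshold. Types and the threshold are stand-ins, not HDFS internals.
class ReportSplitter {
    static <T> List<List<T>> split(List<T> blocks, int maxPerRpc) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < blocks.size(); i += maxPerRpc) {
            // Copy the sublist so each chunk can be sent independently.
            chunks.add(new ArrayList<>(
                blocks.subList(i, Math.min(i + maxPerRpc, blocks.size()))));
        }
        return chunks;
    }
}
```

Each chunk would then be sent as its own RPC; the NameNode side has to know how to merge partial reports for the same storage, which is the harder part of the change.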
[jira] [Commented] (HDFS-8929) Add a metric to expose the timestamp of the last journal
[ https://issues.apache.org/jira/browse/HDFS-8929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731723#comment-14731723 ]
Brahma Reddy Battula commented on HDFS-8929:
[~surendrasingh] thanks for updating the patch. The latest patch LGTM. [~ajisakaa], do you have any comments on the latest patch?
> Add a metric to expose the timestamp of the last journal
>
> Key: HDFS-8929
> URL: https://issues.apache.org/jira/browse/HDFS-8929
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: journal-node
> Reporter: Akira AJISAKA
> Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8929-001.patch, HDFS-8929-002.patch, HDFS-8929-003.patch
>
> If there are three JNs and only one JN is failing to journal, we can detect it by monitoring the difference of the last written transaction id among JNs from the NN WebUI or JN metrics. However, it's difficult to define the threshold to alert on, because the rate at which transactions accumulate depends on how busy the cluster is. Therefore I'd like to propose a metric to expose the timestamp of the last journal. That way we can easily alert if a JN has been failing to journal for some fixed period.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
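The proposal above boils down to recording a wall-clock timestamp on each successful journal write and alerting when it goes stale. A minimal sketch, with method and field names that are illustrative rather than the actual JournalNode metric names:

```java
// Illustrative sketch of a "last journal timestamp" metric. A monitor
// can alert when the timestamp lags beyond a fixed period, independent
// of how fast transaction ids grow on a busy cluster. Names here are
// hypothetical, not the real JournalNode metrics API.
class JournalMetrics {
    private volatile long lastJournalTimestamp = 0L;

    // Called after each successful journal write.
    void onJournal(long nowMillis) { lastJournalTimestamp = nowMillis; }

    // Exposed as the metric value.
    long getLastJournalTimestamp() { return lastJournalTimestamp; }

    // Staleness check a monitoring system could apply to the metric.
    boolean isStale(long nowMillis, long thresholdMillis) {
        return nowMillis - lastJournalTimestamp > thresholdMillis;
    }
}
```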
[jira] [Commented] (HDFS-9008) Balancer#Parameters class could use a builder pattern
[ https://issues.apache.org/jira/browse/HDFS-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731738#comment-14731738 ]
Hadoop QA commented on HDFS-9008:
-
\\ \\
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 17m 59s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 4 new or modified test files. |
| {color:green}+1{color} | javac | 7m 50s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 8s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 25s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 1m 21s | The applied patch generated 1 new checkstyle issues (total was 47, now 38). |
| {color:green}+1{color} | whitespace | 0m 1s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 30s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 38s | Tests failed in hadoop-hdfs. |
| | | | 207m 18s | |
\\ \\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestPread |
| | hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger |
| | hadoop.hdfs.TestRollingUpgrade |
\\ \\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12754291/HDFS-9008-trunk-v2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / bcc85e3 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12320/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12320/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12320/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12320/console |
This message was automatically generated.
> Balancer#Parameters class could use a builder pattern
> -
>
> Key: HDFS-9008
> URL: https://issues.apache.org/jira/browse/HDFS-9008
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: balancer & mover
> Reporter: Chris Trezzo
> Assignee: Chris Trezzo
> Priority: Minor
> Attachments: HDFS-9008-trunk-v1.patch, HDFS-9008-trunk-v2.patch
>
> The Balancer#Parameters class is violating a few checkstyle rules.
> # Instance variables are not privately scoped and do not have accessor methods.
> # The Balancer#Parameter constructor has too many arguments (according to checkstyle).
> Changing this class to use the builder pattern could fix both of these style issues.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
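The builder pattern proposed above addresses both checkstyle complaints at once: fields become private and final with accessors, and the wide constructor is replaced by chained setters. The sketch below is illustrative; the field names are examples, not the real Balancer parameters.

```java
// Illustrative builder for a Balancer.Parameters-like class: private
// final fields, accessor methods, and a fluent builder replacing a
// constructor with many positional arguments. Field names are examples.
class BalancerParams {
    private final int threshold;
    private final boolean runDuringUpgrade;

    private BalancerParams(Builder b) {
        this.threshold = b.threshold;
        this.runDuringUpgrade = b.runDuringUpgrade;
    }

    int getThreshold() { return threshold; }
    boolean getRunDuringUpgrade() { return runDuringUpgrade; }

    static class Builder {
        private int threshold = 10;            // defaults live in one place
        private boolean runDuringUpgrade = false;

        Builder setThreshold(int t) { this.threshold = t; return this; }
        Builder setRunDuringUpgrade(boolean r) { this.runDuringUpgrade = r; return this; }
        BalancerParams build() { return new BalancerParams(this); }
    }
}
```

Callers then name each parameter explicitly, e.g. `new BalancerParams.Builder().setThreshold(5).build()`, which also lets new parameters be added without breaking existing call sites.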
[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
[ https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731752#comment-14731752 ]
Hadoop QA commented on HDFS-9010:
-
\\ \\
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 23m 36s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 6 new or modified test files. |
| {color:green}+1{color} | javac | 9m 47s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 11m 38s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 28s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 2m 7s | The applied patch generated 2 new checkstyle issues (total was 274, now 275). |
| {color:green}+1{color} | whitespace | 0m 1s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 50s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 37s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 3m 48s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 42s | Pre-build of native portion |
| {color:red}-1{color} | mapreduce tests | 80m 22s | Tests failed in hadoop-mapreduce-client-jobclient. |
| {color:red}-1{color} | hdfs tests | 160m 53s | Tests failed in hadoop-hdfs. |
| | | | 299m 18s | |
\\ \\
|| Reason || Tests ||
| Failed unit tests | hadoop.mapreduce.v2.TestNonExistentJob |
| | hadoop.mapreduce.v2.TestUberAM |
| | hadoop.mapreduce.v2.TestMROldApiJobs |
| | hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities |
| | hadoop.hdfs.TestLeaseRecovery |
| | hadoop.hdfs.TestRollingUpgrade |
| Timed out tests | org.apache.hadoop.mapreduce.TestLargeSort |
| | org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution |
| | org.apache.hadoop.mapreduce.v2.TestMRJobs |
| | org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner |
| | org.apache.hadoop.hdfs.security.TestDelegationToken |
\\ \\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12754235/HDFS-9010.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e1feaf6 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12319/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| hadoop-mapreduce-client-jobclient test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12319/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12319/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12319/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12319/console |
This message was automatically generated.
> Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: build
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
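The deprecate-then-remove step described above usually keeps the old constant compiling by pointing it at the new one. A minimal sketch, with stand-in class names (the real constants live in NameNode and HdfsClientConfigKeys); the value 8020 matches the historical NameNode RPC default:

```java
// Illustrative sketch of the deprecation step: the old constant stays
// for source compatibility but delegates to the config key's default,
// so there is a single source of truth. Class names are stand-ins.
class PortConstants {
    // Stand-in for HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT.
    public static final int DFS_NAMENODE_RPC_PORT_DEFAULT = 8020;

    /** @deprecated Use {@link #DFS_NAMENODE_RPC_PORT_DEFAULT} instead. */
    @Deprecated
    public static final int DEFAULT_PORT = DFS_NAMENODE_RPC_PORT_DEFAULT;
}
```

Existing callers keep working (with a deprecation warning), and the constant can be deleted once all references have migrated to the config key.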
[jira] [Commented] (HDFS-9019) sticky bit permission denied error not informative enough
[ https://issues.apache.org/jira/browse/HDFS-9019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14731481#comment-14731481 ]
Hadoop QA commented on HDFS-9019:
-
\\ \\
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch | 15m 44s | Findbugs (version ) appears to be broken on trunk. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 7m 59s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 0s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 31s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 30s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 74m 59s | Tests failed in hadoop-hdfs. |
| | | | 117m 25s | |
\\ \\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.hdfs.TestPersistBlocks |
\\ \\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12754241/HDFS-9019.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 30db1ad |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12313/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12313/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12313/console |
This message was automatically generated.
> sticky bit permission denied error not informative enough
> -
>
> Key: HDFS-9019
> URL: https://issues.apache.org/jira/browse/HDFS-9019
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: security
> Affects Versions: 2.6.0, 2.7.0, 2.7.1
> Reporter: Thejas M Nair
> Assignee: Xiaoyu Yao
> Labels: easyfix, newbie
> Attachments: HDFS-9019.000.patch, HDFS-9019.001.patch
>
> The check for sticky bit permission in FSPermissionChecker.java prints only the child file name and the current owner.
> It does not print the owner of the file and the parent directory. It would help to have that printed as well for ease of debugging permission issues.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
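The improvement requested above is essentially a richer denial message. A hypothetical sketch of what such a message could look like; the method name, format, and fields are illustrative and not the actual FSPermissionChecker change in the patch:

```java
// Illustrative sketch of a more informative sticky-bit denial message
// that includes the inode owner and the parent directory, as the issue
// requests. The format and names are hypothetical, not HDFS's actual one.
class StickyBitError {
    static String denialMessage(String user, String path, String inodeOwner,
                                String parent, String parentOwner) {
        return "Permission denied by sticky bit: user=" + user
            + ", path=\"" + path + "\":" + inodeOwner
            + ", parent=\"" + parent + "\":" + parentOwner;
    }
}
```

With the owner of both the child and the parent in the message, an operator can tell at a glance whether the denial came from the sticky bit on the parent rather than from ordinary mode bits.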