[jira] [Updated] (HDFS-6348) Secondary namenode - RMI Thread prevents JVM from exiting after main() completes
[ https://issues.apache.org/jira/browse/HDFS-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-6348: --- Attachment: HDFS-6348.patch

Secondary namenode - RMI Thread prevents JVM from exiting after main() completes
Key: HDFS-6348 URL: https://issues.apache.org/jira/browse/HDFS-6348 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.3.0 Reporter: Rakesh R Assignee: Rakesh R Fix For: 2.7.0 Attachments: HDFS-6348.patch, HDFS-6348.patch, secondaryNN_threaddump_after_exit.log

The Secondary NameNode does not exit when a RuntimeException occurs during startup. For example, with an invalid configuration value, validation fails and throws the RuntimeException shown below, yet the SecondaryNameNode process stays alive. On analysis, an RMI thread is still running; since it is not a daemon thread, the JVM does not exit. I'm attaching a thread dump to this JIRA with more details about the thread.

{code}
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1900)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:199)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:256)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:635)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:260)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:205)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:695)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1868)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1892)
	... 6 more
Caused by: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1774)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1866)
	... 7 more
2014-05-07 14:27:04,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2014-05-07 14:27:04,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2014-05-07 14:31:04,926 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG:
{code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-6348) Secondary namenode - RMI Thread prevents JVM from exiting after main() completes
[ https://issues.apache.org/jira/browse/HDFS-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-6348: --- Fix Version/s: 2.7.0 Status: Patch Available (was: Open)

Secondary namenode - RMI Thread prevents JVM from exiting after main() completes
Key: HDFS-6348 URL: https://issues.apache.org/jira/browse/HDFS-6348 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.3.0 Reporter: Rakesh R Assignee: Rakesh R Fix For: 2.7.0 Attachments: HDFS-6348.patch, HDFS-6348.patch, secondaryNN_threaddump_after_exit.log

The Secondary NameNode does not exit when a RuntimeException occurs during startup. For example, with an invalid configuration value, validation fails and throws the RuntimeException shown below, yet the SecondaryNameNode process stays alive. On analysis, an RMI thread is still running; since it is not a daemon thread, the JVM does not exit. I'm attaching a thread dump to this JIRA with more details about the thread.

{code}
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1900)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:199)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:256)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:635)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:260)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:205)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:695)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1868)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1892)
	... 6 more
Caused by: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1774)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1866)
	... 7 more
2014-05-07 14:27:04,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2014-05-07 14:27:04,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2014-05-07 14:31:04,926 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG:
{code}

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
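The failure mode above can be reproduced in miniature. The following hypothetical sketch (not Hadoop code) shows that a lingering non-daemon thread keeps the JVM alive after main() returns, while marking the same thread as a daemon lets the JVM exit normally:

```java
// Hypothetical demo: the JVM exits only when every remaining live thread is a
// daemon, so a non-daemon worker (like the RMI thread in this issue) blocks
// shutdown even after main() has completed.
public class DaemonExitDemo {

    // Starts a long-sleeping worker and reports whether it would block JVM exit.
    static boolean blocksJvmExit(boolean daemon) {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(60_000L); // stand-in for the lingering RMI worker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(daemon); // must be set before start()
        t.start();
        // A non-daemon thread keeps the JVM alive until it finishes.
        boolean blocks = !t.isDaemon();
        t.interrupt(); // clean up so this demo itself terminates promptly
        return blocks;
    }

    public static void main(String[] args) {
        System.out.println("non-daemon blocks exit: " + blocksJvmExit(false));
        System.out.println("daemon blocks exit:     " + blocksJvmExit(true));
    }
}
```

In the real daemon, the equivalent fix is either marking such worker threads as daemons or explicitly terminating on the startup failure path (e.g. via org.apache.hadoop.util.ExitUtil.terminate) instead of simply letting main() return.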
[jira] [Commented] (HDFS-7736) Typos in dfsadmin/fsck/snapshotDiff Commands
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316469#comment-14316469 ] Brahma Reddy Battula commented on HDFS-7736: Hi [~ajisakaa], can you please look at the updated patch? -Thanks.

Typos in dfsadmin/fsck/snapshotDiff Commands
Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch

Scenario -- Try the following hdfs commands:

1. # ./hdfs dfsadmin -getStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-getStoragePolicy path]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-getStoragePolicy path]

2. # ./hdfs dfsadmin -setStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-setStoragePolicy path policyName]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-setStoragePolicy path policyName]

3. # ./hdfs fsck
Usage: *{color:red} DFSck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks
Expected:
Usage: *{color:green} hdfs fsck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks

4. # ./hdfs snapshotDiff
Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to:
Expected:
Usage: *{color:green}snapshotDiff{color}* snapshotDir from to:

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7753) Fix Multithreaded correctness Warnings in BackupImage.java
[ https://issues.apache.org/jira/browse/HDFS-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316356#comment-14316356 ] Hudson commented on HDFS-7753: --

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2052 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2052/])
HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage. Contributed by Rakesh R and Konstantin Shvachko. (shv: rev a4ceea60f57a32d531549e492aa5894dd34e0d0f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java

Fix Multithreaded correctness Warnings in BackupImage.java
Key: HDFS-7753 URL: https://issues.apache.org/jira/browse/HDFS-7753 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Rakesh R Assignee: Konstantin Shvachko Fix For: 2.7.0 Attachments: HDFS-7753-02.patch, HDFS-7753.patch

Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem; locked 60% of time

{code}
Bug type IS2_INCONSISTENT_SYNC (click for details)
In class org.apache.hadoop.hdfs.server.namenode.BackupImage
Field org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem
Synchronized 60% of the time
Unsynchronized access at BackupImage.java:[line 97]
Unsynchronized access at BackupImage.java:[line 261]
Synchronized access at BackupImage.java:[line 197]
Synchronized access at BackupImage.java:[line 212]
Synchronized access at BackupImage.java:[line 295]
{code}

https://builds.apache.org/job/PreCommit-HDFS-Build/9493//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html#Details

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
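For illustration, the IS2_INCONSISTENT_SYNC warning fires when a field is guarded by a lock on some access paths but read bare on others. A minimal, hypothetical example of the consistently-synchronized form (the names below are stand-ins, not the actual BackupImage code):

```java
// Hypothetical sketch: every access to the shared field goes through
// synchronized methods, so FindBugs no longer sees a mix of locked and
// unlocked accesses to the same field.
public class SyncedHolder {
    private Object namesystem; // stand-in for BackupImage.namesystem

    public synchronized void setNamesystem(Object ns) {
        this.namesystem = ns;
    }

    // Consistently synchronized getter; an unsynchronized read here is what
    // would trigger the "locked 60% of the time" report.
    public synchronized Object getNamesystem() {
        return namesystem;
    }

    public static void main(String[] args) {
        SyncedHolder h = new SyncedHolder();
        h.setNamesystem("fsn");
        System.out.println(h.getNamesystem());
    }
}
```

Where the field is only published once and never mutated under contention, declaring it volatile is another accepted way to silence this class of warning; the right choice depends on the actual access pattern.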
[jira] [Commented] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir
[ https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316358#comment-14316358 ] Hudson commented on HDFS-7769: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #2052 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2052/]) HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir. (szetszwo: rev 7c6b6547eeed110e1a842e503bfd33afe04fa814) * hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt TestHDFSCLI create files in hdfs project root dir - Key: HDFS-7769 URL: https://issues.apache.org/jira/browse/HDFS-7769 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Tsz Wo Nicholas Sze Assignee: Tsz Wo Nicholas Sze Priority: Trivial Fix For: 2.7.0 Attachments: h7769_20150210.patch, h7769_20150210b.patch After running TestHDFSCLI, two files (data and .data.crc) remain in hdfs project root dir. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7777) Consolidate the HA NN documentation down to one
Allen Wittenauer created HDFS-7777: -- Summary: Consolidate the HA NN documentation down to one Key: HDFS-7777 URL: https://issues.apache.org/jira/browse/HDFS-7777 Project: Hadoop HDFS Issue Type: Improvement Reporter: Allen Wittenauer

These are nearly the same document now. Let's consolidate.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-316) Balancer should run for a configurable # of iterations
[ https://issues.apache.org/jira/browse/HDFS-316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316457#comment-14316457 ] Hudson commented on HDFS-316: - FAILURE: Integrated in Hadoop-trunk-Commit #7072 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7072/]) HDFS-316. Balancer should run for a configurable # of iterations (Xiaoyu Yao via aw) (aw: rev b94c1117a28e996adee68fe0e181eb6f536289f4) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HDFSCommands.apt.vm * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestMover.java Balancer should run for a configurable # of iterations -- Key: HDFS-316 URL: https://issues.apache.org/jira/browse/HDFS-316 Project: Hadoop HDFS Issue Type: Improvement Components: balancer mover Affects Versions: 2.4.1 Reporter: Brian Bockelman Assignee: Xiaoyu Yao Priority: Minor Labels: newbie Fix For: 3.0.0 Attachments: HDFS-316.0.patch, HDFS-316.1.patch, HDFS-316.2.patch, HDFS-316.3.patch, HDFS-316.4.patch The balancer currently exits if nothing has changed after 5 iterations. Our site would like to constantly balance a stream of incoming data; we would like to be able to set the number of iterations it does nothing for before exiting; even better would be if we set it to a negative number and could continuously run this as a daemon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
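The requested behavior can be sketched as a small iteration policy: stop after a configurable number of consecutive idle iterations, and treat a negative limit as "run forever" like a daemon. Everything below is illustrative (the predicate, names, and the demo cap are assumptions, not the Balancer's actual code):

```java
import java.util.function.IntPredicate;

// Hypothetical sketch of a configurable idle-iteration exit policy.
public class IterationPolicy {
    /**
     * @param maxIdle   consecutive idle iterations allowed; negative = unlimited
     * @param movedData did iteration i move any data?
     * @return number of iterations executed before exiting (capped for the demo)
     */
    static int run(int maxIdle, IntPredicate movedData) {
        int idle = 0;
        int i = 0;
        for (; i < 1000; i++) { // demo cap standing in for "run as a daemon"
            if (movedData.test(i)) {
                idle = 0;                        // progress resets the counter
            } else if (maxIdle >= 0 && ++idle >= maxIdle) {
                i++;                             // count the final idle round
                break;                           // too many idle rounds: exit
            }
        }
        return i;
    }

    public static void main(String[] args) {
        // Nothing ever moves: with maxIdle = 5 the loop stops after 5 rounds.
        System.out.println(IterationPolicy.run(5, it -> false));
    }
}
```

With a negative limit the sketch never exits on idleness (it only stops at the demo cap), which is the "continuously run this as a daemon" behavior requested in the description.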
[jira] [Commented] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316494#comment-14316494 ] Xiaoyu Yao commented on HDFS-7723: -- Jenkins cannot apply a patch with a binary diff produced by git diff --binary. TestOfflineEditsViewer should pass with the updated binary file editsStored attached to HDFS-7584 at https://issues.apache.org/jira/secure/attachment/12695300/editsStored. [~arpitagarwal], let me know if you want me to reattach it under HDFS-7723.

Quota By Storage Type namenode implementation
Key: HDFS-7723 URL: https://issues.apache.org/jira/browse/HDFS-7723 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode, namenode Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch, HDFS-7723.3.patch, HDFS-7723.4.patch, HDFS-7723.5.patch, HDFS-7723.6.patch

This includes:
1) a new editlog op to persist quota by storage type
2) corresponding fsimage load/save of the new op
3) a QuotaCount refactor to update usage of the storage types for quota enforcement
4) snapshot support
5) unit test updates

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
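Per-storage-type quota enforcement, as described in the issue, boils down to tracking usage and a limit per storage type and rejecting updates that would exceed the limit. A hedged sketch follows; the type names and methods are illustrative, not the actual QuotaCount/NameNode code:

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of quota-by-storage-type accounting.
public class TypeQuota {
    enum StorageType { DISK, SSD, ARCHIVE } // illustrative subset

    private final Map<StorageType, Long> quota = new EnumMap<>(StorageType.class);
    private final Map<StorageType, Long> usage = new EnumMap<>(StorageType.class);

    void setQuota(StorageType t, long bytes) {
        quota.put(t, bytes);
    }

    // Returns false when the new usage would exceed the configured quota for
    // that storage type (analogous to a quota-exceeded exception in HDFS);
    // types with no configured quota are unlimited.
    boolean tryConsume(StorageType t, long bytes) {
        long used = usage.getOrDefault(t, 0L) + bytes;
        Long limit = quota.get(t);
        if (limit != null && used > limit) {
            return false;
        }
        usage.put(t, used);
        return true;
    }

    public static void main(String[] args) {
        TypeQuota q = new TypeQuota();
        q.setQuota(StorageType.SSD, 100L);
        System.out.println(q.tryConsume(StorageType.SSD, 60L));
        System.out.println(q.tryConsume(StorageType.SSD, 60L));
    }
}
```

The real feature additionally persists the quota in an editlog op and the fsimage so the limits survive a NameNode restart, which is what items 1) and 2) in the description cover.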
[jira] [Commented] (HDFS-7736) Typos in dfsadmin/fsck/snapshotDiff Commands
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316556#comment-14316556 ] Akira AJISAKA commented on HDFS-7736: - LGTM, +1 pending Jenkins. Thanks [~brahmareddy] for the update.

Typos in dfsadmin/fsck/snapshotDiff Commands
Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch

Scenario -- Try the following hdfs commands:

1. # ./hdfs dfsadmin -getStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-getStoragePolicy path]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-getStoragePolicy path]

2. # ./hdfs dfsadmin -setStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-setStoragePolicy path policyName]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-setStoragePolicy path policyName]

3. # ./hdfs fsck
Usage: *{color:red} DFSck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks
Expected:
Usage: *{color:green} hdfs fsck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks

4. # ./hdfs snapshotDiff
Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to:
Expected:
Usage: *{color:green}snapshotDiff{color}* snapshotDir from to:

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html
[ https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-7772: --- Component/s: documentation Document hdfs balancer -exclude/-include option in HDFSCommands.html Key: HDFS-7772 URL: https://issues.apache.org/jira/browse/HDFS-7772 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Priority: Trivial hdfs balancer -exclude/-include option are displayed in the command line help but not HTML documentation page. This JIRA is opened to add it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-316) Balancer should run for a configurable # of iterations
[ https://issues.apache.org/jira/browse/HDFS-316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HDFS-316: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks! Balancer should run for a configurable # of iterations -- Key: HDFS-316 URL: https://issues.apache.org/jira/browse/HDFS-316 Project: Hadoop HDFS Issue Type: Improvement Components: balancer mover Affects Versions: 2.4.1 Reporter: Brian Bockelman Assignee: Xiaoyu Yao Priority: Minor Labels: newbie Fix For: 3.0.0 Attachments: HDFS-316.0.patch, HDFS-316.1.patch, HDFS-316.2.patch, HDFS-316.3.patch, HDFS-316.4.patch The balancer currently exits if nothing has changed after 5 iterations. Our site would like to constantly balance a stream of incoming data; we would like to be able to set the number of iterations it does nothing for before exiting; even better would be if we set it to a negative number and could continuously run this as a daemon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7604) Track and display failed DataNode storage locations in NameNode.
[ https://issues.apache.org/jira/browse/HDFS-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316765#comment-14316765 ] Chris Nauroth commented on HDFS-7604: - Thank you for the review, Jitendra. As a heads-up, I'm going to have to ask you to look at a separate branch-2 patch. There are some recent DataNode changes that are in trunk only, and that causes conflicts for this patch on branch-2. Part of the problem is related to HDFS-7496, and I've asked a question on that issue. I'll need resolution on that question before I can post a branch-2 patch. Track and display failed DataNode storage locations in NameNode. Key: HDFS-7604 URL: https://issues.apache.org/jira/browse/HDFS-7604 Project: Hadoop HDFS Issue Type: Improvement Components: datanode, namenode Reporter: Chris Nauroth Assignee: Chris Nauroth Attachments: HDFS-7604-screenshot-1.png, HDFS-7604-screenshot-2.png, HDFS-7604-screenshot-3.png, HDFS-7604-screenshot-4.png, HDFS-7604-screenshot-5.png, HDFS-7604-screenshot-6.png, HDFS-7604-screenshot-7.png, HDFS-7604.001.patch, HDFS-7604.002.patch, HDFS-7604.004.patch, HDFS-7604.005.patch, HDFS-7604.prototype.patch During heartbeats, the DataNode can report a list of its storage locations that have been taken out of service due to failure (such as due to a bad disk or a permissions problem). The NameNode can track these failed storage locations and then report them in JMX and the NameNode web UI. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
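The heartbeat-driven bookkeeping this issue describes can be sketched in a few lines. Everything below (class and method names, the String-based locations) is illustrative, not the actual DatanodeDescriptor/NameNode code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: heartbeats report failed storage locations, and a
// NameNode-side tracker aggregates them for JMX and the web UI.
public class FailedStorageTracker {
    private final Map<String, List<String>> failedByNode = new HashMap<>();

    // Called on each heartbeat with the locations the DataNode reports failed
    // (e.g. due to a bad disk or a permissions problem).
    void onHeartbeat(String datanode, List<String> failedLocations) {
        failedByNode.put(datanode, new ArrayList<>(failedLocations));
    }

    // Aggregate count, the kind of number a JMX attribute or the web UI
    // summary row would expose.
    int totalFailedVolumes() {
        return failedByNode.values().stream().mapToInt(List::size).sum();
    }

    public static void main(String[] args) {
        FailedStorageTracker t = new FailedStorageTracker();
        t.onHeartbeat("dn1", List.of("/data/1"));
        t.onHeartbeat("dn2", List.of("/data/2", "/data/3"));
        System.out.println(t.totalFailedVolumes());
    }
}
```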
[jira] [Commented] (HDFS-7736) Typos in dfsadmin/fsck/snapshotDiff Commands
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316809#comment-14316809 ] Hadoop QA commented on HDFS-7736: -

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12698104/HDFS-7736-004.patch
against trunk revision b015fec.

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}. There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9539//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9539//console

This message is automatically generated.
Typos in dfsadmin/fsck/snapshotDiff Commands
Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch

Scenario -- Try the following hdfs commands:

1. # ./hdfs dfsadmin -getStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-getStoragePolicy path]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-getStoragePolicy path]

2. # ./hdfs dfsadmin -setStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-setStoragePolicy path policyName]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-setStoragePolicy path policyName]

3. # ./hdfs fsck
Usage: *{color:red} DFSck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks
Expected:
Usage: *{color:green} hdfs fsck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks

4. # ./hdfs snapshotDiff
Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to:
Expected:
Usage: *{color:green}snapshotDiff{color}* snapshotDir from to:

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316826#comment-14316826 ] Arpit Agarwal commented on HDFS-7723: - Looks like we can't merge this to branch-2 as-is until HDFS-3689 gets merged first. I've asked [~jingzhao] if HDFS-3689 is ready to be merged. Thanks.

Quota By Storage Type namenode implementation
Key: HDFS-7723 URL: https://issues.apache.org/jira/browse/HDFS-7723 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Affects Versions: 3.0.0 Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch, HDFS-7723.3.patch, HDFS-7723.4.patch, HDFS-7723.5.patch, HDFS-7723.6.patch, editsStored

This includes:
1) a new editlog op to persist quota by storage type
2) corresponding fsimage load/save of the new op
3) a QuotaCount refactor to update usage of the storage types for quota enforcement
4) snapshot support
5) unit test updates

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7736) Fix typos in dfsadmin/fsck/snapshotDiff usage messages
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316840#comment-14316840 ] Hudson commented on HDFS-7736: --

FAILURE: Integrated in Hadoop-trunk-Commit #7078 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7078/])
HDFS-7736. Fix typos in dfsadmin/fsck/snapshotDiff usage messages. Contributed by Brahma Reddy Battula. (wheat9: rev f80c9888fa0c1a11967560be3c37dfc1e30da2c3)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/LsSnapshottableDir.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/snapshot/SnapshotDiff.java

Fix typos in dfsadmin/fsck/snapshotDiff usage messages
Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch

Scenario -- Try the following hdfs commands:

1. # ./hdfs dfsadmin -getStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-getStoragePolicy path]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-getStoragePolicy path]

2. # ./hdfs dfsadmin -setStoragePolicy
Usage: *{color:red} java DFSAdmin {color}* [-setStoragePolicy path policyName]
Expected:
Usage: *{color:green} hdfs dfsadmin {color}* [-setStoragePolicy path policyName]

3. # ./hdfs fsck
Usage: *{color:red} DFSck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks
Expected:
Usage: *{color:green} hdfs fsck path {color}* [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks

4. # ./hdfs snapshotDiff
Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to:
Expected:
Usage: *{color:green}snapshotDiff{color}* snapshotDir from to:

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
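The underlying fix pattern is small: build the usage line from the shell entry point rather than the implementing Java class. A hypothetical before/after sketch (the helper and its strings are illustrative, not the committed code):

```java
// Hypothetical sketch of a usage-message builder that takes the user-facing
// command name instead of deriving it from the Java class.
public class UsageMessage {
    static String usage(String command, String options) {
        return "Usage: " + command + " " + options;
    }

    public static void main(String[] args) {
        // Before the fix, tools printed the class name, e.g. "Usage: DFSck ...".
        // After the fix, the shell command the user actually typed is shown:
        System.out.println(usage("hdfs fsck <path>", "[-list-corruptfileblocks]"));
    }
}
```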
[jira] [Commented] (HDFS-7760) Document truncate for WebHDFS.
[ https://issues.apache.org/jira/browse/HDFS-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316678#comment-14316678 ] Hudson commented on HDFS-7760: -- FAILURE: Integrated in Hadoop-trunk-Commit #7075 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7075/]) HDFS-7760. Document truncate for WebHDFS. Contributed by Konstantin Shvachko. (shv: rev e42fc1a251e91d25dbc4b3728b3cf4554ca7bee1) * hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Document truncate for WebHDFS. -- Key: HDFS-7760 URL: https://issues.apache.org/jira/browse/HDFS-7760 Project: Hadoop HDFS Issue Type: Sub-task Components: documentation Affects Versions: 2.7.0 Reporter: Yi Liu Assignee: Konstantin Shvachko Priority: Minor Fix For: 2.7.0 Attachments: HDFS-7760-02.patch, HDFS-7760.patch The JIRA is to further update user doc for truncate, for example, WebHDFS and so on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7760) Document truncate for WebHDFS.
[ https://issues.apache.org/jira/browse/HDFS-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-7760: -- Summary: Document truncate for WebHDFS. (was: Further update user doc for truncate) Renamed from: Further update user doc for truncate Document truncate for WebHDFS. -- Key: HDFS-7760 URL: https://issues.apache.org/jira/browse/HDFS-7760 Project: Hadoop HDFS Issue Type: Sub-task Components: documentation Affects Versions: 2.7.0 Reporter: Yi Liu Assignee: Konstantin Shvachko Priority: Minor Fix For: 2.7.0 Attachments: HDFS-7760-02.patch, HDFS-7760.patch The JIRA is to further update user doc for truncate, for example, WebHDFS and so on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316736#comment-14316736 ] Arpit Agarwal commented on HDFS-7723: - +1 for the v6 patch. I will commit it to trunk shortly.

Quota By Storage Type namenode implementation
Key: HDFS-7723 URL: https://issues.apache.org/jira/browse/HDFS-7723 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode, namenode Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch, HDFS-7723.3.patch, HDFS-7723.4.patch, HDFS-7723.5.patch, HDFS-7723.6.patch, editsStored

This includes:
1) a new editlog op to persist quota by storage type
2) corresponding fsimage load/save of the new op
3) a QuotaCount refactor to update usage of the storage types for quota enforcement
4) snapshot support
5) unit test updates

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-7723: Component/s: (was: datanode) Affects Version/s: 3.0.0

I committed it to trunk. Keeping the jira open for the branch-2 merge. Thank you for taking up this feature [~xyao]! This is a very clean feature design and implementation.

Quota By Storage Type namenode implementation
Key: HDFS-7723 URL: https://issues.apache.org/jira/browse/HDFS-7723 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Affects Versions: 3.0.0 Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch, HDFS-7723.3.patch, HDFS-7723.4.patch, HDFS-7723.5.patch, HDFS-7723.6.patch, editsStored

This includes:
1) a new editlog op to persist quota by storage type
2) corresponding fsimage load/save of the new op
3) a QuotaCount refactor to update usage of the storage types for quota enforcement
4) snapshot support
5) unit test updates

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-6759) Namenode Web UI only display 1 datanode when multiple datanodes exist on same hostname under different ports
[ https://issues.apache.org/jira/browse/HDFS-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai resolved HDFS-6759. -- Resolution: Duplicate Duplicate of HDFS-7303

Namenode Web UI only display 1 datanode when multiple datanodes exist on same hostname under different ports
Key: HDFS-6759 URL: https://issues.apache.org/jira/browse/HDFS-6759 Project: Hadoop HDFS Issue Type: Bug Components: datanode, namenode Affects Versions: 2.4.0, 2.4.1 Environment: Centos 6.5 Reporter: Konnjuta Assignee: liu chang Priority: Trivial

The NameNode Web UI only displays 1 datanode when multiple datanodes exist on the same hostname under different ports. This seems to affect only version 2.x; version 1.x works as expected. This is a development environment, so multiple datanodes on a single hostname is common.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager
[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316793#comment-14316793 ] Chris Douglas commented on HDFS-7411: -

bq. The -1 is not for the refactoring. It is for keeping the existing behavior.

Andrew, even though you prefer estimates or averages that approximate the existing behavior, halting when either of the limits is hit would move this forward. Nicholas, would you be OK changing the default so this uses the new algorithm in clusters where the node limit is not explicitly configured (the default value for nodes is {{Integer.MAX_VALUE}})? You're also OK enforcing the existing semantics in the new code?

Refactor and improve decommissioning logic into DecommissionManager
Key: HDFS-7411 URL: https://issues.apache.org/jira/browse/HDFS-7411 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.5.1 Reporter: Andrew Wang Assignee: Andrew Wang Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, hdfs-7411.009.patch, hdfs-7411.010.patch

Would be nice to split out decommission logic from DatanodeManager to DecommissionManager.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
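The compromise being discussed, stopping a decommission scan when either limit is hit, with the node limit defaulting to {{Integer.MAX_VALUE}} so behavior only changes where it is explicitly configured, could look roughly like this. The field names and the check shape below are assumptions for illustration, not the patch's actual code:

```java
// Hypothetical sketch of a dual-limit cutoff for a decommission scan.
public class DecommissionLimits {
    final int maxNodesPerCheck;    // default Integer.MAX_VALUE = effectively unlimited
    final long maxBlocksPerCheck;  // the other limit discussed in the thread

    DecommissionLimits(int maxNodesPerCheck, long maxBlocksPerCheck) {
        this.maxNodesPerCheck = maxNodesPerCheck;
        this.maxBlocksPerCheck = maxBlocksPerCheck;
    }

    // True while neither limit has been reached in the current scan; the scan
    // loop would halt as soon as this returns false.
    boolean canContinue(int nodesScanned, long blocksScanned) {
        return nodesScanned < maxNodesPerCheck && blocksScanned < maxBlocksPerCheck;
    }

    public static void main(String[] args) {
        DecommissionLimits d = new DecommissionLimits(Integer.MAX_VALUE, 500_000L);
        System.out.println(d.canContinue(1_000_000, 10L));   // only blocks can stop it
        System.out.println(d.canContinue(3, 500_000L));      // block limit reached
    }
}
```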
[jira] [Updated] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-7723: Attachment: editsStored Quota By Storage Type namenode implementation Key: HDFS-7723 URL: https://issues.apache.org/jira/browse/HDFS-7723 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode, namenode Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch, HDFS-7723.3.patch, HDFS-7723.4.patch, HDFS-7723.5.patch, HDFS-7723.6.patch, editsStored This includes: 1) a new editlog op to persist quota by storage type 2) corresponding fsimage load/save of the new op 3) a QuotaCounts refactor to track usage per storage type for quota enforcement 4) snapshot support 5) unit test updates -- This message was sent by Atlassian JIRA (v6.3.4#6332)
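The per-storage-type accounting described in points 1–3 can be sketched with a per-type counter. This is a simplification, not the actual QuotaCounts class; the StorageType values mirror a subset of HDFS's enum:

```java
import java.util.EnumMap;

// Simplified sketch of quota-by-storage-type accounting: track usage per
// storage type and reject a write that would exceed that type's quota.
public class TypeQuotaSketch {
    public enum StorageType { DISK, SSD, ARCHIVE }

    private final EnumMap<StorageType, Long> quota = new EnumMap<>(StorageType.class);
    private final EnumMap<StorageType, Long> usage = new EnumMap<>(StorageType.class);

    public void setQuota(StorageType t, long bytes) { quota.put(t, bytes); }

    // Returns false (quota violation) instead of committing the consumption.
    public boolean tryConsume(StorageType t, long bytes) {
        long used = usage.getOrDefault(t, 0L);
        Long limit = quota.get(t);
        if (limit != null && used + bytes > limit) {
            return false;  // would exceed this storage type's quota
        }
        usage.put(t, used + bytes);
        return true;
    }

    public static void main(String[] args) {
        TypeQuotaSketch q = new TypeQuotaSketch();
        q.setQuota(StorageType.SSD, 1024);
        System.out.println(q.tryConsume(StorageType.SSD, 1000));   // true
        System.out.println(q.tryConsume(StorageType.SSD, 100));    // false: over SSD quota
        System.out.println(q.tryConsume(StorageType.DISK, 4096));  // true: DISK has no quota
    }
}
```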
[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode by reference-counting the volume instances
[ https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316761#comment-14316761 ] Chris Nauroth commented on HDFS-7496: - The trunk commit for this patch (git hash b7f4a3156c0f5c600816c469637237ba6c9b330c) included a test class named {{FsVolumeListTest}}. It appears that this class was not included in the cherry-pick to branch-2 (git hash 5ec2b6caa9d63123a88f407f734319d4ac6038a9). Can one of the original contributors or reviewers please clarify whether or not this class was intended to be committed? I ask because it's causing a merge conflict now for HDFS-7604. Also, if the intention is to keep this class, then it will need to be renamed. It has not actually been running in pre-commit. That's because our maven-surefire-plugin configuration in hadoop-project/pom.xml is set to match only test classes whose names begin with Test:
{code}
<includes>
  <include>**/Test*.java</include>
</includes>
{code}
This is done to prevent the plugin from erroneously trying to run helper classes under src/test/java as if they were JUnit suites. Fix FsVolume removal race conditions on the DataNode by reference-counting the volume instances --- Key: HDFS-7496 URL: https://issues.apache.org/jira/browse/HDFS-7496 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Lei (Eddy) Xu Fix For: 2.7.0 Attachments: HDFS-7496-branch-2.000.patch, HDFS-7496.000.patch, HDFS-7496.001.patch, HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, HDFS-7496.004.patch, HDFS-7496.005.patch, HDFS-7496.006.patch, HDFS-7496.007.patch We discussed a few FsVolume removal race conditions on the DataNode in HDFS-7489. We should figure out a way to make removing an FsVolume safe. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
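The reference-counting approach named in the HDFS-7496 title can be sketched as follows. This is illustrative only; the real patch's FsVolumeReference API differs:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a volume is only closed once every in-flight user has
// released its reference, so removal cannot race with active reads/writes.
public class VolumeRefSketch {
    private final AtomicInteger refs = new AtomicInteger(1);  // 1 = held by the volume list
    private volatile boolean closed = false;

    // A reader/writer takes a reference before touching the volume.
    public boolean tryRef() {
        int n;
        do {
            n = refs.get();
            if (n == 0) return false;  // volume already fully released
        } while (!refs.compareAndSet(n, n + 1));
        return true;
    }

    public void unref() {
        if (refs.decrementAndGet() == 0) {
            closed = true;  // safe to tear down: no one holds a reference
        }
    }

    public boolean isClosed() { return closed; }

    public static void main(String[] args) {
        VolumeRefSketch v = new VolumeRefSketch();
        v.tryRef();                        // an in-flight read takes a ref
        v.unref();                         // removal drops the list's ref
        System.out.println(v.isClosed());  // false: the reader still holds a ref
        v.unref();                         // reader finishes
        System.out.println(v.isClosed());  // true: last ref gone, volume closed
    }
}
```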
[jira] [Commented] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316762#comment-14316762 ] Hudson commented on HDFS-7723: -- FAILURE: Integrated in Hadoop-trunk-Commit #7076 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7076/]) HDFS-7723. Quota By Storage Type namenode implemenation. (Contributed by Xiaoyu Yao) (arp: rev 5dae97a584d30cef3e34141edfaca49c4ec57913) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ContentSummaryComputationContext.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiff.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestQuotaByStorageType.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java * 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaCounts.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DirectoryWithQuotaFeature.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java * 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/EnumCounters.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirDeleteOp.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java *
[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode by reference-counting the volume instances
[ https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316854#comment-14316854 ] Lei (Eddy) Xu commented on HDFS-7496: - Hi, [~cnauroth] It was my fault that {{FsVolumeListTest}} was misnamed and not included in branch-2. Very sorry for the inconvenience. Shall I make a patch to correct it in this JIRA or file a follow-up JIRA? Fix FsVolume removal race conditions on the DataNode by reference-counting the volume instances --- Key: HDFS-7496 URL: https://issues.apache.org/jira/browse/HDFS-7496 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Lei (Eddy) Xu Fix For: 2.7.0 Attachments: HDFS-7496-branch-2.000.patch, HDFS-7496.000.patch, HDFS-7496.001.patch, HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, HDFS-7496.004.patch, HDFS-7496.005.patch, HDFS-7496.006.patch, HDFS-7496.007.patch We discussed a few FsVolume removal race conditions on the DataNode in HDFS-7489. We should figure out a way to make removing an FsVolume safe. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7760) Further update user doc for truncate
[ https://issues.apache.org/jira/browse/HDFS-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-7760: -- Resolution: Fixed Fix Version/s: 2.7.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I just committed this. Further update user doc for truncate Key: HDFS-7760 URL: https://issues.apache.org/jira/browse/HDFS-7760 Project: Hadoop HDFS Issue Type: Sub-task Components: documentation Affects Versions: 2.7.0 Reporter: Yi Liu Assignee: Konstantin Shvachko Priority: Minor Fix For: 2.7.0 Attachments: HDFS-7760-02.patch, HDFS-7760.patch The JIRA is to further update user doc for truncate, for example, WebHDFS and so on. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7704) DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out.
[ https://issues.apache.org/jira/browse/HDFS-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316732#comment-14316732 ] Hadoop QA commented on HDFS-7704: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12697515/HDFS-7704-v5.patch against trunk revision c3da2db. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9538//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9538//console This message is automatically generated. DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out. --- Key: HDFS-7704 URL: https://issues.apache.org/jira/browse/HDFS-7704 Project: Hadoop HDFS Issue Type: Bug Components: datanode, namenode Affects Versions: 2.5.0 Reporter: Rushabh S Shah Assignee: Rushabh S Shah Attachments: HDFS-7704-v2.patch, HDFS-7704-v3.patch, HDFS-7704-v4.patch, HDFS-7704-v5.patch, HDFS-7704.patch There are a couple of synchronous calls in BPOfferService (i.e. reportBadBlocks and trySendErrorReport) which wait for both actor threads to process them.
These calls are made with the write lock acquired. When reportBadBlocks() is blocked at the RPC layer due to an unreachable NN, subsequent heartbeat response processing has to wait for the write lock. It eventually gets through, but takes too long and blocks the next heartbeat. In our HA cluster setup, the standby namenode was taking a long time to process the request. Requesting an improvement in the datanode to make the above calls asynchronous, since these reports don't have any specific deadlines, so an extra few seconds of delay should be acceptable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
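The asynchronous dispatch requested above can be sketched as a small queue drained by a daemon worker, so the heartbeat path never blocks on a slow or unreachable NN. Names here are hypothetical; this is not the BPOfferService code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: enqueue reports instead of issuing them synchronously under the
// write lock; a daemon worker drains the queue and absorbs slow NN RPCs.
public class AsyncReportSketch {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public AsyncReportSketch() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run();  // the RPC to the NN would happen here
                }
            } catch (InterruptedException ignored) {
                // shutdown path
            }
        });
        worker.setDaemon(true);  // do not keep the JVM alive after main() exits
        worker.start();
    }

    // Returns immediately; the caller (e.g. the heartbeat path) never blocks.
    public void reportBadBlocksAsync(Runnable rpc) {
        queue.offer(rpc);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncReportSketch s = new AsyncReportSketch();
        CountDownLatch done = new CountDownLatch(1);
        s.reportBadBlocksAsync(done::countDown);  // stand-in for the NN RPC
        System.out.println(done.await(5, TimeUnit.SECONDS));  // true
    }
}
```

Note the daemon flag on the worker: a non-daemon background thread would keep the JVM alive after main() returns, the same failure mode described in HDFS-6348 above.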
[jira] [Updated] (HDFS-7736) Fix typos in dfsadmin/fsck/snapshotDiff usage messages
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-7736: - Resolution: Fixed Status: Resolved (was: Patch Available) I've committed it into trunk and branch-2. Thanks Brahma for the contribution and Akira for the reviews. Fix typos in dfsadmin/fsck/snapshotDiff usage messages -- Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch Scenario -- Try the following hdfs commands -- 1. # ./hdfs dfsadmin -getStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path] 2. # ./hdfs dfsadmin -setStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName] 3. # ./hdfs fsck Usage:*{color:red} DFSck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks Expected- Usage:*{color:green} hdfs fsck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks 4. # ./hdfs snapshotDiff Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to: Expected- Usage: *{color:green}snapshotDiff{color}* snapshotDir from to: -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7682) {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content
[ https://issues.apache.org/jira/browse/HDFS-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316843#comment-14316843 ] Hadoop QA commented on HDFS-7682: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12698108/HDFS-7682.003.patch against trunk revision b94c111. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9540//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9540//console This message is automatically generated. {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content Key: HDFS-7682 URL: https://issues.apache.org/jira/browse/HDFS-7682 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Charles Lamb Assignee: Charles Lamb Attachments: HDFS-7682.000.patch, HDFS-7682.001.patch, HDFS-7682.002.patch, HDFS-7682.003.patch DistributedFileSystem#getFileChecksum of a snapshotted file includes non-snapshotted content. The reason why this happens is because DistributedFileSystem#getFileChecksum simply calculates the checksum of all of the CRCs from the blocks in the file. 
But, in the case of a snapshotted file, we don't want to include data in the checksum that was appended to the last block in the file after the snapshot was taken. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
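The fix's core idea — compute the checksum only over the byte range that existed at snapshot time, so later appends are excluded — can be sketched with a plain CRC (illustrative; the real getFileChecksum composes per-block CRCs into an MD5-of-MD5s):

```java
import java.util.zip.CRC32;

// Sketch: checksum only up to the file length recorded when the snapshot was
// taken, so bytes appended to the last block afterwards do not leak in.
public class SnapshotChecksumSketch {
    public static long checksumUpTo(byte[] data, int snapshotLength) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, snapshotLength);  // ignore post-snapshot appends
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] atSnapshot = "hello".getBytes();
        byte[] afterAppend = "hello world".getBytes();  // appended after the snapshot
        long a = checksumUpTo(atSnapshot, atSnapshot.length);
        long b = checksumUpTo(afterAppend, atSnapshot.length);
        System.out.println(a == b);  // true: the append doesn't change the snapshot checksum
    }
}
```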
[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use
[ https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316665#comment-14316665 ] Chris Nauroth commented on HDFS-7684: - The Findbugs warning is likely unrelated; it was fixed last night in HDFS-7754. I submitted another Jenkins run: https://builds.apache.org/job/PreCommit-HDFS-Build/9542/ The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use -- Key: HDFS-7684 URL: https://issues.apache.org/jira/browse/HDFS-7684 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.4.1, 2.5.1 Reporter: Tianyin Xu Assignee: Anu Engineer Attachments: HDFS-7684.003.patch, HDFS.7684.001.patch, HDFS.7684.002.patch With the following setting (note the trailing space in the value),
{code}
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>myhostname:50090 </value>
</property>
{code}
the secondary NameNode could not be started:
{code}
$ hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-xxx.out
/home/hadoop/hadoop-2.4.1/bin/hdfs
Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: myhostname:50090
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:196)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
 at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
 at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.getHttpAddress(SecondaryNameNode.java:203)
 at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:214)
 at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.init(SecondaryNameNode.java:192)
 at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:651)
{code}
We were really confused and misled by the log message: we thought about DNS problems (changed to the IP address, but no success) and network problems (tried to test the connections with
no success...). It turned out that the setting is not trimmed, and the additional space character at the end of the value caused the problem... OMG!!! Searching on the Internet, we found we are really not alone: many users have encountered similar trim problems. The following lists a few: http://solaimurugan.blogspot.com/2013/10/hadoop-multi-node-cluster-configuration.html http://stackoverflow.com/questions/11263664/error-while-starting-the-hadoop-using-strat-all-sh https://issues.apache.org/jira/browse/HDFS-2799 https://issues.apache.org/jira/browse/HBASE-6973 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
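The failure mode is easy to reproduce without Hadoop: a trailing space makes the port segment of host:port non-numeric, and trimming the value before parsing fixes it. A minimal standalone sketch (not the Hadoop NetUtils code):

```java
// Sketch of why the untrimmed value fails host:port parsing: "50090 " with a
// trailing space is not a valid integer. Trimming before parsing resolves it.
public class TrimSketch {
    public static int parsePort(String hostPort) {
        int i = hostPort.lastIndexOf(':');
        // Integer.parseInt rejects "50090 " (trailing space) with
        // NumberFormatException
        return Integer.parseInt(hostPort.substring(i + 1));
    }

    public static void main(String[] args) {
        String raw = "myhostname:50090 ";  // value with a trailing space, as configured
        try {
            parsePort(raw);
        } catch (NumberFormatException e) {
            System.out.println("untrimmed value rejected");  // what the SNN hit
        }
        System.out.println(parsePort(raw.trim()));  // 50090
    }
}
```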
[jira] [Updated] (HDFS-7736) Fix typos in dfsadmin/fsck/snapshotDiff usage messages
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haohui Mai updated HDFS-7736: - Summary: Fix typos in dfsadmin/fsck/snapshotDiff usage messages (was: Typos in dfsadmin/fsck/snapshotDiff Commands) Fix typos in dfsadmin/fsck/snapshotDiff usage messages -- Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch Scenario -- Try the following hdfs commands -- 1. # ./hdfs dfsadmin -getStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path] 2. # ./hdfs dfsadmin -setStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName] 3. # ./hdfs fsck Usage:*{color:red} DFSck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks Expected- Usage:*{color:green} hdfs fsck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks 4. # ./hdfs snapshotDiff Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to: Expected- Usage: *{color:green}snapshotDiff{color}* snapshotDir from to: -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7736) Typos in dfsadmin/fsck/snapshotDiff Commands
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316812#comment-14316812 ] Haohui Mai commented on HDFS-7736: -- I'm committing this. Typos in dfsadmin/fsck/snapshotDiff Commands Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch Scenario -- Try the following hdfs commands -- 1. # ./hdfs dfsadmin -getStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path] 2. # ./hdfs dfsadmin -setStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName] 3. # ./hdfs fsck Usage:*{color:red} DFSck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks Expected- Usage:*{color:green} hdfs fsck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks 4. # ./hdfs snapshotDiff Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to: Expected- Usage: *{color:green}snapshotDiff{color}* snapshotDir from to: -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7723) Quota By Storage Type namenode implementation
[ https://issues.apache.org/jira/browse/HDFS-7723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316752#comment-14316752 ] Hadoop QA commented on HDFS-7723: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12698155/editsStored against trunk revision 5dae97a. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9543//console This message is automatically generated. Quota By Storage Type namenode implementation Key: HDFS-7723 URL: https://issues.apache.org/jira/browse/HDFS-7723 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Affects Versions: 3.0.0 Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Attachments: HDFS-7723.0.patch, HDFS-7723.1.patch, HDFS-7723.2.patch, HDFS-7723.3.patch, HDFS-7723.4.patch, HDFS-7723.5.patch, HDFS-7723.6.patch, editsStored This includes: 1) a new editlog op to persist quota by storage type 2) corresponding fsimage load/save of the new op 3) a QuotaCounts refactor to track usage per storage type for quota enforcement 4) snapshot support 5) unit test updates -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7720) Quota by Storage Type API, tools and ClientNameNode Protocol changes
[ https://issues.apache.org/jira/browse/HDFS-7720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316821#comment-14316821 ] Hudson commented on HDFS-7720: -- FAILURE: Integrated in Hadoop-trunk-Commit #7077 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7077/]) HDFS-7720. Update CHANGES.txt to reflect merge to branch-2. (arp: rev 078f3a9bc7ce9d06ae2de3e65a099ee655bce483) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Quota by Storage Type API, tools and ClientNameNode Protocol changes Key: HDFS-7720 URL: https://issues.apache.org/jira/browse/HDFS-7720 Project: Hadoop HDFS Issue Type: Sub-task Components: datanode, namenode Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Fix For: 3.0.0 Attachments: HDFS-7720.0.patch, HDFS-7720.1.patch, HDFS-7720.2.patch, HDFS-7720.3.patch, HDFS-7720.4.patch Split the patch into small ones based on the feedback. This one covers the HDFS API changes, tool changes and ClientNameNode protocol changes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7716) Erasure Coding: extend BlockInfo to handle EC info
[ https://issues.apache.org/jira/browse/HDFS-7716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316642#comment-14316642 ] Zhe Zhang commented on HDFS-7716: - Thanks Jing! It seems this patch passes all unit tests. We have a Jenkins job (https://builds.apache.org/job/Hadoop-HDFS-7285-nightly/) and I just added you as a watcher. Erasure Coding: extend BlockInfo to handle EC info -- Key: HDFS-7716 URL: https://issues.apache.org/jira/browse/HDFS-7716 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Jing Zhao Assignee: Jing Zhao Attachments: HDFS-7716.000.patch, HDFS-7716.001.patch, HDFS-7716.002.patch, HDFS-7716.003.patch The current BlockInfo implementation only supports the replication mechanism. To use the same blocksMap to handle a block group and its data/parity blocks, we need to define a new BlockGroupInfo class. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
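The data structure the description calls for — one record per block group, spanning its data and parity blocks — can be sketched as below. This is a hypothetical illustration, not the BlockGroupInfo class from the HDFS-7285 branch:

```java
// Hypothetical sketch of a block-group record: a group id plus derived ids
// for its data and parity blocks, so one blocksMap entry can cover the group.
public class BlockGroupSketch {
    public final long groupId;
    public final long[] dataBlocks;
    public final long[] parityBlocks;

    public BlockGroupSketch(long groupId, int dataNum, int parityNum) {
        this.groupId = groupId;
        this.dataBlocks = new long[dataNum];
        this.parityBlocks = new long[parityNum];
        // Internal block ids are derived from the group id so any member can
        // be resolved back to its group (real code reserves id bits for this).
        for (int i = 0; i < dataNum; i++) dataBlocks[i] = groupId + i;
        for (int i = 0; i < parityNum; i++) parityBlocks[i] = groupId + dataNum + i;
    }

    public static void main(String[] args) {
        // a 6-data / 3-parity layout, purely as an example
        BlockGroupSketch g = new BlockGroupSketch(1L << 40, 6, 3);
        System.out.println(g.dataBlocks.length + g.parityBlocks.length);  // 9
    }
}
```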
[jira] [Commented] (HDFS-3689) Add support for variable length block
[ https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316816#comment-14316816 ] Arpit Agarwal commented on HDFS-3689: - Hi [~jingzhao], can we merge this change to branch-2? Thanks. Add support for variable length block - Key: HDFS-3689 URL: https://issues.apache.org/jira/browse/HDFS-3689 Project: Hadoop HDFS Issue Type: New Feature Components: datanode, hdfs-client, namenode Affects Versions: 3.0.0 Reporter: Suresh Srinivas Assignee: Jing Zhao Fix For: 3.0.0 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch, HDFS-3689.007.patch, HDFS-3689.008.patch, HDFS-3689.008.patch, HDFS-3689.009.patch, HDFS-3689.009.patch, HDFS-3689.010.patch, editsStored Currently HDFS supports fixed length blocks. Supporting variable length blocks will allow new use cases and features to be built on top of HDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7704) DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out.
[ https://issues.apache.org/jira/browse/HDFS-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316267#comment-14316267 ] Hadoop QA commented on HDFS-7704: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12697515/HDFS-7704-v5.patch against trunk revision c541a37. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9536//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9536//console This message is automatically generated. DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out. --- Key: HDFS-7704 URL: https://issues.apache.org/jira/browse/HDFS-7704 Project: Hadoop HDFS Issue Type: Bug Components: datanode, namenode Affects Versions: 2.5.0 Reporter: Rushabh S Shah Assignee: Rushabh S Shah Attachments: HDFS-7704-v2.patch, HDFS-7704-v3.patch, HDFS-7704-v4.patch, HDFS-7704-v5.patch, HDFS-7704.patch There are a couple of synchronous calls in BPOfferService (i.e. reportBadBlocks and trySendErrorReport) which wait for both actor threads to process them. These calls are made with the write lock acquired.
When reportBadBlocks() is blocked at the RPC layer due to an unreachable NN, subsequent heartbeat response processing has to wait for the write lock. It eventually gets through, but takes too long and blocks the next heartbeat. In our HA cluster setup, the standby namenode was taking a long time to process the request. Requesting an improvement in the datanode to make the above calls asynchronous, since these reports don't have any specific deadlines, so an extra few seconds of delay should be acceptable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7682) {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content
[ https://issues.apache.org/jira/browse/HDFS-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charles Lamb updated HDFS-7682: --- Attachment: HDFS-7682.003.patch [~jingzhao], Thanks for the comments. I think the latest patch addresses them by changing the test to a check for the src path being a snapshotted file. Charles {{DistributedFileSystem#getFileChecksum}} of a snapshotted file includes non-snapshotted content Key: HDFS-7682 URL: https://issues.apache.org/jira/browse/HDFS-7682 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Charles Lamb Assignee: Charles Lamb Attachments: HDFS-7682.000.patch, HDFS-7682.001.patch, HDFS-7682.002.patch, HDFS-7682.003.patch DistributedFileSystem#getFileChecksum of a snapshotted file includes non-snapshotted content. This happens because DistributedFileSystem#getFileChecksum simply calculates the checksum of all of the CRCs from the blocks in the file. But, in the case of a snapshotted file, we don't want to include data in the checksum that was appended to the last block in the file after the snapshot was taken. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir
[ https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316275#comment-14316275 ] Hudson commented on HDFS-7769: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #2033 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2033/]) HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir. (szetszwo: rev 7c6b6547eeed110e1a842e503bfd33afe04fa814) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml TestHDFSCLI create files in hdfs project root dir - Key: HDFS-7769 URL: https://issues.apache.org/jira/browse/HDFS-7769 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Tsz Wo Nicholas Sze Assignee: Tsz Wo Nicholas Sze Priority: Trivial Fix For: 2.7.0 Attachments: h7769_20150210.patch, h7769_20150210b.patch After running TestHDFSCLI, two files (data and .data.crc) remain in hdfs project root dir. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7753) Fix Multithreaded correctness Warnings in BackupImage.java
[ https://issues.apache.org/jira/browse/HDFS-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316273#comment-14316273 ] Hudson commented on HDFS-7753: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #2033 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2033/]) HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage. Contributed by Rakesh R and Konstantin Shvachko. (shv: rev a4ceea60f57a32d531549e492aa5894dd34e0d0f) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fix Multithreaded correctness Warnings in BackupImage.java -- Key: HDFS-7753 URL: https://issues.apache.org/jira/browse/HDFS-7753 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Rakesh R Assignee: Konstantin Shvachko Fix For: 2.7.0 Attachments: HDFS-7753-02.patch, HDFS-7753.patch Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem; locked 60% of time {code} Bug type IS2_INCONSISTENT_SYNC (click for details) In class org.apache.hadoop.hdfs.server.namenode.BackupImage Field org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem Synchronized 60% of the time Unsynchronized access at BackupImage.java:[line 97] Unsynchronized access at BackupImage.java:[line 261] Synchronized access at BackupImage.java:[line 197] Synchronized access at BackupImage.java:[line 212] Synchronized access at BackupImage.java:[line 295] {code} https://builds.apache.org/job/PreCommit-HDFS-Build/9493//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html#Details -- This message was sent by Atlassian JIRA (v6.3.4#6332)
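For readers unfamiliar with this FindBugs pattern: IS2_INCONSISTENT_SYNC fires when a field is accessed with the lock held in some places and without it in others. The minimal sketch below (illustrative, not the actual BackupImage code) shows the shape of the fix — route every access through synchronized accessors so the field is guarded consistently.

```java
// Sketch of resolving IS2_INCONSISTENT_SYNC: every read and write of
// the guarded field goes through a synchronized method, so the field
// is locked 100% of the time rather than 60%.
public class GuardedHolder {
    private Object namesystem;  // guarded by "this" on every access

    public synchronized void setNamesystem(Object ns) {
        namesystem = ns;
    }

    public synchronized Object getNamesystem() {
        return namesystem;
    }
}
```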
[jira] [Commented] (HDFS-7704) DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out.
[ https://issues.apache.org/jira/browse/HDFS-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316313#comment-14316313 ] Kihwal Lee commented on HDFS-7704: -- The precommit was killed by another job (#9537) and ran only for 18 minutes. Kicking it again to get unit tests to run. DN heartbeat to Active NN may be blocked and expire if connection to Standby NN continues to time out. --- Key: HDFS-7704 URL: https://issues.apache.org/jira/browse/HDFS-7704 Project: Hadoop HDFS Issue Type: Bug Components: datanode, namenode Affects Versions: 2.5.0 Reporter: Rushabh S Shah Assignee: Rushabh S Shah Attachments: HDFS-7704-v2.patch, HDFS-7704-v3.patch, HDFS-7704-v4.patch, HDFS-7704-v5.patch, HDFS-7704.patch There are a couple of synchronous calls in BPOfferService (i.e. reportBadBlocks and trySendErrorReport) which wait for both of the actor threads to process these calls. These calls are made with the writeLock acquired. When reportBadBlocks() is blocked at the RPC layer due to an unreachable NN, subsequent heartbeat response processing has to wait for the write lock. It eventually gets through, but takes too long and blocks the next heartbeat. In our HA cluster setup, the standby namenode was taking a long time to process the request. Requesting an improvement in the datanode to make the above calls asynchronous; since these reports don't have any specific deadlines, an extra few seconds of delay should be acceptable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
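The requested improvement — don't issue blocking RPCs while holding the write lock — can be sketched with a simple hand-off queue (illustrative only; the real patch changes BPOfferService internals): the caller enqueues the report under the lock, and an actor thread performs the RPC later without any lock held.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the async hand-off pattern. Names are illustrative, not
// the BPOfferService API.
public class AsyncReporter {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    /** Called while holding the heartbeat write lock: only enqueues,
     *  never blocks on the RPC itself. */
    public void reportBadBlocks(Runnable rpcCall) {
        queue.add(rpcCall);
    }

    /** Runs on a separate actor thread, outside any namesystem lock,
     *  so a slow or unreachable NN cannot stall heartbeat processing.
     *  Returns the number of queued reports executed. */
    public int drain() {
        int n = 0;
        Runnable r;
        while ((r = queue.poll()) != null) {
            r.run();
            n++;
        }
        return n;
    }
}
```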
[jira] [Commented] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir
[ https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316329#comment-14316329 ] Hudson commented on HDFS-7769: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #102 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/102/]) HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir. (szetszwo: rev 7c6b6547eeed110e1a842e503bfd33afe04fa814) * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml TestHDFSCLI create files in hdfs project root dir - Key: HDFS-7769 URL: https://issues.apache.org/jira/browse/HDFS-7769 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Tsz Wo Nicholas Sze Assignee: Tsz Wo Nicholas Sze Priority: Trivial Fix For: 2.7.0 Attachments: h7769_20150210.patch, h7769_20150210b.patch After running TestHDFSCLI, two files (data and .data.crc) remain in hdfs project root dir. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7753) Fix Multithreaded correctness Warnings in BackupImage.java
[ https://issues.apache.org/jira/browse/HDFS-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316327#comment-14316327 ] Hudson commented on HDFS-7753: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #102 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/102/]) HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage. Contributed by Rakesh R and Konstantin Shvachko. (shv: rev a4ceea60f57a32d531549e492aa5894dd34e0d0f) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fix Multithreaded correctness Warnings in BackupImage.java -- Key: HDFS-7753 URL: https://issues.apache.org/jira/browse/HDFS-7753 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.7.0 Reporter: Rakesh R Assignee: Konstantin Shvachko Fix For: 2.7.0 Attachments: HDFS-7753-02.patch, HDFS-7753.patch Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem; locked 60% of time {code} Bug type IS2_INCONSISTENT_SYNC (click for details) In class org.apache.hadoop.hdfs.server.namenode.BackupImage Field org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem Synchronized 60% of the time Unsynchronized access at BackupImage.java:[line 97] Unsynchronized access at BackupImage.java:[line 261] Synchronized access at BackupImage.java:[line 197] Synchronized access at BackupImage.java:[line 212] Synchronized access at BackupImage.java:[line 295] {code} https://builds.apache.org/job/PreCommit-HDFS-Build/9493//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html#Details -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7736) Typos in dfsadmin/fsck/snapshotDiff Commands
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HDFS-7736: --- Attachment: HDFS-7736-004.patch Typos in dfsadmin/fsck/snapshotDiff Commands Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch Scenario -- Try the following hdfs commands --
1. # ./hdfs dfsadmin -getStoragePolicy
Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path]
Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path]
2. # ./hdfs dfsadmin -setStoragePolicy
Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName]
Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName]
3. # ./hdfs fsck
Usage:*{color:red} DFSck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks
Expected- Usage:*{color:green} hdfs fsck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks
4. # ./hdfs snapshotDiff
Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to:
Expected- Usage: *{color:green}snapshotDiff{color}* snapshotDir from to:
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6348) Secondary namenode - RMI Thread prevents JVM from exiting after main() completes
[ https://issues.apache.org/jira/browse/HDFS-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316894#comment-14316894 ] Hadoop QA commented on HDFS-6348: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12698124/HDFS-6348.patch against trunk revision b379972. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.hdfs.server.balancer.TestBalancer org.apache.hadoop.hdfs.TestDecommission Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9541//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9541//console This message is automatically generated. 
Secondary namenode - RMI Thread prevents JVM from exiting after main() completes - Key: HDFS-6348 URL: https://issues.apache.org/jira/browse/HDFS-6348 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.3.0 Reporter: Rakesh R Assignee: Rakesh R Fix For: 2.7.0 Attachments: HDFS-6348.patch, HDFS-6348.patch, secondaryNN_threaddump_after_exit.log The Secondary NameNode does not exit when a RuntimeException occurs during startup. Say I provided a wrong configuration; because of that, validation failed and threw a RuntimeException as shown below. But when I check the environment, the SecondaryNameNode process is still alive. When analysed, an RMI thread is still alive; since it is not a daemon thread, the JVM is not exiting. I'm attaching a thread dump to this JIRA for more details about the thread.
{code}
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1900)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:199)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.init(BlockManager.java:256)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:635)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:260)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.init(SecondaryNameNode.java:205)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:695)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1868)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1892)
	... 6 more
Caused by: java.lang.ClassNotFoundException: Class com.huawei.hadoop.hdfs.server.blockmanagement.MyBlockPlacementPolicy not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1774)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1866)
	... 7 more
2014-05-07 14:27:04,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2014-05-07 14:27:04,666 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2014-05-07 14:31:04,926 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: STARTUP_MSG:
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
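The underlying hazard here is that a non-daemon thread (the RMI thread in this case) keeps the JVM alive even after main() returns with an exception. A common remedy, sketched below with illustrative names, is to translate startup failure into an explicit exit code and pass it to System.exit() (Hadoop has ExitUtil.terminate() for this), so lingering non-daemon threads cannot block shutdown. The helper returns the code rather than exiting so it stays testable.

```java
// Sketch of the exit-code pattern; not the HDFS-6348 patch itself.
public class DaemonStartup {

    /** Runs the startup sequence and maps any RuntimeException to a
     *  nonzero exit code. A real daemon would pass this code to
     *  System.exit(), guaranteeing the JVM terminates even if
     *  non-daemon threads (such as the RMI thread) are still alive. */
    public static int run(Runnable startup) {
        try {
            startup.run();
            return 0;
        } catch (RuntimeException e) {
            // In a real daemon: log the failure, then let the caller
            // invoke System.exit(1) to force termination.
            return 1;
        }
    }
}
```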
[jira] [Updated] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-7778: Attachment: HDFS-7778-trunk.000.patch HDFS-7778-branch2.000.patch Updated the patch to rename {{FsVolumeListTest}} and add it back to branch-2. [~cnauroth] Would you mind reviewing this? Thank you so much! Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It also mistakenly removed the test from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-7778: Attachment: (was: HDFS-7778-branch2.000.patch) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named {{FsVolumeListTest}}, which causes it out of jenkin tests. Also it mistakenly removed it from branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7771) fuse_dfs should permit FILE: on the front of KRB5CCNAME
[ https://issues.apache.org/jira/browse/HDFS-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316930#comment-14316930 ] Hudson commented on HDFS-7771: -- FAILURE: Integrated in Hadoop-trunk-Commit #7079 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7079/]) HDFS-7771. fuse_dfs should permit FILE: on the front of KRB5CCNAME (cmccabe) (cmccabe: rev 50625e660ac0f76e7fe46d55df3d15cbbf058753) * hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_connect.c * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt fuse_dfs should permit FILE: on the front of KRB5CCNAME --- Key: HDFS-7771 URL: https://issues.apache.org/jira/browse/HDFS-7771 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.3.0 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Fix For: 2.7.0 Attachments: HDFS-7771.001.patch {{fuse_dfs}} should permit FILE: to appear on the front of the {{KRB5CCNAME}} environment variable. This prefix indicates that the kerberos ticket cache is stored in the following file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
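The actual change above is in the C file fuse_connect.c, but the behavior is simple to state: krb5 allows KRB5CCNAME to carry a "FILE:" type prefix, and the code should accept either form. Here is a sketch of the equivalent logic, written in Java for consistency with the other examples in this digest; the class and method names are illustrative.

```java
// Illustrative sketch of accepting an optional "FILE:" prefix on the
// KRB5CCNAME environment variable, as HDFS-7771 does for fuse_dfs.
public class Krb5CacheName {

    /** Resolves a KRB5CCNAME value to a filesystem path, accepting an
     *  optional "FILE:" prefix that names the file-based cache type. */
    public static String ccachePath(String krb5ccname) {
        final String prefix = "FILE:";
        if (krb5ccname.startsWith(prefix)) {
            return krb5ccname.substring(prefix.length());
        }
        return krb5ccname;
    }
}
```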
[jira] [Updated] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-7778: Attachment: HDFS-7778-branch2.000.patch Revised branch-2 patch. I did following changes: 1. Use {{git diff trunk -- **/FsVolumeList.java}} and etc to backport all diffs related to {{FsVolumeReference}} to branch-2 {{FsVolumeList/FsDatasetImpl/TestFsVolumeList/TestFsDatasetImpl}}. 2. Add the following line: {code:title=FsVolumeList#addVolume()} IOUtils.cleanup(null, ref); FsDatasetImpl.LOG.info(Added new volume: + ref.getVolume().getStorageID()); {code} Since in trunk, the reference will be released in the block scanner. Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named {{FsVolumeListTest}}, which causes it out of jenkin tests. Also it mistakenly removed it from branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317001#comment-14317001 ] Hadoop QA commented on HDFS-7778: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12698182/HDFS-7778-branch2.000.patch against trunk revision 50625e6. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9545//console This message is automatically generated. Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named {{FsVolumeListTest}}, which causes it out of jenkin tests. Also it mistakenly removed it from branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7771) fuse_dfs should permit FILE: on the front of KRB5CCNAME
[ https://issues.apache.org/jira/browse/HDFS-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316898#comment-14316898 ] Colin Patrick McCabe commented on HDFS-7771: Thanks, guys. Findbugs warning is about BackupImage (again), which was not modified by this patch. TestRetryCacheWithHA failure is unrelated, since this patch only modifies {{fuse_dfs}}, which is not tested by that test. Committing fuse_dfs should permit FILE: on the front of KRB5CCNAME --- Key: HDFS-7771 URL: https://issues.apache.org/jira/browse/HDFS-7771 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.3.0 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Attachments: HDFS-7771.001.patch {{fuse_dfs}} should permit FILE: to appear on the front of the {{KRB5CCNAME}} environment variable. This prefix indicates that the kerberos ticket cache is stored in the following file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-7778: Status: Patch Available (was: Open) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named {{FsVolumeListTest}}, which causes it out of jenkin tests. Also it mistakenly removed it from branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316945#comment-14316945 ] Chris Nauroth commented on HDFS-7778: - Thanks for filing this jira. The trunk patch looks good to me. On branch-2, I suspect the conflict is related to HDFS-7430, which only targeted 3.0.0. Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named {{FsVolumeListTest}}, which causes it out of jenkin tests. Also it mistakenly removed it from branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-7102) Null dereference in PacketReceiver#receiveNextPacket()
[ https://issues.apache.org/jira/browse/HDFS-7102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HDFS-7102. -- Resolution: Later Null dereference in PacketReceiver#receiveNextPacket() -- Key: HDFS-7102 URL: https://issues.apache.org/jira/browse/HDFS-7102 Project: Hadoop HDFS Issue Type: Bug Reporter: Ted Yu Priority: Minor
{code}
public void receiveNextPacket(ReadableByteChannel in) throws IOException {
  doRead(in, null);
{code}
doRead() would pass null as the second parameter to (line 134):
{code}
doReadFully(ch, in, curPacketBuf);
{code}
which dereferences it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
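A typical remedy for this class of static-analysis report — whether or not it is exploitable in practice — is to fail fast with a descriptive exception before the buffer reaches code that dereferences it. A generic sketch (not the PacketReceiver source):

```java
import java.nio.ByteBuffer;

// Illustrative null-guard pattern for a buffer that later code
// dereferences unconditionally.
public class PacketBufferGuard {

    /** Validates the buffer up front so a misuse surfaces as a clear
     *  IllegalStateException instead of a NullPointerException deep
     *  inside the read path. */
    public static int readableBytes(ByteBuffer curPacketBuf) {
        if (curPacketBuf == null) {
            throw new IllegalStateException("packet buffer not allocated");
        }
        return curPacketBuf.remaining();
    }
}
```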
[jira] [Commented] (HDFS-7496) Fix FsVolume removal race conditions on the DataNode by reference-counting the volume instances
[ https://issues.apache.org/jira/browse/HDFS-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316868#comment-14316868 ] Chris Nauroth commented on HDFS-7496: - [~eddyxu], thanks for the response. Let's handle it in a new jira, since this one has been closed for a while. Please feel free to contact me on the new jira for code review. Fix FsVolume removal race conditions on the DataNode by reference-counting the volume instances --- Key: HDFS-7496 URL: https://issues.apache.org/jira/browse/HDFS-7496 Project: Hadoop HDFS Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Lei (Eddy) Xu Fix For: 2.7.0 Attachments: HDFS-7496-branch-2.000.patch, HDFS-7496.000.patch, HDFS-7496.001.patch, HDFS-7496.002.patch, HDFS-7496.003.patch, HDFS-7496.003.patch, HDFS-7496.004.patch, HDFS-7496.005.patch, HDFS-7496.006.patch, HDFS-7496.007.patch We discussed a few FsVolume removal race conditions on the DataNode in HDFS-7489. We should figure out a way to make removing an FsVolume safe. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
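The reference-counting approach in the HDFS-7496 title reduces to a small invariant: every user of a volume obtains a reference before touching it and releases it afterwards, and removal only completes once the count reaches zero. A simplified sketch (illustrative; not the FsVolumeReference API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified reference-counted volume. Obtain before use, release
// after; removal waits until no one holds a reference.
public class CountedVolume {
    private final AtomicInteger refs = new AtomicInteger();
    private volatile boolean removed = false;

    /** Obtains a reference; the returned Runnable releases it. */
    public Runnable obtain() {
        refs.incrementAndGet();
        return refs::decrementAndGet;
    }

    /** Removal succeeds only when no reader/writer still holds a reference. */
    public boolean tryRemove() {
        if (refs.get() == 0) {
            removed = true;
        }
        return removed;
    }

    public int refCount() {
        return refs.get();
    }
}
```

A production version would need an atomic check-and-mark (e.g. a CAS over a combined count-plus-removed state) to close the window between the count check and setting the removal flag — races of exactly that kind are what HDFS-7489 discussed.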
[jira] [Updated] (HDFS-7771) fuse_dfs should permit FILE: on the front of KRB5CCNAME
[ https://issues.apache.org/jira/browse/HDFS-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-7771: --- Resolution: Fixed Fix Version/s: 2.7.0 Status: Resolved (was: Patch Available) fuse_dfs should permit FILE: on the front of KRB5CCNAME --- Key: HDFS-7771 URL: https://issues.apache.org/jira/browse/HDFS-7771 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.3.0 Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Fix For: 2.7.0 Attachments: HDFS-7771.001.patch {{fuse_dfs}} should permit FILE: to appear on the front of the {{KRB5CCNAME}} environment variable. This prefix indicates that the kerberos ticket cache is stored in the following file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
Lei (Eddy) Xu created HDFS-7778: --- Summary: Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It also mistakenly removed the test from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-957) FSImage layout version should be written only once file is complete
[ https://issues.apache.org/jira/browse/HDFS-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316883#comment-14316883 ] Todd Lipcon commented on HDFS-957: -- I don't think this is necessary anymore since HDFS-1073 was implemented a couple of years ago. FSImage layout version should be written only once file is complete --- Key: HDFS-957 URL: https://issues.apache.org/jira/browse/HDFS-957 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Affects Versions: 0.22.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-957.txt Right now, the FSImage save code writes the LAYOUT_VERSION at the head of the file, along with some other headers, and then dumps the directory into the file. Instead, it should write a special IMAGE_IN_PROGRESS entry for the layout version, dump all of the data, then seek back to the head of the file to write the proper LAYOUT_VERSION. This would make it very easy to detect the case where the FSImage save got interrupted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
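The proposal in the description is a classic two-phase header write: stamp the header with an in-progress marker, write the body, then seek back and write the real value only once the file is complete. A minimal sketch with java.io (illustrative constants; not the FSImage on-disk format):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Two-phase header write: a reader that sees IN_PROGRESS knows the
// save was interrupted before completion.
public class TwoPhaseImageWriter {
    static final int IN_PROGRESS = -1;  // placeholder "layout version"

    public static void write(File f, int layoutVersion, byte[] body) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.writeInt(IN_PROGRESS);   // header says "incomplete"
            raf.write(body);             // dump the image data
            raf.seek(0);
            raf.writeInt(layoutVersion); // mark complete only at the end
        }
    }

    public static int readVersion(File f) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            return raf.readInt();
        }
    }
}
```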
[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager
[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316965#comment-14316965 ] Andrew Wang commented on HDFS-7411: --- bq. Andrew, even though you prefer estimates or averages that approximate the existing behavior, halting when either of the limits are hit would move this forward. Saying to use the node limit is underspecified, since the new code only iterates over decomming nodes, whereas the old code iterates over all nodes. This constitutes a major behavior change, but Nicholas said that iterating over non-decomming nodes is a bug that should be fixed. This is why I've been trying to elevate the discussion to what constitutes good or bad user experience. I have a hard time understanding why iterating over just decomming nodes is an allowable change (even though it'll have a huge effect on pause times and decom rate), but the rest of my proposals are not okay because they constitute a behavior change. Refactor and improve decommissioning logic into DecommissionManager --- Key: HDFS-7411 URL: https://issues.apache.org/jira/browse/HDFS-7411 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.5.1 Reporter: Andrew Wang Assignee: Andrew Wang Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, hdfs-7411.009.patch, hdfs-7411.010.patch Would be nice to split out decommission logic from DatanodeManager to DecommissionManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316908#comment-14316908 ] Lei (Eddy) Xu commented on HDFS-7778: - Seems that there are more conflicts on branch-2 than simply adding the test. I am working on it now. I will let you know once the conflicts are resolved, [~cnauroth]. Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It also mistakenly removed the test from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7684) The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use
[ https://issues.apache.org/jira/browse/HDFS-7684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14316963#comment-14316963 ] Hadoop QA commented on HDFS-7684: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12697953/HDFS-7684.003.patch against trunk revision 22441ab. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9542//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9542//console This message is automatically generated. 
The host:port settings of dfs.namenode.secondary.http-address should be trimmed before use -- Key: HDFS-7684 URL: https://issues.apache.org/jira/browse/HDFS-7684 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.4.1, 2.5.1 Reporter: Tianyin Xu Assignee: Anu Engineer Attachments: HDFS-7684.003.patch, HDFS.7684.001.patch, HDFS.7684.002.patch With the following setting,
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>myhostname:50090 </value>
</property>
the secondary NameNode could not be started:
$ hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-xxx.out
/home/hadoop/hadoop-2.4.1/bin/hdfs
Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: myhostname:50090
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:196)
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.getHttpAddress(SecondaryNameNode.java:203)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:214)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.init(SecondaryNameNode.java:192)
	at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:651)
We were really confused and misled by the log message: we thought about DNS problems (changed to the IP address with no success) and network problems (tried to test the connections with no success...). It turned out that the setting is not trimmed, and the additional space character at the end of the setting caused the problem... OMG!!!
Searching on the Internet, we find we are really not alone. So many users encountered similar trim problems!
The following lists a few:
http://solaimurugan.blogspot.com/2013/10/hadoop-multi-node-cluster-configuration.html
http://stackoverflow.com/questions/11263664/error-while-starting-the-hadoop-using-strat-all-sh
https://issues.apache.org/jira/browse/HDFS-2799
https://issues.apache.org/jira/browse/HBASE-6973
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
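The general fix pattern for this family of bugs is to trim configuration values before parsing them (Hadoop's Configuration class has getTrimmed() for exactly this purpose). A self-contained sketch of the parsing step, with a hypothetical helper name:

```java
// Illustrative sketch: trim whitespace (such as the trailing space
// from the XML value above) before parsing host:port, so "myhostname:50090 "
// is accepted instead of rejected.
public class HostPortConf {

    /** Returns {host, port} from a possibly-padded "host:port" value. */
    public static String[] hostPort(String raw) {
        String trimmed = raw.trim();
        int i = trimmed.lastIndexOf(':');
        if (i <= 0) {
            throw new IllegalArgumentException(
                "Does not contain a valid host:port authority: " + raw);
        }
        return new String[]{ trimmed.substring(0, i), trimmed.substring(i + 1) };
    }
}
```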
[jira] [Resolved] (HDFS-957) FSImage layout version should be written only once file is complete
[ https://issues.apache.org/jira/browse/HDFS-957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Todd Lipcon resolved HDFS-957. -- Resolution: Won't Fix FSImage layout version should be written only once file is complete --- Key: HDFS-957 URL: https://issues.apache.org/jira/browse/HDFS-957 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Affects Versions: 0.22.0 Reporter: Todd Lipcon Assignee: Todd Lipcon Attachments: hdfs-957.txt Right now, the FSImage save code writes the LAYOUT_VERSION at the head of the file, along with some other headers, and then dumps the directory into the file. Instead, it should write a special IMAGE_IN_PROGRESS entry for the layout version, dump all of the data, then seek back to the head of the file to write the proper LAYOUT_VERSION. This would make it very easy to detect the case where the FSImage save got interrupted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
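The write-placeholder-then-seek-back sequence proposed in HDFS-957 can be sketched with a plain RandomAccessFile (the constants and file layout here are illustrative, not HDFS's actual image format):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the proposed save sequence: write a placeholder version,
// dump the image data, then seek back and stamp the real layout version.
public class TwoPhaseImageWrite {
    static final int IMAGE_IN_PROGRESS = -1; // illustrative sentinel
    static final int LAYOUT_VERSION = -32;   // illustrative version number

    static void saveImage(Path file, byte[] imageData) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.writeInt(IMAGE_IN_PROGRESS); // placeholder header
            raf.write(imageData);            // dump the directory tree
            raf.seek(0);                     // back to the head of the file
            raf.writeInt(LAYOUT_VERSION);    // mark the image as complete
        }
    }

    static boolean isComplete(Path file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            return raf.readInt() != IMAGE_IN_PROGRESS;
        }
    }

    /** Round trip: save a tiny image, then check the completeness marker. */
    static boolean demo() {
        try {
            Path f = Files.createTempFile("fsimage", ".tmp");
            saveImage(f, new byte[] {1, 2, 3});
            return isComplete(f);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

A reader that sees the placeholder value at offset 0 knows the save was interrupted, which is exactly the detection the issue asks for.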
[jira] [Commented] (HDFS-6946) TestBalancerWithSaslDataTransfer fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317070#comment-14317070 ] Ted Yu commented on HDFS-6946: -- This test hasn't failed in recent builds. TestBalancerWithSaslDataTransfer fails in trunk --- Key: HDFS-6946 URL: https://issues.apache.org/jira/browse/HDFS-6946 Project: Hadoop HDFS Issue Type: Test Reporter: Ted Yu Assignee: Stephen Chu Priority: Minor Attachments: HDFS-6946.1.patch, testBalancer0Integrity-failure.log From build #1849 : {code} REGRESSION: org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity Error Message: Cluster failed to reached expected values of totalSpace (current: 750, expected: 750), or usedSpace (current: 140, expected: 150), in more than 4 msec. Stack Trace: java.util.concurrent.TimeoutException: Cluster failed to reached expected values of totalSpace (current: 750, expected: 750), or usedSpace (current: 140, expected: 150), in more than 4 msec. at org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForHeartBeat(TestBalancer.java:253) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:578) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:551) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:759) at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity(TestBalancerWithSaslDataTransfer.java:34) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs
[ https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-7713: --- Attachment: HDFS-7713.02.patch Hi Haohui! Thanks for the suggestion. Here's an updated patch which only adds the mkdir feature. The patch file itself is deceptively large because it includes bootstrap. The changes to explorer.html, explorer.js and hadoop.css are fairly limited (and I actually indented some code properly, so the diff looks larger than it really is). Improve the HDFS Web UI browser to allow creating dirs -- Key: HDFS-7713 URL: https://issues.apache.org/jira/browse/HDFS-7713 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Ravi Prakash Assignee: Ravi Prakash Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch This JIRA is for improving the NN UI (everything except file uploads) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-3689) Add support for variable length block
[ https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317185#comment-14317185 ] Arpit Agarwal commented on HDFS-3689: - Thanks a lot for generating the branch-2 merge patch Jing! Since no existing clients will be affected by this feature +1 on merging to branch-2 this week and fixing any issues as they come up. Add support for variable length block - Key: HDFS-3689 URL: https://issues.apache.org/jira/browse/HDFS-3689 Project: Hadoop HDFS Issue Type: New Feature Components: datanode, hdfs-client, namenode Affects Versions: 3.0.0 Reporter: Suresh Srinivas Assignee: Jing Zhao Fix For: 3.0.0 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch, HDFS-3689.007.patch, HDFS-3689.008.patch, HDFS-3689.008.patch, HDFS-3689.009.patch, HDFS-3689.009.patch, HDFS-3689.010.patch, HDFS-3689.branch-2.patch, editsStored Currently HDFS supports fixed length blocks. Supporting variable length block will allow new use cases and features to be built on top of HDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path
[ https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317184#comment-14317184 ] Tsz Wo Nicholas Sze commented on HDFS-6133: --- I have committed this. Thanks, zhaoyunjiong! Make Balancer support exclude specified path Key: HDFS-6133 URL: https://issues.apache.org/jira/browse/HDFS-6133 Project: Hadoop HDFS Issue Type: Improvement Components: balancer mover, namenode Reporter: zhaoyunjiong Assignee: zhaoyunjiong Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, HDFS-6133-9.patch, HDFS-6133.patch Currently, running the Balancer will destroy the RegionServer's data locality. If getBlocks could exclude blocks belonging to files with a specific path prefix, like /hbase, then we could run the Balancer without destroying the RegionServer's data locality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path
[ https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317200#comment-14317200 ] Yongjun Zhang commented on HDFS-6133: - Thanks Nicholas for committing this. Hi [~zhaoyunjiong], Would you please answer the questions in https://issues.apache.org/jira/browse/HDFS-6133?focusedCommentId=14314368page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14314368 ? Thanks. Make Balancer support exclude specified path Key: HDFS-6133 URL: https://issues.apache.org/jira/browse/HDFS-6133 Project: Hadoop HDFS Issue Type: Improvement Components: balancer mover, datanode Reporter: zhaoyunjiong Assignee: zhaoyunjiong Fix For: 2.7.0 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, HDFS-6133-9.patch, HDFS-6133.patch Currently, running the Balancer will destroy the RegionServer's data locality. If getBlocks could exclude blocks belonging to files with a specific path prefix, like /hbase, then we could run the Balancer without destroying the RegionServer's data locality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6946) TestBalancerWithSaslDataTransfer fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-6946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317202#comment-14317202 ] Hadoop QA commented on HDFS-6946: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12665044/testBalancer0Integrity-failure.log against trunk revision 085b1e2. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9547//console This message is automatically generated. TestBalancerWithSaslDataTransfer fails in trunk --- Key: HDFS-6946 URL: https://issues.apache.org/jira/browse/HDFS-6946 Project: Hadoop HDFS Issue Type: Test Reporter: Ted Yu Assignee: Stephen Chu Priority: Minor Attachments: HDFS-6946.1.patch, testBalancer0Integrity-failure.log From build #1849 : {code} REGRESSION: org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity Error Message: Cluster failed to reached expected values of totalSpace (current: 750, expected: 750), or usedSpace (current: 140, expected: 150), in more than 4 msec. Stack Trace: java.util.concurrent.TimeoutException: Cluster failed to reached expected values of totalSpace (current: 750, expected: 750), or usedSpace (current: 140, expected: 150), in more than 4 msec. 
at org.apache.hadoop.hdfs.server.balancer.TestBalancer.waitForHeartBeat(TestBalancer.java:253) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.runBalancer(TestBalancer.java:578) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:551) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTest(TestBalancer.java:437) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.oneNodeTest(TestBalancer.java:645) at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancer0Internal(TestBalancer.java:759) at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity(TestBalancerWithSaslDataTransfer.java:34) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path
[ https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317234#comment-14317234 ] Hudson commented on HDFS-6133: -- FAILURE: Integrated in Hadoop-trunk-Commit #7080 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7080/]) HDFS-6133. Add a feature for replica pinning so that a pinned replica will not be moved by Balancer/Mover. Contributed by zhaoyunjiong (szetszwo: rev 085b1e293ff53f7a86aa21406cfd4bfa0f3bf33b) * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java * hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java * hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java * hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java * hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java Make Balancer support exclude specified path Key: HDFS-6133 URL: https://issues.apache.org/jira/browse/HDFS-6133 Project: Hadoop HDFS Issue Type: Improvement Components: balancer mover, datanode Reporter: zhaoyunjiong Assignee: zhaoyunjiong Fix For: 2.7.0 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, HDFS-6133-9.patch, HDFS-6133.patch Currently, running the Balancer will destroy the RegionServer's data locality. If getBlocks could exclude blocks belonging to files with a specific path prefix, like /hbase, then we could run the Balancer without destroying the RegionServer's data locality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HDFS-7662) Erasure Coder API for encoding and decoding of block group
[ https://issues.apache.org/jira/browse/HDFS-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-7662 started by Kai Zheng. --- Erasure Coder API for encoding and decoding of block group -- Key: HDFS-7662 URL: https://issues.apache.org/jira/browse/HDFS-7662 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Kai Zheng Assignee: Kai Zheng Fix For: HDFS-EC Attachments: HDFS-7662-v1.patch, HDFS-7662-v2.patch, HDFS-7662-v3.patch This is to define ErasureCoder API for encoding and decoding of BlockGroup. Given a BlockGroup, ErasureCoder extracts data chunks from the blocks and leverages RawErasureCoder defined in HDFS-7353 to perform concrete encoding or decoding. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
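To make the encode/decode roles concrete, here is a toy coder over a "block group" using a single XOR parity chunk. It is a stand-in for the real ErasureCoder/RawErasureCoder APIs referenced above (which use Reed-Solomon and tolerate multiple erasures); all names and values here are illustrative:

```java
// Toy erasure-coding sketch: one XOR parity chunk over the data chunks of a
// block group. XOR tolerates a single erasure; Reed-Solomon generalizes this.
public class XorCoderDemo {
    /** Encode: parity[i] is the XOR of byte i across all data chunks. */
    static byte[] encodeParity(byte[][] dataChunks) {
        byte[] parity = new byte[dataChunks[0].length];
        for (byte[] chunk : dataChunks) {
            for (int i = 0; i < parity.length; i++) {
                parity[i] ^= chunk[i];
            }
        }
        return parity;
    }

    /** Decode: recover one erased data chunk from the survivors plus parity. */
    static byte[] decodeErased(byte[][] survivingChunks, byte[] parity) {
        byte[] recovered = parity.clone();
        for (byte[] chunk : survivingChunks) {
            for (int i = 0; i < recovered.length; i++) {
                recovered[i] ^= chunk[i];
            }
        }
        return recovered;
    }

    public static void main(String[] args) {
        byte[][] data = { {1, 2}, {3, 4}, {5, 6} };
        byte[] parity = encodeParity(data);
        // Pretend data[1] was lost; rebuild it from the others plus parity.
        byte[] recovered = decodeErased(new byte[][] { data[0], data[2] }, parity);
        System.out.println(recovered[0] == 3 && recovered[1] == 4); // true
    }
}
```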
[jira] [Commented] (HDFS-3689) Add support for variable length block
[ https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317099#comment-14317099 ] Jing Zhao commented on HDFS-3689: - Yeah, I will post a patch for branch-2 soon. Add support for variable length block - Key: HDFS-3689 URL: https://issues.apache.org/jira/browse/HDFS-3689 Project: Hadoop HDFS Issue Type: New Feature Components: datanode, hdfs-client, namenode Affects Versions: 3.0.0 Reporter: Suresh Srinivas Assignee: Jing Zhao Fix For: 3.0.0 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch, HDFS-3689.007.patch, HDFS-3689.008.patch, HDFS-3689.008.patch, HDFS-3689.009.patch, HDFS-3689.009.patch, HDFS-3689.010.patch, editsStored Currently HDFS supports fixed length blocks. Supporting variable length block will allow new use cases and features to be built on top of HDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html
[ https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-7772: - Status: Patch Available (was: Open) Document hdfs balancer -exclude/-include option in HDFSCommands.html Key: HDFS-7772 URL: https://issues.apache.org/jira/browse/HDFS-7772 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Priority: Trivial Attachments: HDFS-7772.0.patch The hdfs balancer -exclude/-include options are displayed in the command-line help but not in the HTML documentation page. This JIRA is opened to add them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
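For reference, the options being documented take either a comma-separated list of datanodes or a hosts file; the hostnames and file paths below are made-up examples:

```shell
# Exclude the listed datanodes from being balanced
hdfs balancer -exclude dn1.example.com,dn2.example.com

# Or read the excluded/included datanodes from a file
hdfs balancer -exclude -f /tmp/excluded-datanodes.txt
hdfs balancer -include -f /tmp/included-datanodes.txt
```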
[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs
[ https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-7713: --- Status: Patch Available (was: Open) Improve the HDFS Web UI browser to allow creating dirs -- Key: HDFS-7713 URL: https://issues.apache.org/jira/browse/HDFS-7713 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Ravi Prakash Assignee: Ravi Prakash Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch This JIRA is for improving the NN UI (everything except file uploads) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk
[ https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317139#comment-14317139 ] Tsz Wo Nicholas Sze commented on HDFS-4114: --- The binding of the methods called in their implementations is quite minor in my view. It is easy to change methods to public when necessary. I might be wrong -- I have a feeling that keeping BackupNode around does slow down the ConsensusNode development since we have to, again, maintain the BackupNode code during the ConsensusNode development. Wouldn't you agree? Remove the BackupNode and CheckpointNode from trunk --- Key: HDFS-4114 URL: https://issues.apache.org/jira/browse/HDFS-4114 Project: Hadoop HDFS Issue Type: Bug Reporter: Eli Collins Assignee: Tsz Wo Nicholas Sze Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, HDFS-4114.patch, h4114_20150210.patch Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the BackupNode and CheckpointNode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HDFS-7780) Update use of Iterator to Iterable
[ https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang moved MAPREDUCE-6254 to HDFS-7780: - Key: HDFS-7780 (was: MAPREDUCE-6254) Project: Hadoop HDFS (was: Hadoop Map/Reduce) Update use of Iterator to Iterable -- Key: HDFS-7780 URL: https://issues.apache.org/jira/browse/HDFS-7780 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ray Chiang Assignee: Ray Chiang Priority: Minor Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
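A sketch of the kind of cleanup findbugs3 flags here (the class and method names are made up): expose an Iterable rather than a raw Iterator, so callers can use the enhanced for-loop directly.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// A method returning Iterator forces callers into explicit
// hasNext()/next() loops; implementing Iterable (or returning one)
// lets them write for-each instead.
public class BlocksHolder implements Iterable<String> {
    private final List<String> blocks = Arrays.asList("blk_1", "blk_2", "blk_3");

    @Override
    public Iterator<String> iterator() {
        return blocks.iterator();
    }

    static int count(BlocksHolder holder) {
        int n = 0;
        for (String ignored : holder) { // works because holder is Iterable
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(count(new BlocksHolder()));
    }
}
```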
[jira] [Updated] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-7778: Attachment: HDFS-7778-trunk.000.patch I'm reattaching the same trunk patch, so that it gets picked up as the one with the newest timestamp for Jenkins. Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It was also mistakenly removed from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HDFS-7778: Attachment: (was: HDFS-7778-trunk.000.patch) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It was also mistakenly removed from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317219#comment-14317219 ] Hadoop QA commented on HDFS-7778: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12698170/HDFS-7778-trunk.000.patch against trunk revision f80c988. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.hdfs.web.TestWebHDFSAcl Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9544//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9544//console This message is automatically generated. Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It was also mistakenly removed from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7686) Re-add rapid rescan of possibly corrupt block feature to the block scanner
[ https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317262#comment-14317262 ] Andrew Wang commented on HDFS-7686: --- Thanks for finding this Rushabh, thanks Colin for the patch. A few light review comments: * Unused Iterator import in the test, LoadingCache import in VolumeScanner * I like the cache since it's a nice way of preventing scanning the same blocks over and over again, but it'd be good to also use a LinkedHashMap instead of the LinkedList and also check existence in there before adding. That way we never have dupes in the suspect queue. It seems possible to have a working set bigger than the 1000 element cache size, like if an entire disk goes bad. Otherwise looks good! Re-add rapid rescan of possibly corrupt block feature to the block scanner -- Key: HDFS-7686 URL: https://issues.apache.org/jira/browse/HDFS-7686 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 3.0.0 Reporter: Rushabh S Shah Assignee: Colin Patrick McCabe Priority: Blocker Attachments: HDFS-7686.002.patch When doing a transferTo (aka sendfile operation) from the DataNode to a client, we may hit an I/O error from the disk. If we believe this is the case, we should be able to tell the block scanner to rescan that block soon. The feature was originally implemented in HDFS-7548 but was removed by HDFS-7430. We should re-add it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
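The review suggestion above (a linked hash structure as the suspect queue, so duplicates are never enqueued while FIFO order is kept) might look like this sketch; the class name and block IDs are illustrative, not the actual VolumeScanner code:

```java
import java.util.Iterator;
import java.util.LinkedHashSet;

// A FIFO queue of suspect block IDs that silently drops duplicates:
// LinkedHashSet preserves insertion order and makes membership checks O(1),
// so the same block is never queued twice.
public class SuspectBlockQueue {
    private final LinkedHashSet<Long> suspects = new LinkedHashSet<>();

    /** Returns true if the block was newly enqueued, false if already present. */
    public synchronized boolean markSuspect(long blockId) {
        return suspects.add(blockId); // no-op when the id is already queued
    }

    /** Removes and returns the oldest suspect, or null if the queue is empty. */
    public synchronized Long pollSuspect() {
        Iterator<Long> it = suspects.iterator();
        if (!it.hasNext()) {
            return null;
        }
        Long id = it.next();
        it.remove();
        return id;
    }

    public synchronized int size() {
        return suspects.size();
    }
}
```

The reviewer's concern about the 1000-entry cache still applies: this queue only dedups what is currently enqueued, so a separate bound (or the cache) is needed if a whole disk goes bad and the working set grows large.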
[jira] [Work started] (HDFS-7664) Reed-Solomon ErasureCoder
[ https://issues.apache.org/jira/browse/HDFS-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-7664 started by Kai Zheng. --- Reed-Solomon ErasureCoder - Key: HDFS-7664 URL: https://issues.apache.org/jira/browse/HDFS-7664 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Kai Zheng Assignee: Kai Zheng This is to implement Reed-Solomon ErasureCoder using the API defined in HDFS-7662. It supports plugging in a concrete RawErasureCoder via configuration, using either the JRSErasureCoder added in HDFS-7418 or the IsaRSErasureCoder added in HDFS-7338. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk
[ https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317093#comment-14317093 ] Konstantin Shvachko commented on HDFS-4114: --- Sorry, but I respectfully disagree with this order. The bindings I referred to in the comment cited above include both the method signatures and the methods called in their implementations. We did try to introduce ConsensusNode as a replacement for BackupNode via HDFS-6469 and HADOOP-10641 last summer. The initial approach HDFS-6940 was rejected, but no alternatives were worked out, which is the purpose of HDFS-7007 now. Not trying to diverge from the BackupNode topic, but these two issues are tightly related and important for our common customers. BTW, the findbugs warning in {{BackupImage}} is now fixed by HDFS-7753. Remove the BackupNode and CheckpointNode from trunk --- Key: HDFS-4114 URL: https://issues.apache.org/jira/browse/HDFS-4114 Project: Hadoop HDFS Issue Type: Bug Reporter: Eli Collins Assignee: Tsz Wo Nicholas Sze Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, HDFS-4114.patch, h4114_20150210.patch Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the BackupNode and CheckpointNode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7769) TestHDFSCLI create files in hdfs project root dir
[ https://issues.apache.org/jira/browse/HDFS-7769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317130#comment-14317130 ] Tsz Wo Nicholas Sze commented on HDFS-7769: --- This should have been reviewed by a committer. [~shv], as discussed in HADOOP-8248, the bylaws are not very clear about whether a simple patch can be reviewed by a non-committer and then a committer, who may also be the contributor, commits it. Do you agree that this is a simple patch in the first place? TestHDFSCLI create files in hdfs project root dir - Key: HDFS-7769 URL: https://issues.apache.org/jira/browse/HDFS-7769 Project: Hadoop HDFS Issue Type: Bug Components: test Reporter: Tsz Wo Nicholas Sze Assignee: Tsz Wo Nicholas Sze Priority: Trivial Fix For: 2.7.0 Attachments: h7769_20150210.patch, h7769_20150210b.patch After running TestHDFSCLI, two files (data and .data.crc) remain in the hdfs project root dir. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7686) Re-add rapid rescan of possibly corrupt block feature to the block scanner
[ https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-7686: --- Summary: Re-add rapid rescan of possibly corrupt block feature to the block scanner (was: Corrupt block reporting to namenode soon feature is overwritten by HDFS-7430) Re-add rapid rescan of possibly corrupt block feature to the block scanner -- Key: HDFS-7686 URL: https://issues.apache.org/jira/browse/HDFS-7686 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 3.0.0 Reporter: Rushabh S Shah Assignee: Colin Patrick McCabe Priority: Blocker Attachments: HDFS-7686.002.patch The feature implemented in HDFS-7548 is removed by HDFS-7430. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7686) Re-add rapid rescan of possibly corrupt block feature to the block scanner
[ https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-7686: --- Description: When doing a transferTo (aka sendfile operation) from the DataNode to a client, we may hit an I/O error from the disk. If we believe this is the case, we should be able to tell the block scanner to rescan that block soon. The feature was originally implemented in HDFS-7548 but was removed by HDFS-7430. We should re-add it. (was: The feature implemented in HDFS-7548 is removed by HDFS-7430. ) Re-add rapid rescan of possibly corrupt block feature to the block scanner -- Key: HDFS-7686 URL: https://issues.apache.org/jira/browse/HDFS-7686 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 3.0.0 Reporter: Rushabh S Shah Assignee: Colin Patrick McCabe Priority: Blocker Attachments: HDFS-7686.002.patch When doing a transferTo (aka sendfile operation) from the DataNode to a client, we may hit an I/O error from the disk. If we believe this is the case, we should be able to tell the block scanner to rescan that block soon. The feature was originally implemented in HDFS-7548 but was removed by HDFS-7430. We should re-add it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager
[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317210#comment-14317210 ] Tsz Wo Nicholas Sze commented on HDFS-7411: --- ... would you be OK changing the default so this uses the new algorithm in clusters where the node limit is not explicitly configured (default value for nodes is Integer.MAX_VALUE)? Agree. This is the same as [my proposal|https://issues.apache.org/jira/browse/HDFS-7411?focusedCommentId=14302030page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14302030], mentioned multiple times. The default value could be removed from hdfs-default.xml, with -1 passed as the default in the code; returning -1 would then mean the conf is not set. Refactor and improve decommissioning logic into DecommissionManager --- Key: HDFS-7411 URL: https://issues.apache.org/jira/browse/HDFS-7411 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.5.1 Reporter: Andrew Wang Assignee: Andrew Wang Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, hdfs-7411.009.patch, hdfs-7411.010.patch Would be nice to split out decommission logic from DatanodeManager to DecommissionManager. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
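The convention being proposed (drop the default from hdfs-default.xml and treat -1 in code as "not set") can be sketched with a plain Properties lookup; the config key is the real one from hdfs-default.xml, but the surrounding class is illustrative, not the actual DecommissionManager code:

```java
import java.util.Properties;

// Sketch of the "-1 means unset" convention: when the node limit is not
// explicitly configured, fall back to the new algorithm (no limit, i.e.
// effectively Integer.MAX_VALUE).
public class DecommissionConfigDemo {
    static final String KEY = "dfs.namenode.decommission.nodes.per.interval";
    static final int UNSET = -1;

    static int effectiveNodeLimit(Properties conf) {
        int limit = Integer.parseInt(conf.getProperty(KEY, String.valueOf(UNSET)));
        // UNSET -> no explicit limit: behave as if the limit were MAX_VALUE
        return limit == UNSET ? Integer.MAX_VALUE : limit;
    }

    /** Helper for demos/tests: null means "key absent from the conf". */
    static int limitFor(String value) {
        Properties conf = new Properties();
        if (value != null) {
            conf.setProperty(KEY, value);
        }
        return effectiveNodeLimit(conf);
    }

    public static void main(String[] args) {
        System.out.println(limitFor(null));  // unset: Integer.MAX_VALUE
        System.out.println(limitFor("100")); // explicitly configured: 100
    }
}
```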
[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html
[ https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317241#comment-14317241 ] Allen Wittenauer commented on HDFS-7772: bq. sperated separated It might also render weird in apt due to being a long line. I still have to check that. Document hdfs balancer -exclude/-include option in HDFSCommands.html Key: HDFS-7772 URL: https://issues.apache.org/jira/browse/HDFS-7772 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Priority: Trivial Attachments: HDFS-7772.0.patch The hdfs balancer -exclude/-include options are displayed in the command-line help but not in the HTML documentation page. This JIRA is opened to add them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HDFS-7083) TestDecommission#testIncludeByRegistrationName sometimes fails
[ https://issues.apache.org/jira/browse/HDFS-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved HDFS-7083. -- Resolution: Cannot Reproduce TestDecommission#testIncludeByRegistrationName sometimes fails -- Key: HDFS-7083 URL: https://issues.apache.org/jira/browse/HDFS-7083 Project: Hadoop HDFS Issue Type: Test Reporter: Ted Yu Priority: Minor From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1874/ : {code} REGRESSION: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName Error Message: test timed out after 36 milliseconds Stack Trace: java.lang.Exception: test timed out after 36 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7686) Corrupt block reporting to namenode soon feature is overwritten by HDFS-7430
[ https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-7686: --- Status: Patch Available (was: Open) Corrupt block reporting to namenode soon feature is overwritten by HDFS-7430 --- Key: HDFS-7686 URL: https://issues.apache.org/jira/browse/HDFS-7686 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 3.0.0 Reporter: Rushabh S Shah Assignee: Colin Patrick McCabe Priority: Blocker Attachments: HDFS-7686.002.patch The feature implemented in HDFS-7548 is removed by HDFS-7430. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7686) Corrupt block reporting to namenode soon feature is overwritten by HDFS-7430
[ https://issues.apache.org/jira/browse/HDFS-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HDFS-7686: --- Attachment: HDFS-7686.002.patch Corrupt block reporting to namenode soon feature is overwritten by HDFS-7430 --- Key: HDFS-7686 URL: https://issues.apache.org/jira/browse/HDFS-7686 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 3.0.0 Reporter: Rushabh S Shah Assignee: Colin Patrick McCabe Priority: Blocker Attachments: HDFS-7686.002.patch The feature implemented in HDFS-7548 is removed by HDFS-7430. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-3689) Add support for variable length block
[ https://issues.apache.org/jira/browse/HDFS-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HDFS-3689: Attachment: HDFS-3689.branch-2.patch Posting the patch for branch-2. So far I have not found any place where the functionality is broken by the variable length block. Maybe we should merge this to branch-2 this week? Note that a variable length block will not be generated unless the user explicitly passes in {{CreateFlag#NEW_BLOCK}} while creating the file. Also, if we find anything broken by this feature, we can fix it in separate jiras. Add support for variable length block - Key: HDFS-3689 URL: https://issues.apache.org/jira/browse/HDFS-3689 Project: Hadoop HDFS Issue Type: New Feature Components: datanode, hdfs-client, namenode Affects Versions: 3.0.0 Reporter: Suresh Srinivas Assignee: Jing Zhao Fix For: 3.0.0 Attachments: HDFS-3689.000.patch, HDFS-3689.001.patch, HDFS-3689.002.patch, HDFS-3689.003.patch, HDFS-3689.003.patch, HDFS-3689.004.patch, HDFS-3689.005.patch, HDFS-3689.006.patch, HDFS-3689.007.patch, HDFS-3689.008.patch, HDFS-3689.008.patch, HDFS-3689.009.patch, HDFS-3689.009.patch, HDFS-3689.010.patch, HDFS-3689.branch-2.patch, editsStored Currently HDFS supports fixed length blocks. Supporting variable length blocks will allow new use cases and features to be built on top of HDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path
[ https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-6133: -- Resolution: Fixed Fix Version/s: 2.7.0 Status: Resolved (was: Patch Available) Make Balancer support exclude specified path Key: HDFS-6133 URL: https://issues.apache.org/jira/browse/HDFS-6133 Project: Hadoop HDFS Issue Type: Improvement Components: balancer & mover, datanode Reporter: zhaoyunjiong Assignee: zhaoyunjiong Fix For: 2.7.0 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, HDFS-6133-9.patch, HDFS-6133.patch Currently, running Balancer will destroy Regionserver's data locality. If getBlocks could exclude blocks belonging to files with a specific path prefix, like /hbase, then we could run Balancer without destroying Regionserver's data locality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path
[ https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-6133: -- Component/s: (was: namenode) datanode Make Balancer support exclude specified path Key: HDFS-6133 URL: https://issues.apache.org/jira/browse/HDFS-6133 Project: Hadoop HDFS Issue Type: Improvement Components: balancer & mover, datanode Reporter: zhaoyunjiong Assignee: zhaoyunjiong Fix For: 2.7.0 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, HDFS-6133-9.patch, HDFS-6133.patch Currently, running Balancer will destroy Regionserver's data locality. If getBlocks could exclude blocks belonging to files with a specific path prefix, like /hbase, then we could run Balancer without destroying Regionserver's data locality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7778) Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2
[ https://issues.apache.org/jira/browse/HDFS-7778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317212#comment-14317212 ] Chris Nauroth commented on HDFS-7778: - Hi [~eddyxu]. I think I'm fully caught up now on why there are differences between trunk and branch-2. The branch-2 patch you posted is pulling in a piece of HDFS-7430. That one was committed to trunk only. I see a comment in there that the intention was to let it bake in trunk for a week or two before merging to branch-2. I'm going to ask now if it's time to proceed with that merge. I'd like to suggest that we put this jira on hold while we figure out the timing of the HDFS-7430 merge. Then, I expect you'll be able to put together test-only patches (maybe even the same patch for trunk and branch-2) to do the test suite renaming. After that, I'd proceed with rebasing HDFS-7604, again hopefully without requiring separate patches for each branch. I'd prefer not to commit the current branch-2 patch posted here, because pulling in just a small piece of HDFS-7430 would create a confusing situation later for the full merge of HDFS-7430. Thanks for your help on this! Rename FsVolumeListTest to TestFsVolumeList and commit it to branch-2 - Key: HDFS-7778 URL: https://issues.apache.org/jira/browse/HDFS-7778 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 2.6.0 Reporter: Lei (Eddy) Xu Assignee: Lei (Eddy) Xu Attachments: HDFS-7778-branch2.000.patch, HDFS-7778-trunk.000.patch HDFS-7496 mistakenly named the test {{FsVolumeListTest}}, which keeps it out of the Jenkins test runs. It also mistakenly removed the test from the branch-2 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-4114) Remove the BackupNode and CheckpointNode from trunk
[ https://issues.apache.org/jira/browse/HDFS-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317268#comment-14317268 ] Konstantin Shvachko commented on HDFS-4114: --- Changing methods to public proved to be *not easy* with HDFS-6940. Sounds like you missed it, check it out - this is exactly what it was doing. I do not see BackupNode dragging CNode development. It did not for my current implementation. Remove the BackupNode and CheckpointNode from trunk --- Key: HDFS-4114 URL: https://issues.apache.org/jira/browse/HDFS-4114 Project: Hadoop HDFS Issue Type: Bug Reporter: Eli Collins Assignee: Tsz Wo Nicholas Sze Attachments: HDFS-4114.000.patch, HDFS-4114.001.patch, HDFS-4114.patch, h4114_20150210.patch Per the thread on hdfs-dev@ (http://s.apache.org/tMT) let's remove the BackupNode and CheckpointNode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HDFS-7736) Fix typos in dfsadmin/fsck/snapshotDiff usage messages
[ https://issues.apache.org/jira/browse/HDFS-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317274#comment-14317274 ] Akira AJISAKA commented on HDFS-7736: - Thanks [~wheat9] for the commit! Fix typos in dfsadmin/fsck/snapshotDiff usage messages -- Key: HDFS-7736 URL: https://issues.apache.org/jira/browse/HDFS-7736 Project: Hadoop HDFS Issue Type: Bug Components: tools Affects Versions: 2.6.0 Reporter: Archana T Assignee: Brahma Reddy Battula Priority: Minor Attachments: HDFS-7736-002.patch, HDFS-7736-003.patch, HDFS-7736-004.patch, HDFS-7736-branch-2-001.patch, HDFS-7736-branch-2-002.patch, HDFS-7736-branch2-003.patch, HDFS-7736.patch Scenario -- Try the following hdfs commands -- 1. # ./hdfs dfsadmin -getStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-getStoragePolicy path] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-getStoragePolicy path] 2. # ./hdfs dfsadmin -setStoragePolicy Usage:*{color:red} java DFSAdmin {color}*[-setStoragePolicy path policyName] Expected- Usage:*{color:green} hdfs dfsadmin {color}*[-setStoragePolicy path policyName] 3. # ./hdfs fsck Usage:*{color:red} DFSck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks Expected- Usage:*{color:green} hdfs fsck path {color}*[-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks 4. # ./hdfs snapshotDiff Usage: *{color:red}SnapshotDiff{color}* snapshotDir from to: Expected- Usage: *{color:green}snapshotDiff{color}* snapshotDir from to: -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html
[ https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-7772: - Attachment: HDFS-7772.0.patch Attach a patch that documents the -exclude/-include options of the balancer in HDFSCommands.html#balancer. Document hdfs balancer -exclude/-include option in HDFSCommands.html Key: HDFS-7772 URL: https://issues.apache.org/jira/browse/HDFS-7772 Project: Hadoop HDFS Issue Type: Improvement Components: documentation Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Priority: Trivial Attachments: HDFS-7772.0.patch The hdfs balancer -exclude/-include options are displayed in the command-line help but not in the HTML documentation page. This JIRA is opened to add them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
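For context, the command-line help being documented here describes the options roughly as follows (a sketch of the synopsis as shown by the balancer's usage output, not the attached patch text):

```
hdfs balancer
    [-threshold <threshold>]
    [-policy <policy>]
    [-exclude [-f <hosts-file> | <comma-separated list of hosts>]]
    [-include [-f <hosts-file> | <comma-separated list of hosts>]]
```

-exclude skips the listed datanodes during balancing, while -include restricts balancing to only the listed datanodes; each accepts either an inline list or a hosts file via -f.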
[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs
[ https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravi Prakash updated HDFS-7713: --- Summary: Improve the HDFS Web UI browser to allow creating dirs (was: Improve the HDFS Web UI browser to allow chowning / chmoding, creating dirs, and setting replication) Improve the HDFS Web UI browser to allow creating dirs -- Key: HDFS-7713 URL: https://issues.apache.org/jira/browse/HDFS-7713 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Ravi Prakash Assignee: Ravi Prakash Attachments: HDFS-7713.01.patch This JIRA is for improving the NN UI (everything except file uploads) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable
[ https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HDFS-7780: - Attachment: HDFS-7780.001.patch Update use of Iterator to Iterable -- Key: HDFS-7780 URL: https://issues.apache.org/jira/browse/HDFS-7780 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ray Chiang Assignee: Ray Chiang Priority: Minor Attachments: HDFS-7780.001.patch Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HDFS-7780) Update use of Iterator to Iterable
[ https://issues.apache.org/jira/browse/HDFS-7780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HDFS-7780: - Status: Patch Available (was: Open) Submit for testing Update use of Iterator to Iterable -- Key: HDFS-7780 URL: https://issues.apache.org/jira/browse/HDFS-7780 Project: Hadoop HDFS Issue Type: Improvement Reporter: Ray Chiang Assignee: Ray Chiang Priority: Minor Attachments: HDFS-7780.001.patch Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
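The kind of change findbugs3 suggests here can be illustrated with a small standalone sketch (names are made up, not from the patch): returning {{Iterable}} instead of {{Iterator}} lets callers use the enhanced for loop.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IterableDemo {
    private static final List<String> VOLUMES = Arrays.asList("disk0", "disk1");

    // Before: exposing a raw Iterator forces a while/hasNext loop at the call site.
    static Iterator<String> volumeIterator() {
        return VOLUMES.iterator();
    }

    // After: exposing Iterable enables the for-each loop.
    static Iterable<String> volumes() {
        return VOLUMES;
    }

    public static void main(String[] args) {
        for (String v : volumes()) {  // not possible with volumeIterator()
            System.out.println(v);
        }
    }
}
```

The change is source-compatible for callers that only iterate, since they can call iterator() on the returned Iterable.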
[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path
[ https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-6133: -- Release Note: Add a feature for replica pinning so that a pinned replica will not be moved by Balancer/Mover. The replica pinning feature can be enabled/disabled by dfs.datanode.block-pinning.enabled, where the default is false. Make Balancer support exclude specified path Key: HDFS-6133 URL: https://issues.apache.org/jira/browse/HDFS-6133 Project: Hadoop HDFS Issue Type: Improvement Components: balancer & mover, datanode Reporter: zhaoyunjiong Assignee: zhaoyunjiong Fix For: 2.7.0 Attachments: HDFS-6133-1.patch, HDFS-6133-10.patch, HDFS-6133-11.patch, HDFS-6133-2.patch, HDFS-6133-3.patch, HDFS-6133-4.patch, HDFS-6133-5.patch, HDFS-6133-6.patch, HDFS-6133-7.patch, HDFS-6133-8.patch, HDFS-6133-9.patch, HDFS-6133.patch Currently, running Balancer will destroy Regionserver's data locality. If getBlocks could exclude blocks belonging to files with a specific path prefix, like /hbase, then we could run Balancer without destroying Regionserver's data locality. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
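Per the release note above, the replica-pinning switch would be turned on via a property in hdfs-site.xml; a minimal config sketch (property name taken from the note, default false):

```xml
<!-- hdfs-site.xml: enable replica pinning so that pinned replicas
     are not moved by Balancer/Mover (defaults to false) -->
<property>
  <name>dfs.datanode.block-pinning.enabled</name>
  <value>true</value>
</property>
```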