[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs
[ https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039624#comment-14039624 ]

Hudson commented on HDFS-6222:

SUCCESS: Integrated in Hadoop-trunk-Commit #5747 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5747/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh Shah and Daryn Sharp. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenIdentifier.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/SWebHdfsFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java

Remove background token renewer from webhdfs

Key: HDFS-6222
URL: https://issues.apache.org/jira/browse/HDFS-6222
Project: Hadoop HDFS
Issue Type: Bug
Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Fix For: 3.0.0, 2.5.0
Attachments: HDFS-6222.branch-2-v2.patch, HDFS-6222.branch-2-v3.patch, HDFS-6222.branch-2.patch, HDFS-6222.branch-2.patch, HDFS-6222.trunk-v2.patch, HDFS-6222.trunk-v2.patch, HDFS-6222.trunk-v3.patch, HDFS-6222.trunk.patch, HDFS-6222.trunk.patch

The background token renewer is a source of problems for long-running daemons. WebHDFS should lazily fetch a new token when it receives an InvalidToken exception.

--
This message was sent by Atlassian JIRA (v6.2#6252)
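The description proposes replacing the background renewer thread with lazy re-fetch: a cached token is only replaced when an operation actually fails with it. A minimal sketch of that retry pattern, with hypothetical names (this is not the actual WebHdfsFileSystem code):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;
import java.util.function.Supplier;

// Illustrative sketch of lazy token renewal: no background thread; a stale
// token is replaced only when a call actually fails because of it.
public class LazyTokenClient {
    /** Thrown by an operation when the server rejects the token. */
    public static class InvalidTokenException extends RuntimeException {}

    private final AtomicReference<String> token = new AtomicReference<>();
    private final Supplier<String> tokenFetcher; // e.g. an authenticated HTTP call

    public LazyTokenClient(Supplier<String> tokenFetcher) {
        this.tokenFetcher = tokenFetcher;
    }

    /** Run op with the cached token; on InvalidTokenException, fetch a fresh token and retry once. */
    public <T> T run(Function<String, T> op) {
        String t = token.get();
        if (t == null) {
            t = tokenFetcher.get();   // first use: fetch on demand, not at startup
            token.set(t);
        }
        try {
            return op.apply(t);
        } catch (InvalidTokenException e) {
            String fresh = tokenFetcher.get(); // lazy re-fetch instead of background renewal
            token.set(fresh);
            return op.apply(fresh);            // single retry with the new token
        }
    }
}
```

A long-running daemon built this way never holds a renewer thread open; an expired token simply costs one extra round trip on the next request.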
[jira] [Commented] (HDFS-6535) HDFS quota update is wrong when file is appended
[ https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039777#comment-14039777 ]

Hudson commented on HDFS-6535:

SUCCESS: Integrated in Hadoop-Yarn-trunk #590 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/590/])
HDFS-6535. HDFS quota update is wrong when file is appended. Contributed by George Wong. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604226)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java

HDFS quota update is wrong when file is appended

Key: HDFS-6535
URL: https://issues.apache.org/jira/browse/HDFS-6535
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.4.0
Reporter: George Wong
Assignee: George Wong
Fix For: 2.5.0
Attachments: HDFS-6535.patch, HDFS-6535_v1.patch, TestHDFSQuota.java

When a file in a directory with the quota feature is appended to, the cached disk consumption should be updated, but currently the update is wrong. Use the uploaded unit test to reproduce this bug.
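The report says only that the cached-consumption update on append is wrong, without spelling out the incorrect computation. As an illustration with hypothetical names (not HDFS's actual quota classes), here is the invariant a correct update has to maintain: an append is charged at the appended delta times replication, on top of what was already charged at create time.

```java
// Hypothetical sketch of append-time quota accounting. The invariant:
// cached consumption grows by exactly (new length - old length) * replication,
// never by the full new length again.
public class DiskUsageCache {
    private long cachedBytes; // cached disk consumption for a directory with a quota

    public long cachedBytes() { return cachedBytes; }

    /** Charge the initial file contents against the quota. */
    public void onCreate(long len, short replication) {
        cachedBytes += len * replication;
    }

    /** On append, charge only the delta of newly written bytes. */
    public void onAppend(long oldLen, long newLen, short replication) {
        cachedBytes += (newLen - oldLen) * replication;
    }
}
```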
[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs
[ https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039779#comment-14039779 ]

Hudson commented on HDFS-6222:

SUCCESS: Integrated in Hadoop-Yarn-trunk #590 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/590/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh Shah and Daryn Sharp. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
[jira] [Commented] (HDFS-6557) Move the reference of fsimage to FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039782#comment-14039782 ]

Hudson commented on HDFS-6557:

SUCCESS: Integrated in Hadoop-Yarn-trunk #590 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/590/])
HDFS-6557. Move the reference of fsimage to FSNamesystem. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604242)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java

Move the reference of fsimage to FSNamesystem

Key: HDFS-6557
URL: https://issues.apache.org/jira/browse/HDFS-6557
Project: Hadoop HDFS
Issue Type: Sub-task
Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
Fix For: 2.5.0
Attachments: HDFS-6557.000.patch, HDFS-6557.001.patch

Per the suggestion from HDFS-6480, {{FSDirectory}} becomes an in-memory data structure, so the reference to fsimage should be moved to {{FSNamesystem}}.
[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs
[ https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039839#comment-14039839 ]

Hudson commented on HDFS-6222:

FAILURE: Integrated in Hadoop-Hdfs-trunk #1781 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1781/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh Shah and Daryn Sharp. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
[jira] [Commented] (HDFS-6557) Move the reference of fsimage to FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039842#comment-14039842 ]

Hudson commented on HDFS-6557:

FAILURE: Integrated in Hadoop-Hdfs-trunk #1781 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1781/])
HDFS-6557. Move the reference of fsimage to FSNamesystem. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604242)
[jira] [Commented] (HDFS-6535) HDFS quota update is wrong when file is appended
[ https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039837#comment-14039837 ]

Hudson commented on HDFS-6535:

FAILURE: Integrated in Hadoop-Hdfs-trunk #1781 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1781/])
HDFS-6535. HDFS quota update is wrong when file is appended. Contributed by George Wong. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604226)
[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs
[ https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039863#comment-14039863 ]

Hudson commented on HDFS-6222:

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1808 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1808/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh Shah and Daryn Sharp. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
[jira] [Commented] (HDFS-6535) HDFS quota update is wrong when file is appended
[ https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039861#comment-14039861 ]

Hudson commented on HDFS-6535:

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1808 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1808/])
HDFS-6535. HDFS quota update is wrong when file is appended. Contributed by George Wong. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604226)
[jira] [Commented] (HDFS-6557) Move the reference of fsimage to FSNamesystem
[ https://issues.apache.org/jira/browse/HDFS-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039866#comment-14039866 ]

Hudson commented on HDFS-6557:

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1808 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1808/])
HDFS-6557. Move the reference of fsimage to FSNamesystem. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604242)
[jira] [Commented] (HDFS-4667) Capture renamed files/directories in snapshot diff report
[ https://issues.apache.org/jira/browse/HDFS-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039985#comment-14039985 ]

Hudson commented on HDFS-4667:

SUCCESS: Integrated in Hadoop-trunk-Commit #5750 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5750/])
HDFS-4667. Capture renamed files/directories in snapshot diff report. Contributed by Jing Zhao and Binglin Chang. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604488)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsSnapshots.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java

Capture renamed files/directories in snapshot diff report

Key: HDFS-4667
URL: https://issues.apache.org/jira/browse/HDFS-4667
Project: Hadoop HDFS
Issue Type: Sub-task
Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Fix For: 2.5.0
Attachments: HDFS-4667.002.patch, HDFS-4667.002.patch, HDFS-4667.003.patch, HDFS-4667.004.patch, HDFS-4667.demo.patch, HDFS-4667.v1.patch, getfullname-snapshot-support.patch

Currently the diff report only shows file/directory creation, deletion, and modification. Now that rename with snapshots is supported, renamed files/directories should also be captured in the diff report.
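The description above asks that a rename show up in the diff report as a rename, rather than as a delete plus a create. A toy sketch of why that requires a stable identity for each inode across snapshots (all class and method names here are illustrative, not the real HDFS snapshot code): if two snapshots are modeled as maps from inode id to path, rename detection is simply "same id, different path".

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative rename-aware snapshot diff. Comparing paths alone would
// report a rename as DELETE + CREATE; comparing by a stable inode id
// lets the diff classify it as RENAME.
public class SnapshotDiff {
    public enum Kind { CREATE, DELETE, RENAME }

    /** Diff two snapshots, each modeled as a map of inode id -> full path. */
    public static Map<String, Kind> diff(Map<Long, String> before, Map<Long, String> after) {
        Map<String, Kind> report = new TreeMap<>();
        for (Map.Entry<Long, String> e : before.entrySet()) {
            String newPath = after.get(e.getKey());
            if (newPath == null) {
                report.put(e.getValue(), Kind.DELETE);          // id vanished
            } else if (!newPath.equals(e.getValue())) {
                report.put(e.getValue() + " -> " + newPath, Kind.RENAME); // same id, new path
            }
        }
        for (Map.Entry<Long, String> e : after.entrySet()) {
            if (!before.containsKey(e.getKey())) {
                report.put(e.getValue(), Kind.CREATE);          // id appeared
            }
        }
        return report;
    }
}
```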
[jira] [Commented] (HDFS-6583) Remove clientNode in FileUnderConstructionFeature
[ https://issues.apache.org/jira/browse/HDFS-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040058#comment-14040058 ]

Hudson commented on HDFS-6583:

SUCCESS: Integrated in Hadoop-trunk-Commit #5751 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5751/])
HDFS-6583. Remove clientNode in FileUnderConstructionFeature. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604541)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java

Remove clientNode in FileUnderConstructionFeature

Key: HDFS-6583
URL: https://issues.apache.org/jira/browse/HDFS-6583
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
Fix For: 2.5.0
Attachments: HDFS-6583.000.patch, HDFS-6583.001.patch

{{FileUnderConstructionFeature}} contains two fields, {{clientMachine}} and {{clientNode}}. {{clientNode}} keeps a reference to a {{DatanodeDescriptor}}. The reference can be recomputed by consulting {{DatanodeManager}}. This jira proposes to remove {{clientNode}} from {{FileUnderConstructionFeature}} to simplify the code and reduce overhead.
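The proposal above relies on the observation that the descriptor can always be recomputed from {{clientMachine}} via {{DatanodeManager}}. A toy sketch of that "keep the key, look up the reference" shape, using hypothetical simplified classes rather than the real HDFS ones:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative "store the key, not the reference" refactoring: the feature
// object keeps only the cheap clientMachine string, and the live descriptor
// is resolved through a manager lookup when it is actually needed.
public class ClientNodeLookup {
    public static class DatanodeDescriptor {
        public final String host;
        public DatanodeDescriptor(String host) { this.host = host; }
    }

    public static class DatanodeManager {
        private final Map<String, DatanodeDescriptor> byHost = new HashMap<>();
        public void register(DatanodeDescriptor d) { byHost.put(d.host, d); }
        public DatanodeDescriptor getByHost(String host) { return byHost.get(host); }
    }

    public static class FileUnderConstruction {
        public final String clientMachine; // cheap key kept on the feature object
        public FileUnderConstruction(String clientMachine) { this.clientMachine = clientMachine; }

        /** Recompute the descriptor on demand instead of caching the reference. */
        public DatanodeDescriptor clientNode(DatanodeManager dm) {
            return dm.getByHost(clientMachine);
        }
    }
}
```

Dropping the cached reference means the feature object can never hold a stale descriptor, at the cost of one map lookup per use.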
[jira] [Commented] (HDFS-6583) Remove clientNode in FileUnderConstructionFeature
[ https://issues.apache.org/jira/browse/HDFS-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040107#comment-14040107 ]

Hudson commented on HDFS-6583:

FAILURE: Integrated in Hadoop-Yarn-trunk #591 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/591/])
HDFS-6583. Remove clientNode in FileUnderConstructionFeature. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604541)
[jira] [Commented] (HDFS-4667) Capture renamed files/directories in snapshot diff report
[ https://issues.apache.org/jira/browse/HDFS-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040108#comment-14040108 ]

Hudson commented on HDFS-4667:

FAILURE: Integrated in Hadoop-Yarn-trunk #591 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/591/])
HDFS-4667. Capture renamed files/directories in snapshot diff report. Contributed by Jing Zhao and Binglin Chang. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604488)
[jira] [Commented] (HDFS-4667) Capture renamed files/directories in snapshot diff report
[ https://issues.apache.org/jira/browse/HDFS-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040132#comment-14040132 ] Hudson commented on HDFS-4667: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1809 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1809/]) HDFS-4667. Capture renamed files/directories in snapshot diff report. Contributed by Jing Zhao and Binglin Chang. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604488) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsSnapshots.xml * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6583) Remove clientNode in FileUnderConstructionFeature
[ https://issues.apache.org/jira/browse/HDFS-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040131#comment-14040131 ] Hudson commented on HDFS-6583: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1809 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1809/]) HDFS-6583. Remove clientNode in FileUnderConstructionFeature. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604541) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java Remove clientNode in 
FileUnderConstructionFeature - Key: HDFS-6583 URL: https://issues.apache.org/jira/browse/HDFS-6583 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Haohui Mai Assignee: Haohui Mai Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6583.000.patch, HDFS-6583.001.patch {{FileUnderConstructionFeature}} contains two fields, {{clientMachine}} and {{clientNode}}. {{clientNode}} keeps a reference to a {{DatanodeDescriptor}}. The reference can be recomputed by consulting {{DatanodeManager}}. This jira proposes to remove {{clientNode}} from {{FileUnderConstructionFeature}} to simplify the code and reduce overhead. -- This message was sent by Atlassian JIRA (v6.2#6252)
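The change can be sketched as follows: keep only the client machine string in the feature and resolve the descriptor on demand. All types here are simplified stand-ins for illustration, not the actual Hadoop classes:

```java
import java.util.HashMap;
import java.util.Map;

public class UcFeatureSketch {
    static class DatanodeDescriptor {
        final String host;
        DatanodeDescriptor(String host) { this.host = host; }
    }
    static class DatanodeManager {
        final Map<String, DatanodeDescriptor> byHost = new HashMap<>();
        DatanodeDescriptor getDatanodeByHost(String host) { return byHost.get(host); }
    }
    // After the change: only clientMachine remains; the cached
    // DatanodeDescriptor field is gone.
    static class FileUnderConstructionFeature {
        final String clientMachine;
        FileUnderConstructionFeature(String clientMachine) { this.clientMachine = clientMachine; }
        // Recompute the reference by consulting the DatanodeManager when needed.
        DatanodeDescriptor getClientNode(DatanodeManager dm) {
            return dm.getDatanodeByHost(clientMachine);
        }
    }
    public static void main(String[] args) {
        DatanodeManager dm = new DatanodeManager();
        dm.byHost.put("host1", new DatanodeDescriptor("host1"));
        FileUnderConstructionFeature f = new FileUnderConstructionFeature("host1");
        System.out.println(f.getClientNode(dm).host);
    }
}
```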
[jira] [Commented] (HDFS-6583) Remove clientNode in FileUnderConstructionFeature
[ https://issues.apache.org/jira/browse/HDFS-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040133#comment-14040133 ] Hudson commented on HDFS-6583: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1782 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1782/]) HDFS-6583. Remove clientNode in FileUnderConstructionFeature. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604541) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileUnderConstructionFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-4667) Capture renamed files/directories in snapshot diff report
[ https://issues.apache.org/jira/browse/HDFS-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040134#comment-14040134 ] Hudson commented on HDFS-4667: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1782 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1782/]) HDFS-4667. Capture renamed files/directories in snapshot diff report. Contributed by Jing Zhao and Binglin Chang. (jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604488) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsSnapshots.xml * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6507) Improve DFSAdmin to support HA cluster better
[ https://issues.apache.org/jira/browse/HDFS-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040438#comment-14040438 ] Hudson commented on HDFS-6507: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5752 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5752/]) HDFS-6507. Improve DFSAdmin to support HA cluster better. (Contributed by Zesheng Wu) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604692) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml Improve DFSAdmin to support HA cluster better - Key: HDFS-6507 URL: https://issues.apache.org/jira/browse/HDFS-6507 Project: Hadoop HDFS Issue Type: Improvement Components: tools Affects Versions: 2.4.0 Reporter: Zesheng Wu Assignee: Zesheng Wu Fix For: 2.5.0 Attachments: HDFS-6507.1.patch, HDFS-6507.2.patch, HDFS-6507.3.patch, HDFS-6507.4-inprogress.patch, HDFS-6507.4.patch, HDFS-6507.5.patch, HDFS-6507.6.patch, HDFS-6507.7.patch, HDFS-6507.7.patch, HDFS-6507.8.patch Currently, the commands supported in DFSAdmin can be classified into three categories according to the protocol used: 1. ClientProtocol Commands in this category are generally implemented by calling the corresponding function of the DFSClient class, and ultimately invoke the corresponding remote implementation on the NN side.
At the NN side, all these operations are classified into five categories: UNCHECKED, READ, WRITE, CHECKPOINT, JOURNAL. The Active NN allows all operations, while the Standby NN only allows UNCHECKED operations. In the current implementation, DFSClient connects to one NN first; if that NN is not Active and the operation is not allowed, it fails over to the second NN. Here is the problem: some of the commands (setSafeMode, saveNameSpace, restoreFailedStorage, refreshNodes, setBalancerBandwidth, metaSave) in DFSAdmin are classified as UNCHECKED operations, so when executed from the DFSAdmin command line they are sent to a fixed NN, regardless of whether it is Active or Standby. This may result in two problems: a. If the first NN tried is the Standby, the operation takes effect only on the Standby NN, which is not the expected result. b. If the operation needs to take effect on both NNs, it takes effect on only one, which may cause problems after a future NN failover. Here I propose the following improvements: a. If the command can be classified as one of the READ/WRITE/CHECKPOINT/JOURNAL operations, we should classify it clearly. b. If the command cannot be classified as one of the above four operations, or if it needs to take effect on both NNs, we should send the request to both the Active and Standby NNs. 2. Refresh protocols: RefreshAuthorizationPolicyProtocol, RefreshUserMappingsProtocol, RefreshCallQueueProtocol. Commands in this category, including refreshServiceAcl, refreshUserToGroupMapping, refreshSuperUserGroupsConfiguration and refreshCallQueue, are implemented by creating a corresponding RPC proxy and sending the request to the remote NN. In the current implementation, these requests are sent to a fixed NN, regardless of whether it is Active or Standby. Here I propose that we send these requests to both NNs. 3. ClientDatanodeProtocol Commands in this category are already handled correctly and need no improvement. -- This message was sent by Atlassian JIRA (v6.2#6252)
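The both-NN dispatch proposed in point (b) can be sketched as follows. RefreshProtocol and the proxy wiring here are simplified stand-ins for illustration, not the actual Hadoop proxy types:

```java
import java.util.Arrays;
import java.util.List;

public class DfsAdminSketch {
    // Stand-in for an RPC proxy to one NameNode's refresh protocol.
    interface RefreshProtocol {
        String refreshServiceAcl();
    }

    // Send the refresh to every NN and report per-NN success/failure,
    // instead of sending it to a single arbitrary NN.
    static int refreshOnAll(List<RefreshProtocol> proxies) {
        int failures = 0;
        for (RefreshProtocol p : proxies) {
            try {
                System.out.println("refreshServiceAcl: " + p.refreshServiceAcl());
            } catch (RuntimeException e) {
                failures++;
                System.out.println("refreshServiceAcl failed: " + e.getMessage());
            }
        }
        // Non-zero exit if any NN failed, mirroring typical CLI behavior.
        return failures == 0 ? 0 : -1;
    }

    public static void main(String[] args) {
        RefreshProtocol active = () -> "ok (nn1)";
        RefreshProtocol standby = () -> "ok (nn2)";
        System.out.println("exit=" + refreshOnAll(Arrays.asList(active, standby)));
    }
}
```

Note the loop deliberately keeps going after a failure, so one unreachable NN does not prevent the refresh from reaching the other.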
[jira] [Commented] (HDFS-6580) FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper
[ https://issues.apache.org/jira/browse/HDFS-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040483#comment-14040483 ] Hudson commented on HDFS-6580: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5753 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5753/]) HDFS-6580. FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper. Contributed by Zhilei Xu. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604704) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper - Key: HDFS-6580 URL: https://issues.apache.org/jira/browse/HDFS-6580 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Reporter: Zhilei Xu Assignee: Zhilei Xu Labels: patch Fix For: 2.5.0 Attachments: patch_c89bff2bb7a06bb2b0c66a85acbd5113db6b0526.txt In FSNamesystem.java, getAuditFileInfo() is the canonical way to get file info for auditing purposes. getAuditFileInfo() returns null when auditing is disabled, and calls dir.getFileInfo() when auditing is enabled. One internal API, mkdirsInt(), mistakenly uses the raw dir.getFileInfo() to get file info for auditing; it should be changed to getAuditFileInfo(). Note that another internal API, startFileInt(), uses dir.getFileInfo() correctly, because the file stat it obtains is returned to the caller. -- This message was sent by Atlassian JIRA (v6.2#6252)
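The wrapper pattern described above can be sketched as follows: getAuditFileInfo() short-circuits to null when auditing is disabled, so callers never pay for the raw lookup. The names mirror FSNamesystem, but this is a simplified stand-in, not the Hadoop source:

```java
public class AuditSketch {
    static boolean auditLogEnabled = false;
    static int rawLookups = 0;

    // Stand-in for dir.getFileInfo(): the potentially expensive raw lookup.
    static String getFileInfo(String path) {
        rawLookups++;
        return "stat:" + path;
    }

    // The canonical wrapper: null when auditing is off, raw lookup otherwise.
    static String getAuditFileInfo(String path) {
        return auditLogEnabled ? getFileInfo(path) : null;
    }

    public static void main(String[] args) {
        System.out.println(getAuditFileInfo("/tmp/a")); // auditing off: no lookup
        auditLogEnabled = true;
        System.out.println(getAuditFileInfo("/tmp/a")); // auditing on
        System.out.println("raw lookups: " + rawLookups);
    }
}
```

This also shows why startFileInt() is different: there the file stat is returned to the caller, so the raw lookup is needed regardless of whether auditing is on.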
[jira] [Commented] (HDFS-6507) Improve DFSAdmin to support HA cluster better
[ https://issues.apache.org/jira/browse/HDFS-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040644#comment-14040644 ] Hudson commented on HDFS-6507: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #592 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/592/]) HDFS-6507. Improve DFSAdmin to support HA cluster better. (Contributed by Zesheng Wu) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604692) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6580) FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper
[ https://issues.apache.org/jira/browse/HDFS-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040643#comment-14040643 ] Hudson commented on HDFS-6580: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #592 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/592/]) HDFS-6580. FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper. Contributed by Zhilei Xu. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604704) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6580) FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper
[ https://issues.apache.org/jira/browse/HDFS-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040751#comment-14040751 ] Hudson commented on HDFS-6580: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1783 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1783/]) HDFS-6580. FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper. Contributed by Zhilei Xu. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604704) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6507) Improve DFSAdmin to support HA cluster better
[ https://issues.apache.org/jira/browse/HDFS-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040752#comment-14040752 ] Hudson commented on HDFS-6507: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1783 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1783/]) HDFS-6507. Improve DFSAdmin to support HA cluster better. (Contributed by Zesheng Wu) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604692) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6580) FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper
[ https://issues.apache.org/jira/browse/HDFS-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14040835#comment-14040835 ] Hudson commented on HDFS-6580: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1810 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1810/]) HDFS-6580. FSNamesystem.mkdirsInt should call the getAuditFileInfo() wrapper. Contributed by Zhilei Xu. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1604704) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6507) Improve DFSAdmin to support HA cluster better
[ https://issues.apache.org/jira/browse/HDFS-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040836#comment-14040836 ] Hudson commented on HDFS-6507: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1810 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1810/]) HDFS-6507. Improve DFSAdmin to support HA cluster better. (Contributed by Zesheng Wu) (vinayakumarb: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604692) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml Improve DFSAdmin to support HA cluster better - Key: HDFS-6507 URL: https://issues.apache.org/jira/browse/HDFS-6507 Project: Hadoop HDFS Issue Type: Improvement Components: tools Affects Versions: 2.4.0 Reporter: Zesheng Wu Assignee: Zesheng Wu Fix For: 2.5.0 Attachments: HDFS-6507.1.patch, HDFS-6507.2.patch, HDFS-6507.3.patch, HDFS-6507.4-inprogress.patch, HDFS-6507.4.patch, HDFS-6507.5.patch, HDFS-6507.6.patch, HDFS-6507.7.patch, HDFS-6507.7.patch, HDFS-6507.8.patch Currently, the commands supported in DFSAdmin can be classified into three categories according to the protocol used: 1. ClientProtocol Commands in this category are generally implemented by calling the corresponding function of the DFSClient class, which finally calls the corresponding remote implementation on the NN side.
At the NN side, all these operations are classified into five categories: UNCHECKED, READ, WRITE, CHECKPOINT, JOURNAL. The Active NN allows all operations, while the Standby NN only allows UNCHECKED operations. In the current implementation of DFSClient, it connects to one NN first; if that NN is not Active and the operation is not allowed, it fails over to the second NN. So here comes the problem: some of the commands (setSafeMode, saveNameSpace, restoreFailedStorage, refreshNodes, setBalancerBandwidth, metaSave) in DFSAdmin are classified as UNCHECKED operations, and when these commands are executed from the DFSAdmin command line, they will be sent to one definite NN, regardless of whether it is Active or Standby. This may result in two problems: a. If the first tried NN is the Standby, the operation takes effect only on the Standby NN, which is not the expected result. b. If the operation needs to take effect on both NNs, it takes effect on only one NN; when an NN failover happens later, problems may arise. Here I propose the following improvements: a. If the command can be classified as one of the READ/WRITE/CHECKPOINT/JOURNAL operations, we should classify it clearly. b. If the command cannot be classified as one of the above four operations, or if the command needs to take effect on both NNs, we should send the request to both the Active and Standby NNs. 2. Refresh protocols: RefreshAuthorizationPolicyProtocol, RefreshUserMappingsProtocol, RefreshCallQueueProtocol Commands in this category, including refreshServiceAcl, refreshUserToGroupMapping, refreshSuperUserGroupsConfiguration and refreshCallQueue, are implemented by creating a corresponding RPC proxy and sending the request to a remote NN. In the current implementation, these requests are sent to one definite NN, regardless of whether it is Active or Standby. Here I propose that we send these requests to both NNs. 3.
ClientDatanodeProtocol Commands in this category are handled correctly; no improvement is needed. -- This message was sent by Atlassian JIRA (v6.2#6252)
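The proposal in category 2 (send refresh-style requests to both NNs and report per-node results, instead of stopping at whichever NN is tried first) can be sketched roughly as below. This is an illustrative sketch only, not the actual HDFS-6507 patch; the `NameNodeProxy` interface and `refreshServiceAcl` signature here are simplified stand-ins for the real RPC proxies.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: dispatch a refresh-style admin command to every
// NameNode and collect a per-node result, so the operator can see which
// side (Active or Standby) actually applied it.
public class BothNNDispatch {
    interface NameNodeProxy {
        String refreshServiceAcl(); // simplified stand-in for the real RPC
    }

    // Try the command on every NN; a failure on one NN does not prevent
    // the command from being attempted on the others.
    static List<String> dispatchToAll(List<NameNodeProxy> proxies) {
        List<String> results = new ArrayList<>();
        for (NameNodeProxy p : proxies) {
            try {
                results.add("OK: " + p.refreshServiceAcl());
            } catch (RuntimeException e) {
                results.add("FAILED: " + e.getMessage());
            }
        }
        return results;
    }
}
```

The key design point is that the loop never short-circuits: both NNs see the request, which is exactly what commands like refreshServiceAcl need in an HA pair.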
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041092#comment-14041092 ] Hudson commented on HDFS-6587: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5754 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5754/]) HDFS-6587. Bug in TestBPOfferService can cause test failure. (Contributed by Zhilei Xu) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604899) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt We need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used; the hard-coded mockNN1 is used instead. This bug introduces a nondeterministic test failure when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
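The bug class described above (a helper that accepts a mock as a parameter but consults a hard-coded field instead) can be illustrated with a minimal, self-contained sketch. The names mirror the JIRA description, but the counters are stand-ins, not the real Mockito-based test code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal illustration of the TestBPOfferService bug pattern: the helper
// ignores its 'mockNN' parameter and checks the hard-coded mockNN1.
public class MockNNBug {
    // Two mocked NameNodes, each counting how many block-received
    // interactions it has observed.
    static final AtomicInteger mockNN1 = new AtomicInteger(0);
    static final AtomicInteger mockNN2 = new AtomicInteger(0);

    // Buggy version: 'mockNN' is never consulted, so "waiting on mockNN2"
    // silently observes mockNN1 and can report success too early.
    static boolean waitedBuggy(AtomicInteger mockNN) {
        return mockNN1.get() > 0;
    }

    // Fixed version: consult the mock that was actually passed in.
    static boolean waitedFixed(AtomicInteger mockNN) {
        return mockNN.get() > 0;
    }
}
```

With only mockNN1 having seen a block, the buggy helper wrongly succeeds for mockNN2 while the fixed one correctly keeps waiting; that race is exactly why the failure was nondeterministic.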
[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage for easier debugging
[ https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041354#comment-14041354 ] Hudson commented on HDFS-6578: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5755 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5755/]) HDFS-6578. add toString method to DatanodeStorage for easier debugging. (Contributed by Yongjun Zhang) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604942) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeStorage.java add toString method to DatanodeStorage for easier debugging --- Key: HDFS-6578 URL: https://issues.apache.org/jira/browse/HDFS-6578 Project: Hadoop HDFS Issue Type: Improvement Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6578.001.patch, HDFS-6578.002.patch It would be nice to add a toString() method to the DatanodeStorage class, so we can print out its key info more easily while debugging. Another thing: at the end of BlockManager#processReport, there is the following message, {code} blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID() + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks() + ", processing time: " + (endTime - startTime) + " msecs"); return !node.hasStaleStorages(); {code} We could add node.hasStaleStorages() to the log, and possibly replace storage.getStorageID() with the suggested storage.toString(). Any comments? thanks. -- This message was sent by Atlassian JIRA (v6.2#6252)
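A toString() along the lines proposed above might look like the following sketch. This is a simplified illustration, not the committed patch: the field set and the exact output format (storage ID plus state) are assumptions based only on what the discussion mentions.

```java
// Hedged sketch of a DatanodeStorage-style toString(): a compact one-line
// form suitable for embedding in block-report log messages. The State enum
// values here are illustrative.
public class DatanodeStorageSketch {
    enum State { NORMAL, READ_ONLY_SHARED }

    private final String storageID;
    private final State state;

    DatanodeStorageSketch(String storageID, State state) {
        this.storageID = storageID;
        this.state = state;
    }

    @Override
    public String toString() {
        // One line, no spaces: easy to grep out of a busy NameNode log.
        return "DatanodeStorage[" + storageID + "," + state + "]";
    }
}
```

With such a method, the processReport log line can print `storage` directly instead of `storage.getStorageID()`, and the state comes along for free.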
[jira] [Commented] (HDFS-6562) Refactor rename() in FSDirectory
[ https://issues.apache.org/jira/browse/HDFS-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041851#comment-14041851 ] Hudson commented on HDFS-6562: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5757 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5757/]) HDFS-6562. Refactor rename() in FSDirectory. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605016) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java Refactor rename() in FSDirectory Key: HDFS-6562 URL: https://issues.apache.org/jira/browse/HDFS-6562 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Reporter: Haohui Mai Assignee: Haohui Mai Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6562.000.patch, HDFS-6562.001.patch, HDFS-6562.002.patch, HDFS-6562.003.patch, HDFS-6562.004.patch, HDFS-6562.005.patch, HDFS-6562.006.patch, HDFS-6562.007.patch Currently there are two variants of {{rename()}} sitting in {{FSDirectory}}. Both implementations share quite a bit of common code. This jira proposes to clean up these two variants and extract the common code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage for easier debugging
[ https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041992#comment-14041992 ] Hudson commented on HDFS-6578: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #593 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/593/]) HDFS-6578. add toString method to DatanodeStorage for easier debugging. (Contributed by Yongjun Zhang) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604942) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeStorage.java add toString method to DatanodeStorage for easier debugging --- Key: HDFS-6578 URL: https://issues.apache.org/jira/browse/HDFS-6578 Project: Hadoop HDFS Issue Type: Improvement Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6578.001.patch, HDFS-6578.002.patch It would be nice to add a toString() method to the DatanodeStorage class, so we can print out its key info more easily while debugging. Another thing: at the end of BlockManager#processReport, there is the following message, {code} blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID() + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks() + ", processing time: " + (endTime - startTime) + " msecs"); return !node.hasStaleStorages(); {code} We could add node.hasStaleStorages() to the log, and possibly replace storage.getStorageID() with the suggested storage.toString(). Any comments? thanks. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6562) Refactor rename() in FSDirectory
[ https://issues.apache.org/jira/browse/HDFS-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041990#comment-14041990 ] Hudson commented on HDFS-6562: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #593 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/593/]) HDFS-6562. Refactor rename() in FSDirectory. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605016) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java Refactor rename() in FSDirectory Key: HDFS-6562 URL: https://issues.apache.org/jira/browse/HDFS-6562 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Reporter: Haohui Mai Assignee: Haohui Mai Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6562.000.patch, HDFS-6562.001.patch, HDFS-6562.002.patch, HDFS-6562.003.patch, HDFS-6562.004.patch, HDFS-6562.005.patch, HDFS-6562.006.patch, HDFS-6562.007.patch Currently there are two variants of {{rename()}} sitting in {{FSDirectory}}. Both implementations share quite a bit of common code. This jira proposes to clean up these two variants and extract the common code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041988#comment-14041988 ] Hudson commented on HDFS-6587: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #593 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/593/]) HDFS-6587. Bug in TestBPOfferService can cause test failure. (Contributed by Zhilei Xu) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604899) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt We need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used; the hard-coded mockNN1 is used instead. This bug introduces a nondeterministic test failure when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6486) Add user doc for XAttrs via WebHDFS.
[ https://issues.apache.org/jira/browse/HDFS-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042042#comment-14042042 ] Hudson commented on HDFS-6486: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5759 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5759/]) HDFS-6486. Add user doc for XAttrs via WebHDFS. Contributed by Yi Liu. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605062) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm Add user doc for XAttrs via WebHDFS. Key: HDFS-6486 URL: https://issues.apache.org/jira/browse/HDFS-6486 Project: Hadoop HDFS Issue Type: Task Components: webhdfs Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6486.patch Add the user doc for XAttrs via WebHDFS. Set xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=SETXATTR&xattr.name=XATTRNAME&xattr.value=XATTRVALUE&flag=FLAG' {code} Remove xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=REMOVEXATTR&xattr.name=XATTRNAME' {code} Get an xattr: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME&encoding=ENCODING' {code} Get multiple xattrs (XATTRNAME1, XATTRNAME2, XATTRNAME3): {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME1&xattr.name=XATTRNAME2&xattr.name=XATTRNAME3&encoding=ENCODING' {code} Get all xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&encoding=ENCODING' {code} List xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=LISTXATTRS' {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042142#comment-14042142 ] Hudson commented on HDFS-6587: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1784 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1784/]) HDFS-6587. Bug in TestBPOfferService can cause test failure. (Contributed by Zhilei Xu) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604899) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt We need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used; the hard-coded mockNN1 is used instead. This bug introduces a nondeterministic test failure when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6562) Refactor rename() in FSDirectory
[ https://issues.apache.org/jira/browse/HDFS-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042144#comment-14042144 ] Hudson commented on HDFS-6562: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1784 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1784/]) HDFS-6562. Refactor rename() in FSDirectory. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605016) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java Refactor rename() in FSDirectory Key: HDFS-6562 URL: https://issues.apache.org/jira/browse/HDFS-6562 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Reporter: Haohui Mai Assignee: Haohui Mai Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6562.000.patch, HDFS-6562.001.patch, HDFS-6562.002.patch, HDFS-6562.003.patch, HDFS-6562.004.patch, HDFS-6562.005.patch, HDFS-6562.006.patch, HDFS-6562.007.patch Currently there are two variants of {{rename()}} sitting in {{FSDirectory}}. Both implementations share quite a bit of common code. This jira proposes to clean up these two variants and extract the common code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage for easier debugging
[ https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042146#comment-14042146 ] Hudson commented on HDFS-6578: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1784 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1784/]) HDFS-6578. add toString method to DatanodeStorage for easier debugging. (Contributed by Yongjun Zhang) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604942) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeStorage.java add toString method to DatanodeStorage for easier debugging --- Key: HDFS-6578 URL: https://issues.apache.org/jira/browse/HDFS-6578 Project: Hadoop HDFS Issue Type: Improvement Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6578.001.patch, HDFS-6578.002.patch It would be nice to add a toString() method to the DatanodeStorage class, so we can print out its key info more easily while debugging. Another thing: at the end of BlockManager#processReport, there is the following message, {code} blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID() + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks() + ", processing time: " + (endTime - startTime) + " msecs"); return !node.hasStaleStorages(); {code} We could add node.hasStaleStorages() to the log, and possibly replace storage.getStorageID() with the suggested storage.toString(). Any comments? thanks. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage for easier debugging
[ https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042230#comment-14042230 ] Hudson commented on HDFS-6578: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1811 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1811/]) HDFS-6578. add toString method to DatanodeStorage for easier debugging. (Contributed by Yongjun Zhang) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604942) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeStorage.java add toString method to DatanodeStorage for easier debugging --- Key: HDFS-6578 URL: https://issues.apache.org/jira/browse/HDFS-6578 Project: Hadoop HDFS Issue Type: Improvement Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6578.001.patch, HDFS-6578.002.patch It would be nice to add a toString() method to the DatanodeStorage class, so we can print out its key info more easily while debugging. Another thing: at the end of BlockManager#processReport, there is the following message, {code} blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID() + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks() + ", processing time: " + (endTime - startTime) + " msecs"); return !node.hasStaleStorages(); {code} We could add node.hasStaleStorages() to the log, and possibly replace storage.getStorageID() with the suggested storage.toString(). Any comments? thanks. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6562) Refactor rename() in FSDirectory
[ https://issues.apache.org/jira/browse/HDFS-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042228#comment-14042228 ] Hudson commented on HDFS-6562: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1811 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1811/]) HDFS-6562. Refactor rename() in FSDirectory. Contributed by Haohui Mai. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605016) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java Refactor rename() in FSDirectory Key: HDFS-6562 URL: https://issues.apache.org/jira/browse/HDFS-6562 Project: Hadoop HDFS Issue Type: Sub-task Components: namenode Reporter: Haohui Mai Assignee: Haohui Mai Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6562.000.patch, HDFS-6562.001.patch, HDFS-6562.002.patch, HDFS-6562.003.patch, HDFS-6562.004.patch, HDFS-6562.005.patch, HDFS-6562.006.patch, HDFS-6562.007.patch Currently there are two variants of {{rename()}} sitting in {{FSDirectory}}. Both implementations share quite a bit of common code. This jira proposes to clean up these two variants and extract the common code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042226#comment-14042226 ] Hudson commented on HDFS-6587: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1811 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1811/]) HDFS-6587. Bug in TestBPOfferService can cause test failure. (Contributed by Zhilei Xu) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604899) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt We need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used; the hard-coded mockNN1 is used instead. This bug introduces a nondeterministic test failure when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6486) Add user doc for XAttrs via WebHDFS.
[ https://issues.apache.org/jira/browse/HDFS-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042231#comment-14042231 ] Hudson commented on HDFS-6486: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1811 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1811/]) HDFS-6486. Add user doc for XAttrs via WebHDFS. Contributed by Yi Liu. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605062) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm Add user doc for XAttrs via WebHDFS. Key: HDFS-6486 URL: https://issues.apache.org/jira/browse/HDFS-6486 Project: Hadoop HDFS Issue Type: Task Components: webhdfs Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6486.patch Add the user doc for XAttrs via WebHDFS. Set xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=SETXATTR&xattr.name=XATTRNAME&xattr.value=XATTRVALUE&flag=FLAG' {code} Remove xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=REMOVEXATTR&xattr.name=XATTRNAME' {code} Get an xattr: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME&encoding=ENCODING' {code} Get multiple xattrs (XATTRNAME1, XATTRNAME2, XATTRNAME3): {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME1&xattr.name=XATTRNAME2&xattr.name=XATTRNAME3&encoding=ENCODING' {code} Get all xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&encoding=ENCODING' {code} List xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=LISTXATTRS' {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6430) HTTPFS - Implement XAttr support
[ https://issues.apache.org/jira/browse/HDFS-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042296#comment-14042296 ] Hudson commented on HDFS-6430: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5763 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5763/]) HDFS-6430. HTTPFS - Implement XAttr support. (Yi Liu via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605118) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/Parameters.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoXAttrs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt HTTPFS - Implement XAttr support Key: HDFS-6430 URL: https://issues.apache.org/jira/browse/HDFS-6430 Project: Hadoop HDFS Issue Type: Task Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 2.5.0 Attachments: HDFS-6430.1.patch, HDFS-6430.2.patch, HDFS-6430.3.patch, HDFS-6430.4.patch, HDFS-6430.5.patch, HDFS-6430.patch Add xattr support to HttpFS. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6593) Move SnapshotDiffInfo out of INodeDirectorySnapshottable
[ https://issues.apache.org/jira/browse/HDFS-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042606#comment-14042606 ] Hudson commented on HDFS-6593: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5770 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5770/]) HDFS-6593. Move SnapshotDiffInfo out of INodeDirectorySnapshottable. Contributed by Jing Zhao. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605169) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java Move SnapshotDiffInfo out of INodeDirectorySnapshottable Key: HDFS-6593 URL: https://issues.apache.org/jira/browse/HDFS-6593 Project: Hadoop HDFS Issue Type: Improvement Components: namenode, snapshots Reporter: Jing Zhao Assignee: Jing Zhao Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6593.000.patch, HDFS-6593.001.patch, HDFS-6593.002.patch Per discussion in HDFS-4667, we can move SnapshotDiffInfo out of INodeDirectorySnapshottable as an individual class. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042672#comment-14042672 ] Hudson commented on HDFS-6587: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5771 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5771/]) HDFS-6587. Fix a typo in message issued from explorer.js. Contributed by Yongjun Zhang. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605184) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt We need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used; the hard-coded mockNN1 is used instead. This bug introduces a nondeterministic test failure when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException
[ https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042829#comment-14042829 ] Hudson commented on HDFS-6475: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5774 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5774/]) HDFS-6475. WebHdfs clients fail without retry because incorrect handling of StandbyException. Contributed by Yongjun Zhang. (atm: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605217) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExceptionHandler.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java WebHdfs clients fail without retry because incorrect handling of StandbyException - Key: HDFS-6475 URL: https://issues.apache.org/jira/browse/HDFS-6475 Project: Hadoop HDFS Issue Type: Bug Components: ha, webhdfs Affects Versions: 2.4.0 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 2.5.0 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, HDFS-6475.005.patch, HDFS-6475.006.patch, HDFS-6475.007.patch, HDFS-6475.008.patch, HDFS-6475.009.patch With WebHdfs clients connected to an HA HDFS service, the delegation token is previously initialized with the Active NN. When a client tries to issue a request, the NN it contacts comes from a map returned by DFSUtil.getNNServiceRpcAddresses(conf), and the client contacts the NNs in that order, so the first one it runs into may well be the Standby NN. If the Standby NN doesn't have the updated client credential, it throws a SecurityException that wraps a StandbyException. The client is expected to retry the other NN, but due to the insufficient handling of the SecurityException mentioned above, it fails.
Example message: {code} {RemoteException={message=Failed to obtain user group information: org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException, javaClassName=java.lang.SecurityException, exception=SecurityException}} org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to obtain user group information: org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696) at kclient1.kclient$1.run(kclient.java:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528) at kclient1.kclient.main(kclient.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
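As the example message shows, the StandbyException arrives flattened into the message text of a SecurityException, so a naive `instanceof` check on the client never sees it. A rough sketch of the detection the client needs (illustrative only; the actual HDFS-6475 fix lives in the server-side ExceptionHandler, and this helper's name is hypothetical):

```java
// Hedged sketch: decide whether a failure is really a StandbyException in
// disguise, so the caller can retry the other NameNode instead of failing.
// Matching on the message text is necessary because the server-side
// StandbyException reaches the client flattened into a SecurityException.
public class StandbyUnwrap {
    static boolean looksLikeStandby(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            String msg = cur.getMessage();
            if (cur instanceof SecurityException
                    && msg != null && msg.contains("StandbyException")) {
                return true;
            }
        }
        return false;
    }
}
```

A client that treats such failures as retryable (rather than terminal security errors) would then fail over to the next NN in the list from DFSUtil.getNNServiceRpcAddresses(conf).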
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043323#comment-14043323 ] Hudson commented on HDFS-6587: -- FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/594/]) HDFS-6587. Fix a typo in message issued from explorer.js. Contributed by Yongjun Zhang. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605184) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt Need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used, and the hard-coded mockNN1 is used instead. This bug introduces nondeterministic test failures when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
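The bug pattern described above can be sketched in miniature (plain strings standing in for the Mockito mocks; this is not the real test code): a helper that accepts a mock as a parameter but inspects a hard-coded one gives an answer that never depends on which mock the caller asked about.

```java
// Sketch of the TestBPOfferService bug: the buggy helper ignores its mockNN
// parameter and always consults mockNN1, so waiting on mockNN2 can return
// before mockNN2 has actually been called.
import java.util.Set;

public class WaitHelperBug {
    // Pretend only mockNN1 has received the block-received RPC so far.
    static final Set<String> mocksThatGotTheCall = Set.of("mockNN1");

    // Buggy version: parameter accepted but never used; mockNN1 hard-coded.
    static boolean waitForBlockReceivedBuggy(String mockNN) {
        return mocksThatGotTheCall.contains("mockNN1");
    }

    // Fixed version: checks the mock that was actually passed in.
    static boolean waitForBlockReceivedFixed(String mockNN) {
        return mocksThatGotTheCall.contains(mockNN);
    }

    public static void main(String[] args) {
        // The buggy helper reports success for mockNN2 even though mockNN2
        // never saw the call; this premature return is what makes the test
        // nondeterministic.
        System.out.println(waitForBlockReceivedBuggy("mockNN2")); // true (wrong)
        System.out.println(waitForBlockReceivedFixed("mockNN2")); // false (correct)
    }
}
```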
[jira] [Commented] (HDFS-6593) Move SnapshotDiffInfo out of INodeDirectorySnapshottable
[ https://issues.apache.org/jira/browse/HDFS-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043335#comment-14043335 ] Hudson commented on HDFS-6593: -- FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/594/]) HDFS-6593. Move SnapshotDiffInfo out of INodeDirectorySnapshottable. Contributed by Jing Zhao. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605169) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java Move SnapshotDiffInfo out of INodeDirectorySnapshottable Key: HDFS-6593 URL: https://issues.apache.org/jira/browse/HDFS-6593 Project: Hadoop HDFS Issue Type: Improvement Components: namenode, snapshots Reporter: Jing Zhao Assignee: Jing Zhao Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6593.000.patch, HDFS-6593.001.patch, HDFS-6593.002.patch Per discussion in HDFS-4667, we can move SnapshotDiffInfo out of INodeDirectorySnapshottable as an individual class. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6486) Add user doc for XAttrs via WebHDFS.
[ https://issues.apache.org/jira/browse/HDFS-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043327#comment-14043327 ] Hudson commented on HDFS-6486: -- FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/594/]) HDFS-6486. Add user doc for XAttrs via WebHDFS. Contributed by Yi Liu. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605062) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm Add user doc for XAttrs via WebHDFS. Key: HDFS-6486 URL: https://issues.apache.org/jira/browse/HDFS-6486 Project: Hadoop HDFS Issue Type: Task Components: webhdfs Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6486.patch Add the user doc for XAttrs via WebHDFS. Set xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=SETXATTR&xattr.name=XATTRNAME&xattr.value=XATTRVALUE&flag=FLAG' {code} Remove xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=REMOVEXATTR&xattr.name=XATTRNAME' {code} Get an xattr: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME&encoding=ENCODING' {code} Get multiple xattrs (XATTRNAME1, XATTRNAME2, XATTRNAME3): {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME1&xattr.name=XATTRNAME2&xattr.name=XATTRNAME3&encoding=ENCODING' {code} Get all xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&encoding=ENCODING' {code} List xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=LISTXATTRS' {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6430) HTTPFS - Implement XAttr support
[ https://issues.apache.org/jira/browse/HDFS-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043337#comment-14043337 ] Hudson commented on HDFS-6430: -- FAILURE: Integrated in Hadoop-Yarn-trunk #594 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/594/]) HDFS-6430. HTTPFS - Implement XAttr support. (Yi Liu via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605118) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/Parameters.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoXAttrs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt HTTPFS - Implement XAttr support Key: HDFS-6430 URL: https://issues.apache.org/jira/browse/HDFS-6430 Project: Hadoop HDFS Issue Type: Task Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 2.5.0 Attachments: HDFS-6430.1.patch, HDFS-6430.2.patch, HDFS-6430.3.patch, HDFS-6430.4.patch, HDFS-6430.5.patch, HDFS-6430.patch Add xattr support to HttpFS. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException
[ https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043493#comment-14043493 ] Hudson commented on HDFS-6475: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/]) HDFS-6475. WebHdfs clients fail without retry because incorrect handling of StandbyException. Contributed by Yongjun Zhang. (atm: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605217) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExceptionHandler.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java WebHdfs clients fail without retry because incorrect handling of StandbyException - Key: HDFS-6475 URL: https://issues.apache.org/jira/browse/HDFS-6475 Project: Hadoop HDFS Issue Type: Bug Components: ha, webhdfs Affects Versions: 2.4.0 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 2.5.0 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, HDFS-6475.005.patch, HDFS-6475.006.patch, HDFS-6475.007.patch, HDFS-6475.008.patch, HDFS-6475.009.patch With WebHdfs clients connected to an HA HDFS service, the delegation token is initially obtained from the active NN. When a client issues a request, the NNs it may contact are stored in a map returned by DFSUtil.getNNServiceRpcAddresses(conf), and the client contacts them in that order, so the first one it reaches is likely the standby NN. If the standby NN does not have the updated client credential, it throws a SecurityException that wraps a StandbyException. The client is expected to retry the other NN, but due to the insufficient handling of SecurityException described above, it fails instead. 
Example message: {code} {RemoteException={message=Failed to obtain user group information: org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException, javaClassName=java.lang.SecurityException, exception=SecurityException}} org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to obtain user group information: org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696) at kclient1.kclient$1.run(kclient.java:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528) at kclient1.kclient.main(kclient.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043491#comment-14043491 ] Hudson commented on HDFS-6587: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/]) HDFS-6587. Fix a typo in message issued from explorer.js. Contributed by Yongjun Zhang. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605184) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt Need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used, and the hard-coded mockNN1 is used instead. This bug introduces nondeterministic test failures when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6486) Add user doc for XAttrs via WebHDFS.
[ https://issues.apache.org/jira/browse/HDFS-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043495#comment-14043495 ] Hudson commented on HDFS-6486: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/]) HDFS-6486. Add user doc for XAttrs via WebHDFS. Contributed by Yi Liu. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605062) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm Add user doc for XAttrs via WebHDFS. Key: HDFS-6486 URL: https://issues.apache.org/jira/browse/HDFS-6486 Project: Hadoop HDFS Issue Type: Task Components: webhdfs Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6486.patch Add the user doc for XAttrs via WebHDFS. Set xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=SETXATTR&xattr.name=XATTRNAME&xattr.value=XATTRVALUE&flag=FLAG' {code} Remove xattr: {code} curl -i -X PUT 'http://HOST:PORT/webhdfs/v1/PATH?op=REMOVEXATTR&xattr.name=XATTRNAME' {code} Get an xattr: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME&encoding=ENCODING' {code} Get multiple xattrs (XATTRNAME1, XATTRNAME2, XATTRNAME3): {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&xattr.name=XATTRNAME1&xattr.name=XATTRNAME2&xattr.name=XATTRNAME3&encoding=ENCODING' {code} Get all xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=GETXATTRS&encoding=ENCODING' {code} List xattrs: {code} curl -i 'http://HOST:PORT/webhdfs/v1/PATH?op=LISTXATTRS' {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6430) HTTPFS - Implement XAttr support
[ https://issues.apache.org/jira/browse/HDFS-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043505#comment-14043505 ] Hudson commented on HDFS-6430: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/]) HDFS-6430. HTTPFS - Implement XAttr support. (Yi Liu via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605118) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/Parameters.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoXAttrs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt HTTPFS - Implement XAttr support Key: HDFS-6430 URL: https://issues.apache.org/jira/browse/HDFS-6430 Project: Hadoop HDFS Issue Type: Task Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 2.5.0 Attachments: HDFS-6430.1.patch, HDFS-6430.2.patch, HDFS-6430.3.patch, HDFS-6430.4.patch, HDFS-6430.5.patch, HDFS-6430.patch Add xattr support to HttpFS. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6593) Move SnapshotDiffInfo out of INodeDirectorySnapshottable
[ https://issues.apache.org/jira/browse/HDFS-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043503#comment-14043503 ] Hudson commented on HDFS-6593: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1785 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1785/]) HDFS-6593. Move SnapshotDiffInfo out of INodeDirectorySnapshottable. Contributed by Jing Zhao. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605169) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java Move SnapshotDiffInfo out of INodeDirectorySnapshottable Key: HDFS-6593 URL: https://issues.apache.org/jira/browse/HDFS-6593 Project: Hadoop HDFS Issue Type: Improvement Components: namenode, snapshots Reporter: Jing Zhao Assignee: Jing Zhao Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6593.000.patch, HDFS-6593.001.patch, HDFS-6593.002.patch Per discussion in HDFS-4667, we can move SnapshotDiffInfo out of INodeDirectorySnapshottable as an individual class. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException
[ https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043677#comment-14043677 ] Hudson commented on HDFS-6475: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/]) HDFS-6475. WebHdfs clients fail without retry because incorrect handling of StandbyException. Contributed by Yongjun Zhang. (atm: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605217) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExceptionHandler.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java WebHdfs clients fail without retry because incorrect handling of StandbyException - Key: HDFS-6475 URL: https://issues.apache.org/jira/browse/HDFS-6475 Project: Hadoop HDFS Issue Type: Bug Components: ha, webhdfs Affects Versions: 2.4.0 Reporter: Yongjun Zhang Assignee: Yongjun Zhang Fix For: 2.5.0 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, HDFS-6475.005.patch, HDFS-6475.006.patch, HDFS-6475.007.patch, HDFS-6475.008.patch, HDFS-6475.009.patch With WebHdfs clients connected to an HA HDFS service, the delegation token is initially obtained from the active NN. When a client issues a request, the NNs it may contact are stored in a map returned by DFSUtil.getNNServiceRpcAddresses(conf), and the client contacts them in that order, so the first one it reaches is likely the standby NN. If the standby NN does not have the updated client credential, it throws a SecurityException that wraps a StandbyException. The client is expected to retry the other NN, but due to the insufficient handling of SecurityException described above, it fails instead. 
Example message: {code} {RemoteException={message=Failed to obtain user group information: org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException, javaClassName=java.lang.SecurityException, exception=SecurityException}} org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to obtain user group information: org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685) at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696) at kclient1.kclient$1.run(kclient.java:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:356) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528) at kclient1.kclient.main(kclient.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6430) HTTPFS - Implement XAttr support
[ https://issues.apache.org/jira/browse/HDFS-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043688#comment-14043688 ] Hudson commented on HDFS-6430: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/]) HDFS-6430. HTTPFS - Implement XAttr support. (Yi Liu via tucu) (tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605118) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSUtils.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/Parameters.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerNoXAttrs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestHdfsHelper.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt HTTPFS - Implement XAttr support Key: HDFS-6430 URL: https://issues.apache.org/jira/browse/HDFS-6430 Project: Hadoop HDFS Issue Type: Task Affects Versions: 3.0.0 Reporter: Yi Liu Assignee: Yi Liu Fix For: 2.5.0 Attachments: HDFS-6430.1.patch, HDFS-6430.2.patch, HDFS-6430.3.patch, HDFS-6430.4.patch, HDFS-6430.5.patch, HDFS-6430.patch Add xattr support to HttpFS. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6593) Move SnapshotDiffInfo out of INodeDirectorySnapshottable
[ https://issues.apache.org/jira/browse/HDFS-6593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043686#comment-14043686 ] Hudson commented on HDFS-6593: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/]) HDFS-6593. Move SnapshotDiffInfo out of INodeDirectorySnapshottable. Contributed by Jing Zhao. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605169) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffInfo.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java Move SnapshotDiffInfo out of INodeDirectorySnapshottable Key: HDFS-6593 URL: https://issues.apache.org/jira/browse/HDFS-6593 Project: Hadoop HDFS Issue Type: Improvement Components: namenode, snapshots Reporter: Jing Zhao Assignee: Jing Zhao Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6593.000.patch, HDFS-6593.001.patch, HDFS-6593.002.patch Per discussion in HDFS-4667, we can move SnapshotDiffInfo out of INodeDirectorySnapshottable as an individual class. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6587) Bug in TestBPOfferService can cause test failure
[ https://issues.apache.org/jira/browse/HDFS-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043675#comment-14043675 ] Hudson commented on HDFS-6587: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1812 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1812/]) HDFS-6587. Fix a typo in message issued from explorer.js. Contributed by Yongjun Zhang. (wheat9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605184) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js Bug in TestBPOfferService can cause test failure Key: HDFS-6587 URL: https://issues.apache.org/jira/browse/HDFS-6587 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.0 Reporter: Zhilei Xu Assignee: Zhilei Xu Fix For: 3.0.0, 2.5.0 Attachments: patch_TestBPOfferService.txt Need to fix a bug in TestBPOfferService#waitForBlockReceived that fails the trunk, e.g. in Build #1781. Details: in this test, the utility function waitForBlockReceived() has a bug: the parameter mockNN is never used, and the hard-coded mockNN1 is used instead. This bug introduces nondeterministic test failures when testBasicFunctionality() calls ret = waitForBlockReceived(FAKE_BLOCK, mockNN2); and the call finishes before the actual interaction with mockNN2 happens. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6595) Configure the maximum threads allowed for balancing on datanodes
[ https://issues.apache.org/jira/browse/HDFS-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14043953#comment-14043953 ] Hudson commented on HDFS-6595: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5779 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5779/]) HDFS-6595. Allow the maximum threads for balancing on datanodes to be configurable. Contributed by Benoy Antony (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605565) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java Configure the maximum threads allowed for balancing on datanodes Key: HDFS-6595 URL: https://issues.apache.org/jira/browse/HDFS-6595 Project: Hadoop HDFS Issue Type: Improvement Components: balancer, datanode Reporter: Benoy Antony Assignee: Benoy Antony Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6595.patch, HDFS-6595.patch Currently the datanode allows a maximum of 5 threads to be used for balancing. In some cases, it may make sense to use a different number of threads for the purpose of moving blocks. -- This message was sent by Atlassian JIRA (v6.2#6252)
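A minimal sketch of how such a knob is consumed, assuming the configuration key name `dfs.datanode.balance.max.concurrent.moves` added by this patch (verify against DFSConfigKeys in your Hadoop version). Plain `java.util.Properties` stands in for Hadoop's `Configuration` so the sketch is self-contained:

```java
// Sketch only: reading the balancing-thread limit with a fallback to the
// previous hard-coded default of 5. Key name is an assumption taken from
// the HDFS-6595 patch; check hdfs-default.xml for your release.
import java.util.Properties;

public class BalancerThreadsConfig {
    static final String KEY = "dfs.datanode.balance.max.concurrent.moves"; // assumed key
    static final int DEFAULT_MOVES = 5; // the old hard-coded limit

    static int maxConcurrentMoves(Properties conf) {
        return Integer.parseInt(
                conf.getProperty(KEY, String.valueOf(DEFAULT_MOVES)));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(maxConcurrentMoves(conf)); // 5 (default)
        // An operator raising the limit, as they would in hdfs-site.xml:
        conf.setProperty(KEY, "20");
        System.out.println(maxConcurrentMoves(conf)); // 20
    }
}
```

In the real patch the value is read on the datanode side (DataXceiverServer) and also by the Balancer, so both ends must agree on the configured limit.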
[jira] [Commented] (HDFS-6595) Configure the maximum threads allowed for balancing on datanodes
[ https://issues.apache.org/jira/browse/HDFS-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044568#comment-14044568 ] Hudson commented on HDFS-6595: -- FAILURE: Integrated in Hadoop-Yarn-trunk #595 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/595/]) HDFS-6595. Allow the maximum threads for balancing on datanodes to be configurable. Contributed by Benoy Antony (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605565) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java Configure the maximum threads allowed for balancing on datanodes Key: HDFS-6595 URL: https://issues.apache.org/jira/browse/HDFS-6595 Project: Hadoop HDFS Issue Type: Improvement Components: balancer, datanode Reporter: Benoy Antony Assignee: Benoy Antony Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6595.patch, HDFS-6595.patch Currently the datanode allows a maximum of 5 threads to be used for balancing. In some cases, it may make sense to use a different number of threads for the purpose of moving blocks. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6595) Configure the maximum threads allowed for balancing on datanodes
[ https://issues.apache.org/jira/browse/HDFS-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044705#comment-14044705 ] Hudson commented on HDFS-6595: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1786 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1786/]) HDFS-6595. Allow the maximum threads for balancing on datanodes to be configurable. Contributed by Benoy Antony (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605565) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java Configure the maximum threads allowed for balancing on datanodes Key: HDFS-6595 URL: https://issues.apache.org/jira/browse/HDFS-6595 Project: Hadoop HDFS Issue Type: Improvement Components: balancer, datanode Reporter: Benoy Antony Assignee: Benoy Antony Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6595.patch, HDFS-6595.patch Currently the datanode allows a maximum of 5 threads to be used for balancing. In some cases, it may make sense to use a different number of threads for the purpose of moving blocks. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6595) Configure the maximum threads allowed for balancing on datanodes
[ https://issues.apache.org/jira/browse/HDFS-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044749#comment-14044749 ] Hudson commented on HDFS-6595: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1813 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1813/]) HDFS-6595. Allow the maximum threads for balancing on datanodes to be configurable. Contributed by Benoy Antony (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605565) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java Configure the maximum threads allowed for balancing on datanodes Key: HDFS-6595 URL: https://issues.apache.org/jira/browse/HDFS-6595 Project: Hadoop HDFS Issue Type: Improvement Components: balancer, datanode Reporter: Benoy Antony Assignee: Benoy Antony Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6595.patch, HDFS-6595.patch Currently the datanode allows a maximum of 5 threads to be used for balancing. In some cases, it may make sense to use a different number of threads for the purpose of moving blocks. -- This message was sent by Atlassian JIRA (v6.2#6252)
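The HDFS-6595 change above makes the datanode's balancing thread cap a configuration setting. As a minimal hdfs-site.xml sketch, assuming the key name introduced by the committed DFSConfigKeys change is `dfs.datanode.balance.max.concurrent.moves` (verify against the hdfs-default.xml of your Hadoop version):

```xml
<!-- hdfs-site.xml: raise the datanode's balancing thread cap above the
     previous hard-coded limit of 5; key name assumed from the patch. -->
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>10</value>
</property>
```

The setting is read on the datanode side (DataXceiverServer), so it takes effect after a datanode restart.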
[jira] [Commented] (HDFS-6572) Add an option to the NameNode that prints the software and on-disk image versions
[ https://issues.apache.org/jira/browse/HDFS-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14045317#comment-14045317 ] Hudson commented on HDFS-6572: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5786 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5786/]) HDFS-6572. Add an option to the NameNode that prints the software and on-disk image versions. Contributed by Charles Lamb. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605928) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java Add an option to the NameNode that prints the software and on-disk image versions - Key: HDFS-6572 URL: https://issues.apache.org/jira/browse/HDFS-6572 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6572.001.patch, HDFS-6572.002.patch The HDFS namenode should have a startup option that prints the metadata versions of both the software and the on-disk image. This will be useful for debugging certain situations. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6572) Add an option to the NameNode that prints the software and on-disk image versions
[ https://issues.apache.org/jira/browse/HDFS-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14045819#comment-14045819 ] Hudson commented on HDFS-6572: -- FAILURE: Integrated in Hadoop-Yarn-trunk #596 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/596/]) HDFS-6572. Add an option to the NameNode that prints the software and on-disk image versions. Contributed by Charles Lamb. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605928) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java Add an option to the NameNode that prints the software and on-disk image versions - Key: HDFS-6572 URL: https://issues.apache.org/jira/browse/HDFS-6572 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6572.001.patch, HDFS-6572.002.patch The HDFS namenode should have a startup option that prints the metadata versions of both the software and the on-disk image. This will be useful for debugging certain situations. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6572) Add an option to the NameNode that prints the software and on-disk image versions
[ https://issues.apache.org/jira/browse/HDFS-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14045977#comment-14045977 ] Hudson commented on HDFS-6572: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1787 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1787/]) HDFS-6572. Add an option to the NameNode that prints the software and on-disk image versions. Contributed by Charles Lamb. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605928) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java Add an option to the NameNode that prints the software and on-disk image versions - Key: HDFS-6572 URL: https://issues.apache.org/jira/browse/HDFS-6572 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6572.001.patch, HDFS-6572.002.patch The HDFS namenode should have a startup option that prints the metadata versions of both the software and the on-disk image. This will be useful for debugging certain situations. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6572) Add an option to the NameNode that prints the software and on-disk image versions
[ https://issues.apache.org/jira/browse/HDFS-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046035#comment-14046035 ] Hudson commented on HDFS-6572: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1814 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1814/]) HDFS-6572. Add an option to the NameNode that prints the software and on-disk image versions. Contributed by Charles Lamb. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1605928) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java Add an option to the NameNode that prints the software and on-disk image versions - Key: HDFS-6572 URL: https://issues.apache.org/jira/browse/HDFS-6572 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6572.001.patch, HDFS-6572.002.patch The HDFS namenode should have a startup option that prints the metadata versions of both the software and the on-disk image. This will be useful for debugging certain situations. -- This message was sent by Atlassian JIRA (v6.2#6252)
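As a usage sketch of the HDFS-6572 option above (the flag name follows the StartupOption added in HdfsServerConstants; availability and exact spelling depend on your Hadoop build, so treat this as an assumption):

```
# Print the software's metadata version and the on-disk fsimage version,
# then exit without starting the namenode (option name assumed from the patch).
hdfs namenode -metadataVersion
```

This is intended for debugging, e.g. confirming whether an upgrade is pending before bringing a namenode up.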
[jira] [Commented] (HDFS-6556) Refine XAttr permissions
[ https://issues.apache.org/jira/browse/HDFS-6556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046824#comment-14046824 ] Hudson commented on HDFS-6556: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5795 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5795/]) HDFS-6556. Refine XAttr permissions. Contributed by Uma Maheswara Rao G. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606320) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java Refine XAttr permissions Key: HDFS-6556 URL: https://issues.apache.org/jira/browse/HDFS-6556 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.5.0 Reporter: Yi Liu Assignee: Uma Maheswara Rao G Attachments: RefinedPermissions-HDFS-6556-1.patch, RefinedPermissions-HDFS-6556.patch, refinedPermissions-HDFS-6556-2.patch, refinedPermissions-HDFS-6556-3.patch After discussing with Uma, we should refine how permissions are set for {{user}} and {{trusted}} namespace xattrs. *1.* For {{user}} namespace xattrs, HDFS-6374 says setXAttr should require the user to be the owner of the file or directory, but we had a bit of a misunderstanding. It actually is: {quote} The access permissions for user attributes are defined by the file permission bits. Only regular files and directories can have extended attributes. For sticky directories, only the owner and privileged user can write attributes. {quote} We can refer to the Linux source code at http://lxr.free-electrons.com/source/fs/xattr.c?v=2.6.35 I also checked in Linux; it's controlled by the file permission bits for regular files and directories (not sticky). *2.* For the {{trusted}} namespace, we currently require the user to be both the owner and a superuser. Actually, superuser is enough. -- This message was sent by Atlassian JIRA (v6.2#6252)
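The refined xattr rules described above (user namespace governed by the file permission bits, with a sticky-directory owner check; trusted namespace requiring only superuser) can be sketched as a small decision function. This is a minimal illustration, not the actual XAttrPermissionFilter API; the class, method, and parameter names are hypothetical.

```java
// Hypothetical sketch of the refined xattr permission checks from HDFS-6556.
// Names are illustrative; the real logic lives in XAttrPermissionFilter.
public class XAttrPermissionSketch {
    enum NameSpace { USER, TRUSTED, SYSTEM, SECURITY }

    /** Returns true if the caller may write an xattr in the given namespace. */
    static boolean canSetXAttr(NameSpace ns, boolean isSuperUser,
                               boolean hasWriteAccess, boolean isOwner,
                               boolean stickyDir) {
        switch (ns) {
            case TRUSTED:
                // Refined rule: superuser alone is sufficient; ownership
                // is no longer additionally required.
                return isSuperUser;
            case USER:
                // Governed by the file permission bits, as in Linux; for
                // sticky directories only the owner (or superuser) may write.
                if (stickyDir) {
                    return isOwner || isSuperUser;
                }
                return hasWriteAccess;
            default:
                // system/security namespaces are not settable by ordinary users.
                return false;
        }
    }

    public static void main(String[] args) {
        // A non-owner with write access may set a user xattr on a regular file.
        System.out.println(canSetXAttr(NameSpace.USER, false, true, false, false));
        // An owner who is not superuser may NOT set a trusted xattr.
        System.out.println(canSetXAttr(NameSpace.TRUSTED, false, true, true, false));
    }
}
```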
[jira] [Commented] (HDFS-6556) Refine XAttr permissions
[ https://issues.apache.org/jira/browse/HDFS-6556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046884#comment-14046884 ] Hudson commented on HDFS-6556: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1815 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1815/]) HDFS-6556. Refine XAttr permissions. Contributed by Uma Maheswara Rao G. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606320) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java Refine XAttr permissions Key: HDFS-6556 URL: https://issues.apache.org/jira/browse/HDFS-6556 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.5.0 Reporter: Yi Liu Assignee: Uma Maheswara Rao G Fix For: 3.0.0, 2.5.0 Attachments: RefinedPermissions-HDFS-6556-1.patch, RefinedPermissions-HDFS-6556.patch, refinedPermissions-HDFS-6556-2.patch, refinedPermissions-HDFS-6556-3.patch After discussing with Uma, we should refine how permissions are set for {{user}} and {{trusted}} namespace xattrs. *1.* For {{user}} namespace xattrs, HDFS-6374 says setXAttr should require the user to be the owner of the file or directory, but we had a bit of a misunderstanding. It actually is: {quote} The access permissions for user attributes are defined by the file permission bits. Only regular files and directories can have extended attributes. For sticky directories, only the owner and privileged user can write attributes. {quote} We can refer to the Linux source code at http://lxr.free-electrons.com/source/fs/xattr.c?v=2.6.35 I also checked in Linux; it's controlled by the file permission bits for regular files and directories (not sticky). *2.* For the {{trusted}} namespace, we currently require the user to be both the owner and a superuser. Actually, superuser is enough. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6601) Issues in finalizing rolling upgrade when there is a layout version change
[ https://issues.apache.org/jira/browse/HDFS-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14046891#comment-14046891 ] Hudson commented on HDFS-6601: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5796 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5796/]) HDFS-6601. Issues in finalizing rolling upgrade when there is a layout version change. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606371) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java Issues in finalizing rolling upgrade when there is a layout version change -- Key: HDFS-6601 URL: https://issues.apache.org/jira/browse/HDFS-6601 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Kihwal Lee Assignee: Kihwal Lee Priority: Blocker Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6601.patch After HDFS-6545, we have noticed a couple of issues. - The storage dir's VERSION file is not properly updated. This becomes a problem when there is a layout version change. We can have the finalization do {{storage.writeAll()}}. - {{OP_ROLLING_UPGRADE_FINALIZE}} cannot be replayed once the corresponding {{OP_ROLLING_UPGRADE_START}} is consumed and a new fsimage is created (e.g. a rollback image). On restart, the NN terminates, complaining it can't finalize something that it didn't start. We can make the NN ignore {{OP_ROLLING_UPGRADE_FINALIZE}} if no rolling upgrade is in progress. -- This message was sent by Atlassian JIRA (v6.2#6252)
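The second fix described above (ignore a replayed finalize op instead of terminating) can be sketched as a simple guard. This is a minimal illustration under assumed names, not the actual FSEditLogLoader code.

```java
// Hypothetical sketch of the HDFS-6601 finalize-replay fix: when a
// finalize op is replayed but no rolling upgrade is in progress, ignore
// it rather than terminating the namenode. Names are illustrative.
public class RollingUpgradeReplaySketch {
    private boolean rollingUpgradeInProgress;

    void startRollingUpgrade() {
        // Corresponds to consuming OP_ROLLING_UPGRADE_START.
        rollingUpgradeInProgress = true;
    }

    /** Returns true if the finalize op was applied, false if ignored. */
    boolean applyFinalizeOp() {
        if (!rollingUpgradeInProgress) {
            // Replaying OP_ROLLING_UPGRADE_FINALIZE after the corresponding
            // start op was already consumed into a new fsimage: ignore it.
            return false;
        }
        rollingUpgradeInProgress = false;
        // The real fix also rewrites each storage dir's VERSION file here
        // (storage.writeAll()) so a layout version change is persisted.
        return true;
    }
}
```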
[jira] [Commented] (HDFS-6556) Refine XAttr permissions
[ https://issues.apache.org/jira/browse/HDFS-6556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047098#comment-14047098 ] Hudson commented on HDFS-6556: -- FAILURE: Integrated in Hadoop-Yarn-trunk #598 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/598/]) HDFS-6556. Refine XAttr permissions. Contributed by Uma Maheswara Rao G. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606320) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java Refine XAttr permissions Key: HDFS-6556 URL: https://issues.apache.org/jira/browse/HDFS-6556 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.5.0 Reporter: Yi Liu Assignee: Uma Maheswara Rao G Fix For: 3.0.0, 2.5.0 Attachments: RefinedPermissions-HDFS-6556-1.patch, RefinedPermissions-HDFS-6556.patch, refinedPermissions-HDFS-6556-2.patch, refinedPermissions-HDFS-6556-3.patch After discussing with Uma, we should refine how permissions are set for {{user}} and {{trusted}} namespace xattrs. *1.* For {{user}} namespace xattrs, HDFS-6374 says setXAttr should require the user to be the owner of the file or directory, but we had a bit of a misunderstanding. It actually is: {quote} The access permissions for user attributes are defined by the file permission bits. Only regular files and directories can have extended attributes. For sticky directories, only the owner and privileged user can write attributes. {quote} We can refer to the Linux source code at http://lxr.free-electrons.com/source/fs/xattr.c?v=2.6.35 I also checked in Linux; it's controlled by the file permission bits for regular files and directories (not sticky). *2.* For the {{trusted}} namespace, we currently require the user to be both the owner and a superuser. Actually, superuser is enough. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6601) Issues in finalizing rolling upgrade when there is a layout version change
[ https://issues.apache.org/jira/browse/HDFS-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047097#comment-14047097 ] Hudson commented on HDFS-6601: -- FAILURE: Integrated in Hadoop-Yarn-trunk #598 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/598/]) HDFS-6601. Issues in finalizing rolling upgrade when there is a layout version change. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606371) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java Issues in finalizing rolling upgrade when there is a layout version change -- Key: HDFS-6601 URL: https://issues.apache.org/jira/browse/HDFS-6601 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Kihwal Lee Assignee: Kihwal Lee Priority: Blocker Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6601.patch After HDFS-6545, we have noticed a couple of issues. - The storage dir's VERSION file is not properly updated. This becomes a problem when there is a layout version change. We can have the finalization do {{storage.writeAll()}}. - {{OP_ROLLING_UPGRADE_FINALIZE}} cannot be replayed once the corresponding {{OP_ROLLING_UPGRADE_START}} is consumed and a new fsimage is created (e.g. a rollback image). On restart, the NN terminates, complaining it can't finalize something that it didn't start. We can make the NN ignore {{OP_ROLLING_UPGRADE_FINALIZE}} if no rolling upgrade is in progress. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6601) Issues in finalizing rolling upgrade when there is a layout version change
[ https://issues.apache.org/jira/browse/HDFS-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047128#comment-14047128 ] Hudson commented on HDFS-6601: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1789 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1789/]) HDFS-6601. Issues in finalizing rolling upgrade when there is a layout version change. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606371) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java Issues in finalizing rolling upgrade when there is a layout version change -- Key: HDFS-6601 URL: https://issues.apache.org/jira/browse/HDFS-6601 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Kihwal Lee Assignee: Kihwal Lee Priority: Blocker Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6601.patch After HDFS-6545, we have noticed a couple of issues. - The storage dir's VERSION file is not properly updated. This becomes a problem when there is a layout version change. We can have the finalization do {{storage.writeAll()}}. - {{OP_ROLLING_UPGRADE_FINALIZE}} cannot be replayed once the corresponding {{OP_ROLLING_UPGRADE_START}} is consumed and a new fsimage is created (e.g. a rollback image). On restart, the NN terminates, complaining it can't finalize something that it didn't start. We can make the NN ignore {{OP_ROLLING_UPGRADE_FINALIZE}} if no rolling upgrade is in progress. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6556) Refine XAttr permissions
[ https://issues.apache.org/jira/browse/HDFS-6556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047129#comment-14047129 ] Hudson commented on HDFS-6556: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1789 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1789/]) HDFS-6556. Refine XAttr permissions. Contributed by Uma Maheswara Rao G. (umamahesh: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606320) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java Refine XAttr permissions Key: HDFS-6556 URL: https://issues.apache.org/jira/browse/HDFS-6556 Project: Hadoop HDFS Issue Type: Bug Components: namenode Affects Versions: 2.5.0 Reporter: Yi Liu Assignee: Uma Maheswara Rao G Fix For: 3.0.0, 2.5.0 Attachments: RefinedPermissions-HDFS-6556-1.patch, RefinedPermissions-HDFS-6556.patch, refinedPermissions-HDFS-6556-2.patch, refinedPermissions-HDFS-6556-3.patch After discussing with Uma, we should refine how permissions are set for {{user}} and {{trusted}} namespace xattrs. *1.* For {{user}} namespace xattrs, HDFS-6374 says setXAttr should require the user to be the owner of the file or directory, but we had a bit of a misunderstanding. It actually is: {quote} The access permissions for user attributes are defined by the file permission bits. Only regular files and directories can have extended attributes. For sticky directories, only the owner and privileged user can write attributes. {quote} We can refer to the Linux source code at http://lxr.free-electrons.com/source/fs/xattr.c?v=2.6.35 I also checked in Linux; it's controlled by the file permission bits for regular files and directories (not sticky). *2.* For the {{trusted}} namespace, we currently require the user to be both the owner and a superuser. Actually, superuser is enough. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6601) Issues in finalizing rolling upgrade when there is a layout version change
[ https://issues.apache.org/jira/browse/HDFS-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047138#comment-14047138 ] Hudson commented on HDFS-6601: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1816 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1816/]) HDFS-6601. Issues in finalizing rolling upgrade when there is a layout version change. Contributed by Kihwal Lee. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606371) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java Issues in finalizing rolling upgrade when there is a layout version change -- Key: HDFS-6601 URL: https://issues.apache.org/jira/browse/HDFS-6601 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Kihwal Lee Assignee: Kihwal Lee Priority: Blocker Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6601.patch After HDFS-6545, we have noticed a couple of issues. - The storage dir's VERSION file is not properly updated. This becomes a problem when there is a layout version change. We can have the finalization do {{storage.writeAll()}}. - {{OP_ROLLING_UPGRADE_FINALIZE}} cannot be replayed once the corresponding {{OP_ROLLING_UPGRADE_START}} is consumed and a new fsimage is created (e.g. a rollback image). On restart, the NN terminates, complaining it can't finalize something that it didn't start. We can make the NN ignore {{OP_ROLLING_UPGRADE_FINALIZE}} if no rolling upgrade is in progress. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6418) Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk
[ https://issues.apache.org/jira/browse/HDFS-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047201#comment-14047201 ] Hudson commented on HDFS-6418: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5799 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5799/]) HDFS-6418. Regression: DFS_NAMENODE_USER_NAME_KEY missing (stevel: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606536) * /hadoop/common/trunk * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk --- Key: HDFS-6418 URL: https://issues.apache.org/jira/browse/HDFS-6418 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 3.0.0, 2.5.0 Reporter: Steve Loughran Assignee: Tsz Wo Nicholas Sze Priority: Blocker Fix For: 2.5.0 Attachments: h6418_20140619.patch Code I have that compiles against Hadoop 2.4 doesn't build against trunk, as someone took away {{DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY}}, apparently in HDFS-6181. I know the name was obsolete, but anyone who has compiled code using that reference, rather than cutting and pasting in the string, is going to find their code doesn't work. More subtly: that will lead to a link exception when trying to run that code on a 2.5+ cluster. This is a regression: the old names need to go back in, even if they refer to the new names and are marked as deprecated. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6418) Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk
[ https://issues.apache.org/jira/browse/HDFS-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047562#comment-14047562 ] Hudson commented on HDFS-6418: -- FAILURE: Integrated in Hadoop-Yarn-trunk #599 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/599/]) HDFS-6418. Regression: DFS_NAMENODE_USER_NAME_KEY missing (stevel: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606536) * /hadoop/common/trunk * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk --- Key: HDFS-6418 URL: https://issues.apache.org/jira/browse/HDFS-6418 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 3.0.0, 2.5.0 Reporter: Steve Loughran Assignee: Tsz Wo Nicholas Sze Priority: Blocker Fix For: 2.5.0 Attachments: h6418_20140619.patch Code I have that compiles against Hadoop 2.4 doesn't build against trunk, as someone took away {{DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY}}, apparently in HDFS-6181. I know the name was obsolete, but anyone who has compiled code using that reference, rather than cutting and pasting in the string, is going to find their code doesn't work. More subtly: that will lead to a link exception when trying to run that code on a 2.5+ cluster. This is a regression: the old names need to go back in, even if they refer to the new names and are marked as deprecated. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6418) Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk
[ https://issues.apache.org/jira/browse/HDFS-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047664#comment-14047664 ] Hudson commented on HDFS-6418: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1817 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1817/]) HDFS-6418. Regression: DFS_NAMENODE_USER_NAME_KEY missing (stevel: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606536) * /hadoop/common/trunk * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk --- Key: HDFS-6418 URL: https://issues.apache.org/jira/browse/HDFS-6418 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 3.0.0, 2.5.0 Reporter: Steve Loughran Assignee: Tsz Wo Nicholas Sze Priority: Blocker Fix For: 2.5.0 Attachments: h6418_20140619.patch Code I have that compiles against Hadoop 2.4 doesn't build against trunk, as someone took away {{DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY}} - apparently in HDFS-6181. I know the name was obsolete, but anyone who has compiled code using that reference - rather than cutting and pasting in the string - is going to find their code doesn't work. More subtly: that will lead to a link exception when trying to run that code on a 2.5+ cluster. This is a regression: the old names need to go back in, even if they refer to the new names and are marked as deprecated. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6418) Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk
[ https://issues.apache.org/jira/browse/HDFS-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047685#comment-14047685 ] Hudson commented on HDFS-6418: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1790 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1790/]) HDFS-6418. Regression: DFS_NAMENODE_USER_NAME_KEY missing (stevel: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606536) * /hadoop/common/trunk * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java Regression: DFS_NAMENODE_USER_NAME_KEY missing in trunk --- Key: HDFS-6418 URL: https://issues.apache.org/jira/browse/HDFS-6418 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 3.0.0, 2.5.0 Reporter: Steve Loughran Assignee: Tsz Wo Nicholas Sze Priority: Blocker Fix For: 2.5.0 Attachments: h6418_20140619.patch Code I have that compiles against Hadoop 2.4 doesn't build against trunk, as someone took away {{DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY}} - apparently in HDFS-6181. I know the name was obsolete, but anyone who has compiled code using that reference - rather than cutting and pasting in the string - is going to find their code doesn't work. More subtly: that will lead to a link exception when trying to run that code on a 2.5+ cluster. This is a regression: the old names need to go back in, even if they refer to the new names and are marked as deprecated. -- This message was sent by Atlassian JIRA (v6.2#6252)
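The remedy the reporter asks for (restore the old names as deprecated aliases of the new ones) can be sketched as follows. This is an illustrative stand-in, not the actual Hadoop source: the class name and the assumed replacement key `dfs.namenode.kerberos.principal` are here only to show the pattern.

```java
// Sketch: a removed public constant restored as a deprecated alias.
// Code compiled against the old field name keeps compiling and carries
// the same string value as the new key, with only a deprecation warning.
public class DFSConfigKeysSketch {
    // New, preferred key name (assumed here for illustration).
    public static final String DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY =
        "dfs.namenode.kerberos.principal";

    /** @deprecated use {@link #DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY} instead. */
    @Deprecated
    public static final String DFS_NAMENODE_USER_NAME_KEY =
        DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY;
}
```

Call sites written against the old name, such as `conf.get(DFSConfigKeysSketch.DFS_NAMENODE_USER_NAME_KEY)`, then resolve to the same configuration key as the new constant.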
[jira] [Commented] (HDFS-6558) Missing '\n' in the description of dfsadmin -rollingUpgrade
[ https://issues.apache.org/jira/browse/HDFS-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14047941#comment-14047941 ] Hudson commented on HDFS-6558: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5801 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5801/]) HDFS-6558. Missing newline in the description of dfsadmin -rollingUpgrade. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606855) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java Missing '\n' in the description of dfsadmin -rollingUpgrade --- Key: HDFS-6558 URL: https://issues.apache.org/jira/browse/HDFS-6558 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Chen He Priority: Trivial Labels: newbie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6558.patch In DFSAdmin.java, '\n' should be added at the end of the line {code} +prepare: prepare a new rolling upgrade. {code} to clean up the following help message. {code} $ hdfs dfsadmin -help rollingUpgrade -rollingUpgrade [query|prepare|finalize]: query: query the current rolling upgrade status. prepare: prepare a new rolling upgrade. finalize: finalize the current rolling upgrade. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6591) while loop is executed tens of thousands of times in Hedged Read
[ https://issues.apache.org/jira/browse/HDFS-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048157#comment-14048157 ] Hudson commented on HDFS-6591: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5802 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5802/]) HDFS-6591. while loop is executed tens of thousands of times in Hedged Read. Contributed by Liang Xie. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606927) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java while loop is executed tens of thousands of times in Hedged Read -- Key: HDFS-6591 URL: https://issues.apache.org/jira/browse/HDFS-6591 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Reporter: LiuLei Assignee: Liang Xie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6591-v2.txt, HDFS-6591-v3.txt, HDFS-6591-v4.txt, HDFS-6591.txt, LoopTooManyTimesTestCase.patch I downloaded the hadoop-2.4.1-rc1 code from http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1/ and tested Hedged Read. I found that the while loop in the hedgedFetchBlockByteRange method is executed tens of thousands of times. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6558) Missing '\n' in the description of dfsadmin -rollingUpgrade
[ https://issues.apache.org/jira/browse/HDFS-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048768#comment-14048768 ] Hudson commented on HDFS-6558: -- FAILURE: Integrated in Hadoop-Yarn-trunk #600 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/600/]) HDFS-6558. Missing newline in the description of dfsadmin -rollingUpgrade. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606855) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java Missing '\n' in the description of dfsadmin -rollingUpgrade --- Key: HDFS-6558 URL: https://issues.apache.org/jira/browse/HDFS-6558 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Chen He Priority: Trivial Labels: newbie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6558.patch In DFSAdmin.java, '\n' should be added at the end of the line {code} +prepare: prepare a new rolling upgrade. {code} to clean up the following help message. {code} $ hdfs dfsadmin -help rollingUpgrade -rollingUpgrade [query|prepare|finalize]: query: query the current rolling upgrade status. prepare: prepare a new rolling upgrade. finalize: finalize the current rolling upgrade. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6591) while loop is executed tens of thousands of times in Hedged Read
[ https://issues.apache.org/jira/browse/HDFS-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048767#comment-14048767 ] Hudson commented on HDFS-6591: -- FAILURE: Integrated in Hadoop-Yarn-trunk #600 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/600/]) HDFS-6591. while loop is executed tens of thousands of times in Hedged Read. Contributed by Liang Xie. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606927) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java while loop is executed tens of thousands of times in Hedged Read -- Key: HDFS-6591 URL: https://issues.apache.org/jira/browse/HDFS-6591 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Reporter: LiuLei Assignee: Liang Xie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6591-v2.txt, HDFS-6591-v3.txt, HDFS-6591-v4.txt, HDFS-6591.txt, LoopTooManyTimesTestCase.patch I downloaded the hadoop-2.4.1-rc1 code from http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1/ and tested Hedged Read. I found that the while loop in the hedgedFetchBlockByteRange method is executed tens of thousands of times. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6558) Missing '\n' in the description of dfsadmin -rollingUpgrade
[ https://issues.apache.org/jira/browse/HDFS-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048860#comment-14048860 ] Hudson commented on HDFS-6558: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1791 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1791/]) HDFS-6558. Missing newline in the description of dfsadmin -rollingUpgrade. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606855) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java Missing '\n' in the description of dfsadmin -rollingUpgrade --- Key: HDFS-6558 URL: https://issues.apache.org/jira/browse/HDFS-6558 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Chen He Priority: Trivial Labels: newbie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6558.patch In DFSAdmin.java, '\n' should be added at the end of the line {code} +prepare: prepare a new rolling upgrade. {code} to clean up the following help message. {code} $ hdfs dfsadmin -help rollingUpgrade -rollingUpgrade [query|prepare|finalize]: query: query the current rolling upgrade status. prepare: prepare a new rolling upgrade. finalize: finalize the current rolling upgrade. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6591) while loop is executed tens of thousands of times in Hedged Read
[ https://issues.apache.org/jira/browse/HDFS-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048859#comment-14048859 ] Hudson commented on HDFS-6591: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1791 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1791/]) HDFS-6591. while loop is executed tens of thousands of times in Hedged Read. Contributed by Liang Xie. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606927) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java while loop is executed tens of thousands of times in Hedged Read -- Key: HDFS-6591 URL: https://issues.apache.org/jira/browse/HDFS-6591 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Reporter: LiuLei Assignee: Liang Xie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6591-v2.txt, HDFS-6591-v3.txt, HDFS-6591-v4.txt, HDFS-6591.txt, LoopTooManyTimesTestCase.patch I downloaded the hadoop-2.4.1-rc1 code from http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1/ and tested Hedged Read. I found that the while loop in the hedgedFetchBlockByteRange method is executed tens of thousands of times. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6558) Missing '\n' in the description of dfsadmin -rollingUpgrade
[ https://issues.apache.org/jira/browse/HDFS-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048954#comment-14048954 ] Hudson commented on HDFS-6558: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1818 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1818/]) HDFS-6558. Missing newline in the description of dfsadmin -rollingUpgrade. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606855) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java Missing '\n' in the description of dfsadmin -rollingUpgrade --- Key: HDFS-6558 URL: https://issues.apache.org/jira/browse/HDFS-6558 Project: Hadoop HDFS Issue Type: Improvement Affects Versions: 2.4.0 Reporter: Akira AJISAKA Assignee: Chen He Priority: Trivial Labels: newbie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6558.patch In DFSAdmin.java, '\n' should be added at the end of the line {code} +prepare: prepare a new rolling upgrade. {code} to clean up the following help message. {code} $ hdfs dfsadmin -help rollingUpgrade -rollingUpgrade [query|prepare|finalize]: query: query the current rolling upgrade status. prepare: prepare a new rolling upgrade. finalize: finalize the current rolling upgrade. {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
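The one-character nature of the HDFS-6558 fix can be shown with a minimal sketch (a hypothetical helper, not the actual DFSAdmin code): every line of a concatenated help message needs its own trailing '\n', or the next option is glued onto it in the console output.

```java
public class RollingUpgradeHelpSketch {
    // Builds the -rollingUpgrade help text. The bug was a missing '\n'
    // after the "prepare" line; with it restored, each option prints on
    // its own line.
    public static String help() {
        return "-rollingUpgrade [<query|prepare|finalize>]:\n"
            + "     query: query the current rolling upgrade status.\n"
            + "   prepare: prepare a new rolling upgrade.\n" // this '\n' was the missing one
            + "  finalize: finalize the current rolling upgrade.\n";
    }
}
```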
[jira] [Commented] (HDFS-6591) while loop is executed tens of thousands of times in Hedged Read
[ https://issues.apache.org/jira/browse/HDFS-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14048953#comment-14048953 ] Hudson commented on HDFS-6591: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1818 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1818/]) HDFS-6591. while loop is executed tens of thousands of times in Hedged Read. Contributed by Liang Xie. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1606927) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java while loop is executed tens of thousands of times in Hedged Read -- Key: HDFS-6591 URL: https://issues.apache.org/jira/browse/HDFS-6591 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Reporter: LiuLei Assignee: Liang Xie Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6591-v2.txt, HDFS-6591-v3.txt, HDFS-6591-v4.txt, HDFS-6591.txt, LoopTooManyTimesTestCase.patch I downloaded the hadoop-2.4.1-rc1 code from http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1/ and tested Hedged Read. I found that the while loop in the hedgedFetchBlockByteRange method is executed tens of thousands of times. -- This message was sent by Atlassian JIRA (v6.2#6252)
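A loop like the one reported against hedged reads can spin when it polls the completion service without blocking. The sketch below is illustrative only (not the actual DFSInputStream code): it shows the timed, blocking poll that bounds how often the loop body runs.

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class HedgedWaitSketch {
    // Waits for the first of several hedged requests to finish. A timed,
    // blocking poll parks the thread for up to pollMillis per pass, so the
    // loop runs at most once per timeout instead of tens of thousands of
    // times in a busy spin.
    public static <T> Future<T> takeFirst(CompletionService<T> cs, long pollMillis)
            throws InterruptedException {
        while (true) {
            Future<T> done = cs.poll(pollMillis, TimeUnit.MILLISECONDS);
            if (done != null) {
                return done; // first request to complete wins
            }
            // Timed out: nothing finished yet; loop and wait again.
        }
    }
}
```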
[jira] [Commented] (HDFS-6603) Add XAttr with ACL test
[ https://issues.apache.org/jira/browse/HDFS-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049585#comment-14049585 ] Hudson commented on HDFS-6603: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5807 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5807/]) HDFS-6603. Add XAttr with ACL test. Contributed by Stephen Chu. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607239) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithXAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestXAttrWithSnapshot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java Add XAttr with ACL test --- Key: HDFS-6603 URL: https://issues.apache.org/jira/browse/HDFS-6603 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 3.0.0, 2.5.0 Reporter: Stephen Chu Assignee: Stephen Chu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6603.001.patch, HDFS-6603.002.patch We should verify that the XAttr operations adhere to extended ACL permissions. In this JIRA we will add a test for this once the XAttr permissions have been refined (HDFS-6556). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6603) Add XAttr with ACL test
[ https://issues.apache.org/jira/browse/HDFS-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049832#comment-14049832 ] Hudson commented on HDFS-6603: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #601 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/601/]) HDFS-6603. Add XAttr with ACL test. Contributed by Stephen Chu. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607239) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithXAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestXAttrWithSnapshot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java Add XAttr with ACL test --- Key: HDFS-6603 URL: https://issues.apache.org/jira/browse/HDFS-6603 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 3.0.0, 2.5.0 Reporter: Stephen Chu Assignee: Stephen Chu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6603.001.patch, HDFS-6603.002.patch We should verify that the XAttr operations adhere to extended ACL permissions. In this JIRA we will add a test for this once the XAttr permissions have been refined (HDFS-6556). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6603) Add XAttr with ACL test
[ https://issues.apache.org/jira/browse/HDFS-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049905#comment-14049905 ] Hudson commented on HDFS-6603: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1819 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1819/]) HDFS-6603. Add XAttr with ACL test. Contributed by Stephen Chu. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607239) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithXAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestXAttrWithSnapshot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java Add XAttr with ACL test --- Key: HDFS-6603 URL: https://issues.apache.org/jira/browse/HDFS-6603 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 3.0.0, 2.5.0 Reporter: Stephen Chu Assignee: Stephen Chu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6603.001.patch, HDFS-6603.002.patch We should verify that the XAttr operations adhere to extended ACL permissions. In this JIRA we will add a test for this once the XAttr permissions have been refined (HDFS-6556). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6603) Add XAttr with ACL test
[ https://issues.apache.org/jira/browse/HDFS-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049973#comment-14049973 ] Hudson commented on HDFS-6603: -- SUCCESS: Integrated in Hadoop-Hdfs-trunk #1792 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1792/]) HDFS-6603. Add XAttr with ACL test. Contributed by Stephen Chu. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607239) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithXAttr.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestXAttrWithSnapshot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java Add XAttr with ACL test --- Key: HDFS-6603 URL: https://issues.apache.org/jira/browse/HDFS-6603 Project: Hadoop HDFS Issue Type: Improvement Components: test Affects Versions: 3.0.0, 2.5.0 Reporter: Stephen Chu Assignee: Stephen Chu Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6603.001.patch, HDFS-6603.002.patch We should verify that the XAttr operations adhere to extended ACL permissions. In this JIRA we will add a test for this once the XAttr permissions have been refined (HDFS-6556). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6612) MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID
[ https://issues.apache.org/jira/browse/HDFS-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050561#comment-14050561 ] Hudson commented on HDFS-6612: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5808 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5808/]) HDFS-6612. MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID. Contributed by Juan Yu. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607442) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID - Key: HDFS-6612 URL: https://issues.apache.org/jira/browse/HDFS-6612 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Juan Yu Assignee: Juan Yu Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6612.001.patch MiniDFSNNTopology#simpleFederatedTopology(int numNameservices) always hardcodes the nameservice ID and doesn't honor the configuration setting. This causes some unit tests that depend on the nameservice configuration to behave incorrectly. We should add a MiniDFSNNTopology#simpleFederatedTopology(Configuration conf) that uses the value from the configuration. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6614) shorten TestPread run time with a smaller retry timeout setting
[ https://issues.apache.org/jira/browse/HDFS-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050575#comment-14050575 ] Hudson commented on HDFS-6614: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5809 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5809/]) HDFS-6614. shorten TestPread run time with a smaller retry timeout setting. Contributed by Liang Xie. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607447) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java shorten TestPread run time with a smaller retry timeout setting --- Key: HDFS-6614 URL: https://issues.apache.org/jira/browse/HDFS-6614 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 3.0.0, 2.5.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6614.txt Just noticed logs like this from TestPread: DFS chooseDataNode: got # 3 IOException, will wait for 9909.622860072854 msec, so I tried to set a smaller retry window value. Before patch: T E S T S --- Running org.apache.hadoop.hdfs.TestPread Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.812 sec - in org.apache.hadoop.hdfs.TestPread After the change: T E S T S --- Running org.apache.hadoop.hdfs.TestPread Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 131.724 sec - in org.apache.hadoop.hdfs.TestPread -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6604) The short-circuit cache doesn't correctly time out replicas that haven't been used in a while
[ https://issues.apache.org/jira/browse/HDFS-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050638#comment-14050638 ] Hudson commented on HDFS-6604: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5810 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5810/]) HDFS-6604. The short-circuit cache doesn't correctly time out replicas that haven't been used in a while (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607456) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java The short-circuit cache doesn't correctly time out replicas that haven't been used in a while - Key: HDFS-6604 URL: https://issues.apache.org/jira/browse/HDFS-6604 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Environment: Centos 6.5 and distribution Hortonworks Data Platform v2.1 Reporter: Giuseppe Reina Assignee: Colin Patrick McCabe Priority: Critical Fix For: 2.5.0 Attachments: HDFS-6604.001.patch, HDFS-6604.002.patch When HDFS short-circuit reads are enabled, the file descriptors of deleted HDFS blocks are kept open until the cache is full. This prevents the operating system from freeing the space on disk. More details in the [mailing list thread|http://mail-archives.apache.org/mod_mbox/hbase-user/201406.mbox/%3CCAPjB-CA3RV=slhuhwue5cv3pc4+rffz10-tkydbfs9rt2de...@mail.gmail.com%3E] -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6610) TestShortCircuitLocalRead tests sometimes timeout on slow machines
[ https://issues.apache.org/jira/browse/HDFS-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050861#comment-14050861 ] Hudson commented on HDFS-6610: -- SUCCESS: Integrated in Hadoop-trunk-Commit #5814 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5814/]) HDFS-6610. TestShortCircuitLocalRead tests sometimes timeout on slow machines. Contributed by Charles Lamb. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607496) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java TestShortCircuitLocalRead tests sometimes timeout on slow machines -- Key: HDFS-6610 URL: https://issues.apache.org/jira/browse/HDFS-6610 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.1 Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6610.001.patch Some of the tests in TestShortCircuitLocalRead sometimes time out on slow machines. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6614) shorten TestPread run time with a smaller retry timeout setting
[ https://issues.apache.org/jira/browse/HDFS-6614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051340#comment-14051340 ] Hudson commented on HDFS-6614: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #602 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/602/]) HDFS-6614. shorten TestPread run time with a smaller retry timeout setting. Contributed by Liang Xie. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607447) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java shorten TestPread run time with a smaller retry timeout setting --- Key: HDFS-6614 URL: https://issues.apache.org/jira/browse/HDFS-6614 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 3.0.0, 2.5.0 Reporter: Liang Xie Assignee: Liang Xie Priority: Minor Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6614.txt Just noticed logs like this from TestPread: DFS chooseDataNode: got # 3 IOException, will wait for 9909.622860072854 msec, so I tried to set a smaller retry window value. Before patch: T E S T S --- Running org.apache.hadoop.hdfs.TestPread Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.812 sec - in org.apache.hadoop.hdfs.TestPread After the change: T E S T S --- Running org.apache.hadoop.hdfs.TestPread Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 131.724 sec - in org.apache.hadoop.hdfs.TestPread -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6604) The short-circuit cache doesn't correctly time out replicas that haven't been used in a while
[ https://issues.apache.org/jira/browse/HDFS-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051342#comment-14051342 ] Hudson commented on HDFS-6604: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #602 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/602/]) HDFS-6604. The short-circuit cache doesn't correctly time out replicas that haven't been used in a while (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607456) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java The short-circuit cache doesn't correctly time out replicas that haven't been used in a while - Key: HDFS-6604 URL: https://issues.apache.org/jira/browse/HDFS-6604 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Environment: Centos 6.5 and distribution Hortonworks Data Platform v2.1 Reporter: Giuseppe Reina Assignee: Colin Patrick McCabe Priority: Critical Fix For: 2.5.0 Attachments: HDFS-6604.001.patch, HDFS-6604.002.patch When HDFS short-circuit reads are enabled, the file descriptors of deleted HDFS blocks are kept open until the cache is full. This prevents the operating system from freeing the space on disk. More details in the [mailing list thread|http://mail-archives.apache.org/mod_mbox/hbase-user/201406.mbox/%3CCAPjB-CA3RV=slhuhwue5cv3pc4+rffz10-tkydbfs9rt2de...@mail.gmail.com%3E] -- This message was sent by Atlassian JIRA (v6.2#6252)
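The HDFS-6604 symptom (entries evicted only on capacity, never on age, so descriptors of deleted blocks stay open) suggests the shape of the fix: an age-based sweep. Below is a minimal self-contained sketch with hypothetical names, not the real ShortCircuitCache internals.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class IdleEvictionSketch {
    // blockId -> last-use timestamp (ms). In a real cache the value would
    // own file descriptors that must be closed on eviction.
    private final Map<String, Long> lastUsed = new HashMap<>();

    public void touch(String blockId, long nowMs) {
        lastUsed.put(blockId, nowMs);
    }

    // Drops every entry idle longer than maxIdleMs and returns how many
    // were evicted; closing an entry releases its descriptors, letting the
    // OS reclaim the deleted block's disk space.
    public int evictStale(long nowMs, long maxIdleMs) {
        int evicted = 0;
        Iterator<Map.Entry<String, Long>> it = lastUsed.entrySet().iterator();
        while (it.hasNext()) {
            if (nowMs - it.next().getValue() > maxIdleMs) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public int size() {
        return lastUsed.size();
    }
}
```

A background thread (or a check piggybacked on cache access) would call `evictStale` periodically so stale replicas are released even when the cache never fills.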
[jira] [Commented] (HDFS-6612) MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID
[ https://issues.apache.org/jira/browse/HDFS-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051338#comment-14051338 ] Hudson commented on HDFS-6612: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #602 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/602/]) HDFS-6612. MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID. Contributed by Juan Yu. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607442) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID - Key: HDFS-6612 URL: https://issues.apache.org/jira/browse/HDFS-6612 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Juan Yu Assignee: Juan Yu Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6612.001.patch MiniDFSNNTopology#simpleFederatedTopology(int numNameservices) always hardcodes the nameservice ID and doesn't honor the configuration setting. This causes some unit tests that depend on the nameservice configuration to behave incorrectly. We should add a MiniDFSNNTopology#simpleFederatedTopology(Configuration conf) that uses the value from the configuration. -- This message was sent by Atlassian JIRA (v6.2#6252)
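The proposed overload can be sketched like this, using a plain `Map` as a stand-in for Hadoop's `Configuration` so the example is self-contained (the key name `dfs.nameservices` matches the real HDFS setting; everything else here is hypothetical, not the actual `MiniDFSNNTopology` code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class FederatedTopologySketch {
    // Hardcoded variant: invents generated IDs regardless of what the
    // test configured, which is the bug HDFS-6612 describes.
    public static List<String> simpleFederatedTopology(int numNameservices) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < numNameservices; i++) {
            ids.add("ns" + i);  // generated, ignores the configuration
        }
        return ids;
    }

    // Config-driven variant: honors whatever nameservice IDs the caller
    // put in the configuration, so tests and topology stay consistent.
    public static List<String> simpleFederatedTopology(Map<String, String> conf) {
        String nameservices = conf.getOrDefault("dfs.nameservices", "ns0");
        return Arrays.asList(nameservices.split(","));
    }

    public static void main(String[] args) {
        Map<String, String> conf = Map.of("dfs.nameservices", "nsA,nsB");
        System.out.println(simpleFederatedTopology(2));     // generated IDs
        System.out.println(simpleFederatedTopology(conf));  // configured IDs
    }
}
```

With the config-driven variant, a test that sets its own nameservice IDs gets a topology that matches them instead of silently diverging.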
[jira] [Commented] (HDFS-6610) TestShortCircuitLocalRead tests sometimes timeout on slow machines
[ https://issues.apache.org/jira/browse/HDFS-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051336#comment-14051336 ] Hudson commented on HDFS-6610: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #602 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/602/]) HDFS-6610. TestShortCircuitLocalRead tests sometimes timeout on slow machines. Contributed by Charles Lamb. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607496) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java TestShortCircuitLocalRead tests sometimes timeout on slow machines -- Key: HDFS-6610 URL: https://issues.apache.org/jira/browse/HDFS-6610 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.1 Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6610.001.patch Some of the tests in TestShortCircuitLocalRead sometimes time out on slow machines. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6610) TestShortCircuitLocalRead tests sometimes timeout on slow machines
[ https://issues.apache.org/jira/browse/HDFS-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051469#comment-14051469 ] Hudson commented on HDFS-6610: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1793 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1793/]) HDFS-6610. TestShortCircuitLocalRead tests sometimes timeout on slow machines. Contributed by Charles Lamb. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607496) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitLocalRead.java TestShortCircuitLocalRead tests sometimes timeout on slow machines -- Key: HDFS-6610 URL: https://issues.apache.org/jira/browse/HDFS-6610 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.4.1 Reporter: Charles Lamb Assignee: Charles Lamb Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6610.001.patch Some of the tests in TestShortCircuitLocalRead sometimes time out on slow machines. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6612) MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID
[ https://issues.apache.org/jira/browse/HDFS-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051471#comment-14051471 ] Hudson commented on HDFS-6612: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1793 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1793/]) HDFS-6612. MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID. Contributed by Juan Yu. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607442) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java MiniDFSNNTopology#simpleFederatedTopology(int) always hardcode nameservice ID - Key: HDFS-6612 URL: https://issues.apache.org/jira/browse/HDFS-6612 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.5.0 Reporter: Juan Yu Assignee: Juan Yu Priority: Minor Fix For: 2.5.0 Attachments: HDFS-6612.001.patch MiniDFSNNTopology#simpleFederatedTopology(int numNameservices) always hardcodes the nameservice ID and doesn't honor the configuration setting. This causes some unit tests that depend on the nameservice configuration to behave incorrectly. We should add a MiniDFSNNTopology#simpleFederatedTopology(Configuration conf) that uses the value from the configuration. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6604) The short-circuit cache doesn't correctly time out replicas that haven't been used in a while
[ https://issues.apache.org/jira/browse/HDFS-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14051475#comment-14051475 ] Hudson commented on HDFS-6604: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1793 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1793/]) HDFS-6604. The short-circuit cache doesn't correctly time out replicas that haven't been used in a while (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1607456) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/shortcircuit/TestShortCircuitCache.java The short-circuit cache doesn't correctly time out replicas that haven't been used in a while - Key: HDFS-6604 URL: https://issues.apache.org/jira/browse/HDFS-6604 Project: Hadoop HDFS Issue Type: Bug Components: hdfs-client Affects Versions: 2.4.0 Environment: CentOS 6.5 with the Hortonworks Data Platform v2.1 distribution Reporter: Giuseppe Reina Assignee: Colin Patrick McCabe Priority: Critical Fix For: 2.5.0 Attachments: HDFS-6604.001.patch, HDFS-6604.002.patch When HDFS short-circuit reads are enabled, the file descriptors of deleted HDFS blocks are kept open until the cache is full. This prevents the operating system from freeing the space on disk. More details in the [mailing list thread|http://mail-archives.apache.org/mod_mbox/hbase-user/201406.mbox/%3CCAPjB-CA3RV=slhuhwue5cv3pc4+rffz10-tkydbfs9rt2de...@mail.gmail.com%3E] -- This message was sent by Atlassian JIRA (v6.2#6252)