[jira] [Commented] (HDFS-6189) Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon.
[ https://issues.apache.org/jira/browse/HDFS-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13960999#comment-13960999 ] Chris Nauroth commented on HDFS-6189: - [~szetszwo], thank you for reviewing and committing! Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. -- Key: HDFS-6189 URL: https://issues.apache.org/jira/browse/HDFS-6189 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Fix For: 2.4.1 Attachments: HDFS-6189.1.patch Some HDFS tests are attempting to use a test root path based on the test.root.dir that we've defined for use on the local file system. This doesn't work on Windows because of the drive spec, i.e. C:. HDFS rejects paths containing a colon as invalid. -- This message was sent by Atlassian JIRA (v6.2#6252)
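The failure mode described above can be sketched as follows. On Windows, `test.root.dir` resolves to something like `C:\tmp\build\test\data`, and the colon in the drive spec makes it invalid as an HDFS path. The helper below is a hypothetical illustration of stripping the drive spec before reusing the local test root inside HDFS; it is not the code from the actual patch.

```java
// Illustrative sketch only: shows why a local test root such as
// "C:\tmp\build\test\data" is rejected by HDFS (the ':' in the drive spec)
// and one way to derive a colon-free path from it. stripDriveSpec is a
// hypothetical helper, not part of the Hadoop test helpers.
public class TestRootPathSketch {

    /** Remove a leading Windows drive spec (e.g. "C:") and normalize slashes. */
    static String stripDriveSpec(String localPath) {
        String p = localPath.replace('\\', '/');
        if (p.length() >= 2 && Character.isLetter(p.charAt(0)) && p.charAt(1) == ':') {
            p = p.substring(2); // drop "C:" so the path is valid in HDFS
        }
        return p.startsWith("/") ? p : "/" + p;
    }

    public static void main(String[] args) {
        System.out.println(stripDriveSpec("C:\\tmp\\build\\test\\data"));
        System.out.println(stripDriveSpec("/tmp/build/test/data"));
    }
}
```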
[jira] [Commented] (HDFS-6189) Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon.
[ https://issues.apache.org/jira/browse/HDFS-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961053#comment-13961053 ] Hudson commented on HDFS-6189: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #530 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/530/]) Commit the hadoop-common part of HDFS-6189. (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584767) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestHelper.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java HDFS-6189. Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. Contributed by cnauroth (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584763) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsCreateMkdir.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsPermission.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSymlinkHdfsDisable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsAtHdfsRoot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsHdfs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. -- Key: HDFS-6189 URL: https://issues.apache.org/jira/browse/HDFS-6189 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Fix For: 2.4.1 Attachments: HDFS-6189.1.patch Some HDFS tests are attempting to use a test root path based on the test.root.dir that we've defined for use on the local file system. This doesn't work on Windows because of the drive spec, i.e. C:. HDFS rejects paths containing a colon as invalid. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6159) TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success
[ https://issues.apache.org/jira/browse/HDFS-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961054#comment-13961054 ] Hudson commented on HDFS-6159: -- SUCCESS: Integrated in Hadoop-Yarn-trunk #530 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/530/]) HDFS-6159. TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584900) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success -- Key: HDFS-6159 URL: https://issues.apache.org/jira/browse/HDFS-6159 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.3.0 Reporter: Chen He Assignee: Chen He Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6159-v2.patch, HDFS-6159-v2.patch, HDFS-6159.patch, logs.txt TestBalancerWithNodeGroup.testBalancerWithNodeGroup will report a false failure if one or more data blocks are lost after the balancer successfully finishes. -- This message was sent by Atlassian JIRA (v6.2#6252)
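The guard the fix implies can be sketched as follows: before asserting the cluster is balanced, check whether any expected blocks went missing, so a lost block is surfaced as its own condition rather than as a spurious balance-assertion failure. All names below are illustrative, not the actual TestBalancerWithNodeGroup code.

```java
// Hedged sketch: detect blocks that are expected but absent after a balancer
// run. In a test, a non-empty result would mean the balance assertion should
// be skipped (or the missing blocks reported), instead of failing the test.
import java.util.List;
import java.util.stream.Collectors;

public class BalancerCheckSketch {

    /** Returns the blocks that are expected but absent after the balancer run. */
    static List<String> missingBlocks(List<String> expected, List<String> present) {
        return expected.stream()
                .filter(b -> !present.contains(b))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> expected = List.of("blk_1", "blk_2", "blk_3");
        List<String> present = List.of("blk_1", "blk_3");
        List<String> missing = missingBlocks(expected, present);
        if (!missing.isEmpty()) {
            // Report the lost blocks instead of asserting balance.
            System.out.println("blocks lost during balancing: " + missing);
        }
    }
}
```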
[jira] [Commented] (HDFS-6159) TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success
[ https://issues.apache.org/jira/browse/HDFS-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961086#comment-13961086 ] Hudson commented on HDFS-6159: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1748 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1748/]) HDFS-6159. TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584900) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success -- Key: HDFS-6159 URL: https://issues.apache.org/jira/browse/HDFS-6159 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.3.0 Reporter: Chen He Assignee: Chen He Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6159-v2.patch, HDFS-6159-v2.patch, HDFS-6159.patch, logs.txt TestBalancerWithNodeGroup.testBalancerWithNodeGroup will report a false failure if one or more data blocks are lost after the balancer successfully finishes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6189) Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon.
[ https://issues.apache.org/jira/browse/HDFS-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961085#comment-13961085 ] Hudson commented on HDFS-6189: -- FAILURE: Integrated in Hadoop-Mapreduce-trunk #1748 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1748/]) Commit the hadoop-common part of HDFS-6189. (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584767) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestHelper.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java HDFS-6189. Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. Contributed by cnauroth (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584763) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsCreateMkdir.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsPermission.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSymlinkHdfsDisable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsAtHdfsRoot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsHdfs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. -- Key: HDFS-6189 URL: https://issues.apache.org/jira/browse/HDFS-6189 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Fix For: 2.4.1 Attachments: HDFS-6189.1.patch Some HDFS tests are attempting to use a test root path based on the test.root.dir that we've defined for use on the local file system. This doesn't work on Windows because of the drive spec, i.e. C:. HDFS rejects paths containing a colon as invalid. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6159) TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success
[ https://issues.apache.org/jira/browse/HDFS-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961094#comment-13961094 ] Hudson commented on HDFS-6159: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1722 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1722/]) HDFS-6159. TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success. Contributed by Chen He. (kihwal: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584900) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block missing after balancer success -- Key: HDFS-6159 URL: https://issues.apache.org/jira/browse/HDFS-6159 Project: Hadoop HDFS Issue Type: Bug Components: test Affects Versions: 2.3.0 Reporter: Chen He Assignee: Chen He Fix For: 3.0.0, 2.5.0 Attachments: HDFS-6159-v2.patch, HDFS-6159-v2.patch, HDFS-6159.patch, logs.txt TestBalancerWithNodeGroup.testBalancerWithNodeGroup will report a false failure if one or more data blocks are lost after the balancer successfully finishes. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6189) Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon.
[ https://issues.apache.org/jira/browse/HDFS-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961093#comment-13961093 ] Hudson commented on HDFS-6189: -- FAILURE: Integrated in Hadoop-Hdfs-trunk #1722 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1722/]) Commit the hadoop-common part of HDFS-6189. (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584767) * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestHelper.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java HDFS-6189. Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. Contributed by cnauroth (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584763) * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsCreateMkdir.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsPermission.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSymlinkHdfsDisable.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java * 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsAtHdfsRoot.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsHdfs.java * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java Multiple HDFS tests fail on Windows attempting to use a test root path containing a colon. -- Key: HDFS-6189 URL: https://issues.apache.org/jira/browse/HDFS-6189 Project: Hadoop HDFS Issue Type: Test Components: test Affects Versions: 3.0.0, 2.4.0 Reporter: Chris Nauroth Assignee: Chris Nauroth Fix For: 2.4.1 Attachments: HDFS-6189.1.patch Some HDFS tests are attempting to use a test root path based on the test.root.dir that we've defined for use on the local file system. This doesn't work on Windows because of the drive spec, i.e. C:. HDFS rejects paths containing a colon as invalid. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6143) (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961203#comment-13961203 ] Haohui Mai commented on HDFS-6143: -- Can you please separate the patch for webhdfs and hftp? Hftp has been deprecated, so that part does not necessarily need to be a blocker. Otherwise the patch becomes difficult to review. Thanks. (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths --- Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
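The behavior the issue asks for can be sketched as follows: contact the server at open() time so a missing path raises FileNotFoundException immediately, mirroring POSIX ENOENT, instead of deferring the error to the first read(). The class and the in-memory path set below are illustrative stand-ins, not the WebHdfsFileSystem code.

```java
// Hedged sketch of fail-fast open semantics. The serverPaths set stands in
// for a server round trip (e.g. a file-status check); the real clients would
// issue an HTTP request here instead.
import java.io.FileNotFoundException;
import java.util.Set;

public class EagerOpenSketch {

    private final Set<String> serverPaths; // stand-in for the remote namespace

    EagerOpenSketch(Set<String> serverPaths) {
        this.serverPaths = serverPaths;
    }

    /** Fail on open, rather than on the first read, if the path is absent. */
    void open(String path) throws FileNotFoundException {
        if (!serverPaths.contains(path)) {
            throw new FileNotFoundException("File does not exist: " + path);
        }
        // ...otherwise proceed to set up the ranged-read stream...
    }

    public static void main(String[] args) {
        EagerOpenSketch fs = new EagerOpenSketch(Set.of("/user/data/part-0"));
        try {
            fs.open("/user/data/does-not-exist");
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```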
[jira] [Commented] (HDFS-5477) Block manager as a service
[ https://issues.apache.org/jira/browse/HDFS-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961207#comment-13961207 ] Edward Bortnikov commented on HDFS-5477: Working on a new design doc following our fruitful discussions at Hadoop Summit - will post in about a week. Stay tuned ... Block manager as a service -- Key: HDFS-5477 URL: https://issues.apache.org/jira/browse/HDFS-5477 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Affects Versions: 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Attachments: Block Manager as a Service - Implementation decisions.pdf, Proposal.pdf, Proposal.pdf, Remote BM.pdf, Standalone BM.pdf, Standalone BM.pdf, patches.tar.gz The block manager needs to evolve towards having the ability to run as a standalone service to improve NN vertical and horizontal scalability. The goal is reducing the memory footprint of the NN proper to support larger namespaces, and improve overall performance by decoupling the block manager from the namespace and its lock. Ideally, a distinct BM will be transparent to clients and DNs. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-5477) Block manager as a service
[ https://issues.apache.org/jira/browse/HDFS-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961208#comment-13961208 ] Amir Langer commented on HDFS-5477: --- I am on vacation until April 20th with no access to email. I will only be able to reply when I'm back. Thank you Block manager as a service -- Key: HDFS-5477 URL: https://issues.apache.org/jira/browse/HDFS-5477 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Affects Versions: 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Attachments: Block Manager as a Service - Implementation decisions.pdf, Proposal.pdf, Proposal.pdf, Remote BM.pdf, Standalone BM.pdf, Standalone BM.pdf, patches.tar.gz The block manager needs to evolve towards having the ability to run as a standalone service to improve NN vertical and horizontal scalability. The goal is reducing the memory footprint of the NN proper to support larger namespaces, and improve overall performance by decoupling the block manager from the namespace and its lock. Ideally, a distinct BM will be transparent to clients and DNs. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HDFS-6181) Fix the wrong property names in NFS user guide
[ https://issues.apache.org/jira/browse/HDFS-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Li updated HDFS-6181: - Attachment: HDFS-6181.003.patch Fix the wrong property names in NFS user guide -- Key: HDFS-6181 URL: https://issues.apache.org/jira/browse/HDFS-6181 Project: Hadoop HDFS Issue Type: Bug Components: documentation, nfs Reporter: Brandon Li Assignee: Brandon Li Priority: Trivial Attachments: HDFS-6181.002.patch, HDFS-6181.003.patch, HDFS-6181.patch A couple property names are wrong in the NFS user guide, and should be fixed as the following:
{noformat}
<property>
-  <name>dfs.nfsgateway.keytab.file</name>
+  <name>dfs.nfs.keytab.file</name>
   <value>/etc/hadoop/conf/nfsserver.keytab</value> <!-- path to the nfs gateway keytab -->
</property>
<property>
-  <name>dfs.nfsgateway.kerberos.principal</name>
+  <name>dfs.nfs.kerberos.principal</name>
   <value>nfsserver/_h...@your-realm.com</value>
</property>
{noformat}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HDFS-6181) Fix the wrong property names in NFS user guide
[ https://issues.apache.org/jira/browse/HDFS-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Li updated HDFS-6181: - Attachment: (was: HDFS-6181.003.patch) Fix the wrong property names in NFS user guide -- Key: HDFS-6181 URL: https://issues.apache.org/jira/browse/HDFS-6181 Project: Hadoop HDFS Issue Type: Bug Components: documentation, nfs Reporter: Brandon Li Assignee: Brandon Li Priority: Trivial Attachments: HDFS-6181.002.patch, HDFS-6181.003.patch, HDFS-6181.patch A couple property names are wrong in the NFS user guide, and should be fixed as the following:
{noformat}
<property>
-  <name>dfs.nfsgateway.keytab.file</name>
+  <name>dfs.nfs.keytab.file</name>
   <value>/etc/hadoop/conf/nfsserver.keytab</value> <!-- path to the nfs gateway keytab -->
</property>
<property>
-  <name>dfs.nfsgateway.kerberos.principal</name>
+  <name>dfs.nfs.kerberos.principal</name>
   <value>nfsserver/_h...@your-realm.com</value>
</property>
{noformat}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths
Gera Shegalov created HDFS-6193: --- Summary: HftpFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6193 URL: https://issues.apache.org/jira/browse/HDFS-6193 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gera Shegalov updated HDFS-6143: Summary: WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths (was: (WebHdfs|Hftp)FileSystem open should throw FileNotFoundException for non-existing paths) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gera Shegalov updated HDFS-6143: Attachment: HDFS-6143.v05.patch I split the patch as requested. However, I hope that the fix for both hftp and webhdfs will be merged. It's pretty straightforward because they share the logic of {{ByteRangeInputStream}}. WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961306#comment-13961306 ] Hadoop QA commented on HDFS-6143: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638892/HDFS-6143.v05.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6594//console This message is automatically generated. WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6181) Fix the wrong property names in NFS user guide
[ https://issues.apache.org/jira/browse/HDFS-6181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961310#comment-13961310 ] Hadoop QA commented on HDFS-6181: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638887/HDFS-6181.003.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 4 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-nfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6593//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6593//console This message is automatically generated. 
Fix the wrong property names in NFS user guide -- Key: HDFS-6181 URL: https://issues.apache.org/jira/browse/HDFS-6181 Project: Hadoop HDFS Issue Type: Bug Components: documentation, nfs Reporter: Brandon Li Assignee: Brandon Li Priority: Trivial Attachments: HDFS-6181.002.patch, HDFS-6181.003.patch, HDFS-6181.patch A couple property names are wrong in the NFS user guide, and should be fixed as the following:
{noformat}
<property>
-  <name>dfs.nfsgateway.keytab.file</name>
+  <name>dfs.nfs.keytab.file</name>
   <value>/etc/hadoop/conf/nfsserver.keytab</value> <!-- path to the nfs gateway keytab -->
</property>
<property>
-  <name>dfs.nfsgateway.kerberos.principal</name>
+  <name>dfs.nfs.kerberos.principal</name>
   <value>nfsserver/_h...@your-realm.com</value>
</property>
{noformat}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gera Shegalov updated HDFS-6143: Attachment: HDFS-6143.v06.patch The v05 patch did not apply because HDFS-5570 removed TestByteRangeInputStream. Was it intentional? WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961334#comment-13961334 ] Hadoop QA commented on HDFS-6143: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12638895/HDFS-6143.v06.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.hdfs.TestSafeMode {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6595//testReport/ Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6595//console This message is automatically generated. WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. 
- 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HDFS-6143) WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths
[ https://issues.apache.org/jira/browse/HDFS-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961339#comment-13961339 ] Gera Shegalov commented on HDFS-6143: - The test failure seems unrelated and was reported earlier in HDFS-6160. Rerunning org.apache.hadoop.hdfs.TestSafeMode on my laptop succeeded. WebHdfsFileSystem open should throw FileNotFoundException for non-existing paths Key: HDFS-6143 URL: https://issues.apache.org/jira/browse/HDFS-6143 Project: Hadoop HDFS Issue Type: Bug Affects Versions: 2.3.0 Reporter: Gera Shegalov Assignee: Gera Shegalov Priority: Blocker Attachments: HDFS-6143.v01.patch, HDFS-6143.v02.patch, HDFS-6143.v03.patch, HDFS-6143.v04.patch, HDFS-6143.v04.patch, HDFS-6143.v05.patch, HDFS-6143.v06.patch WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle non-existing paths. - 'open' does not really open anything, i.e., it does not contact the server, and therefore cannot discover FileNotFound; the error is deferred until the next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX you get ENOENT on open. [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java] is an example of code that's broken because of this. - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST instead of SC_NOT_FOUND for non-existing paths -- This message was sent by Atlassian JIRA (v6.2#6252)