[jira] [Assigned] (HADOOP-11767) Generic token API and representation
[ https://issues.apache.org/jira/browse/HADOOP-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiajia Li reassigned HADOOP-11767: -- Assignee: Jiajia Li (was: Kai Zheng) Generic token API and representation Key: HADOOP-11767 URL: https://issues.apache.org/jira/browse/HADOOP-11767 Project: Hadoop Common Issue Type: Sub-task Components: security Reporter: Kai Zheng Assignee: Jiajia Li This will abstract common token aspects and define a generic token interface and representation, named {{AuthToken}}. A JWT token implementation of this API will be provided separately in another issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
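The issue does not include the API itself; a minimal sketch of what such a generic token interface might look like follows. All names here ({{AuthToken}} aside) are illustrative assumptions, not the actual HADOOP-11767 interface.

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a generic token API in the spirit of the
// AuthToken abstraction described above; method names are illustrative.
interface AuthToken {
    String getSubject();                  // principal the token was issued to
    Map<String, Object> getAttributes();  // arbitrary claims carried by the token
    Date getExpiryTime();                 // absolute expiration time, or null

    default boolean isExpired(Date now) {
        Date expiry = getExpiryTime();
        return expiry != null && expiry.before(now);
    }
}

// Trivial in-memory implementation, used only to exercise the interface;
// a JWT-backed implementation would live behind the same contract.
class SimpleAuthToken implements AuthToken {
    private final String subject;
    private final Date expiry;
    private final Map<String, Object> attributes = new HashMap<>();

    SimpleAuthToken(String subject, Date expiry) {
        this.subject = subject;
        this.expiry = expiry;
    }

    public String getSubject() { return subject; }
    public Map<String, Object> getAttributes() { return attributes; }
    public Date getExpiryTime() { return expiry; }
}
```

The point of the abstraction is that callers depend only on the interface, so the JWT implementation planned in HADOOP-11768 can be swapped in without touching consumers.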
[jira] [Commented] (HADOOP-11768) A JWT token implementation of AuthToken API
[ https://issues.apache.org/jira/browse/HADOOP-11768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386133#comment-14386133 ] Jiajia Li commented on HADOOP-11768: I've discussed with [~drankye] offline and would like to take this jira. A JWT token implementation of AuthToken API --- Key: HADOOP-11768 URL: https://issues.apache.org/jira/browse/HADOOP-11768 Project: Hadoop Common Issue Type: Sub-task Components: security Reporter: Kai Zheng Assignee: Kai Zheng This is to provide a JWT token implementation of the {{AuthToken}} API, utilizing a third-party Java library for the support. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HADOOP-11768) A JWT token implementation of AuthToken API
[ https://issues.apache.org/jira/browse/HADOOP-11768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiajia Li reassigned HADOOP-11768: -- Assignee: Jiajia Li (was: Kai Zheng) A JWT token implementation of AuthToken API --- Key: HADOOP-11768 URL: https://issues.apache.org/jira/browse/HADOOP-11768 Project: Hadoop Common Issue Type: Sub-task Components: security Reporter: Kai Zheng Assignee: Jiajia Li This is to provide a JWT token implementation of the {{AuthToken}} API, utilizing a third-party Java library for the support. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HADOOP-11769) Pluggable token encoder, decoder and validator
[ https://issues.apache.org/jira/browse/HADOOP-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiajia Li reassigned HADOOP-11769: -- Assignee: Jiajia Li (was: Kai Zheng) Pluggable token encoder, decoder and validator -- Key: HADOOP-11769 URL: https://issues.apache.org/jira/browse/HADOOP-11769 Project: Hadoop Common Issue Type: Sub-task Components: security Reporter: Kai Zheng Assignee: Jiajia Li This is to define common token encoder, decoder and validator interfaces, covering token serialization and deserialization, encryption and decryption, signing and verifying, and expiration and audience checking. Based on these APIs, pluggable and configurable token encoders, decoders and validators will be implemented in other issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
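The encoder/decoder/validator split described above could be sketched roughly as follows. This is a hypothetical illustration with stand-in implementations (a Base64 pass-through in place of real signing/encryption, and an expiry-only validator), not the actual HADOOP-11769 API.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch of the pluggable encoder/decoder/validator split;
// interface and class names are illustrative only.
interface TokenEncoder { String encode(String payload); }
interface TokenDecoder { String decode(String encoded); }
interface TokenValidator { boolean validate(String payload, long nowMillis, long expiryMillis); }

// A do-nothing Base64 codec standing in for real serialization plus
// signing/encryption: encode and decode must round-trip the payload.
class Base64TokenCodec implements TokenEncoder, TokenDecoder {
    public String encode(String payload) {
        return Base64.getUrlEncoder().encodeToString(payload.getBytes(StandardCharsets.UTF_8));
    }
    public String decode(String encoded) {
        return new String(Base64.getUrlDecoder().decode(encoded), StandardCharsets.UTF_8);
    }
}

// A validator that checks only expiration; audience and signature checks
// would be additional pluggable validators behind the same interface.
class ExpiryValidator implements TokenValidator {
    public boolean validate(String payload, long nowMillis, long expiryMillis) {
        return nowMillis < expiryMillis;  // reject expired tokens
    }
}
```

Keeping these three concerns behind separate interfaces is what makes each of them independently configurable, as the issue intends.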
[jira] [Commented] (HADOOP-11767) Generic token API and representation
[ https://issues.apache.org/jira/browse/HADOOP-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386131#comment-14386131 ] Jiajia Li commented on HADOOP-11767: I've discussed with [~drankye] offline and would like to take this jira. Generic token API and representation Key: HADOOP-11767 URL: https://issues.apache.org/jira/browse/HADOOP-11767 Project: Hadoop Common Issue Type: Sub-task Components: security Reporter: Kai Zheng Assignee: Kai Zheng This will abstract common token aspects and define a generic token interface and representation, named {{AuthToken}}. A JWT token implementation of this API will be provided separately in another issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11769) Pluggable token encoder, decoder and validator
[ https://issues.apache.org/jira/browse/HADOOP-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386134#comment-14386134 ] Jiajia Li commented on HADOOP-11769: I've discussed with [~drankye] offline and would like to take this jira. Pluggable token encoder, decoder and validator -- Key: HADOOP-11769 URL: https://issues.apache.org/jira/browse/HADOOP-11769 Project: Hadoop Common Issue Type: Sub-task Components: security Reporter: Kai Zheng Assignee: Kai Zheng This is to define common token encoder, decoder and validator interfaces, covering token serialization and deserialization, encryption and decryption, signing and verifying, and expiration and audience checking. Based on these APIs, pluggable and configurable token encoders, decoders and validators will be implemented in other issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths
[ https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-8437: - Attachment: HADOOP-8437-003.patch getLocalPathForWrite is not throwing any exception for invalid paths Key: HADOOP-8437 URL: https://issues.apache.org/jira/browse/HADOOP-8437 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Attachments: HADOOP-8437-003.patch, HADOOP-8437.patch, HADOOP-8437_1.patch, HADOOP-8437_2.patch call dirAllocator.getLocalPathForWrite ( /InvalidPath, conf ); Here it does not throw any exception, but earlier versions used to throw one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths
[ https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-8437: - Status: Patch Available (was: Open) getLocalPathForWrite is not throwing any exception for invalid paths Key: HADOOP-8437 URL: https://issues.apache.org/jira/browse/HADOOP-8437 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Attachments: HADOOP-8437-003.patch, HADOOP-8437.patch, HADOOP-8437_1.patch, HADOOP-8437_2.patch call dirAllocator.getLocalPathForWrite ( /InvalidPath, conf ); Here it does not throw any exception, but earlier versions used to throw one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths
[ https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386111#comment-14386111 ] Brahma Reddy Battula commented on HADOOP-8437: -- [~qwertymaniac] Kindly review the attached patch. getLocalPathForWrite is not throwing any exception for invalid paths Key: HADOOP-8437 URL: https://issues.apache.org/jira/browse/HADOOP-8437 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 3.0.0, 2.6.0 Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Attachments: HADOOP-8437-003.patch, HADOOP-8437.patch, HADOOP-8437_1.patch, HADOOP-8437_2.patch call dirAllocator.getLocalPathForWrite ( /InvalidPath, conf ); Here it does not throw any exception, but earlier versions used to throw one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths
[ https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula updated HADOOP-8437: - Affects Version/s: 3.0.0 2.6.0 getLocalPathForWrite is not throwing any exception for invalid paths Key: HADOOP-8437 URL: https://issues.apache.org/jira/browse/HADOOP-8437 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 3.0.0, 2.6.0 Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Attachments: HADOOP-8437-003.patch, HADOOP-8437.patch, HADOOP-8437_1.patch, HADOOP-8437_2.patch call dirAllocator.getLocalPathForWrite ( /InvalidPath, conf ); Here it does not throw any exception, but earlier versions used to throw one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
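The reported behavior can be illustrated with the kind of argument check the fix presumably restores. This is a plain-Java sketch of the validation idea only (a hypothetical helper), not the actual LocalDirAllocator code from the patch.

```java
// Sketch of the check getLocalPathForWrite is expected to perform: an
// absolute argument like "/InvalidPath" is not a valid relative target
// under a configured local directory and should be rejected with an
// exception instead of being silently accepted.
class PathCheck {
    static String checkRelative(String pathStr) {
        if (pathStr.startsWith("/")) {
            throw new IllegalArgumentException(
                "Invalid path: " + pathStr + " (absolute paths are not allocatable)");
        }
        return pathStr;
    }
}
```

With such a guard in place, the call dirAllocator.getLocalPathForWrite("/InvalidPath", conf) would again fail fast, matching the earlier versions' behavior the reporter describes.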
[jira] [Created] (HADOOP-11774) listStatus in FTPFileSystem fails with connection reset
Krishnamoorthy Dharmalingam created HADOOP-11774: Summary: listStatus in FTPFileSystem fails with connection reset Key: HADOOP-11774 URL: https://issues.apache.org/jira/browse/HADOOP-11774 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.3.0 Environment: Remote FTP located in Windows NT FTP Reporter: Krishnamoorthy Dharmalingam The following exception trace is raised when FTPFileSystem.listStatus() is called in passive/active mode. Caused by: java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:196) at java.net.SocketInputStream.read(SocketInputStream.java:122) at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) at java.io.InputStreamReader.read(InputStreamReader.java:184) at java.io.BufferedReader.fill(BufferedReader.java:154) at java.io.BufferedReader.read(BufferedReader.java:175) at org.apache.commons.net.io.CRLFLineReader.readLine(CRLFLineReader.java:58) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:310) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:601) at org.apache.commons.net.ftp.FTP.quit(FTP.java:809) at org.apache.commons.net.ftp.FTPClient.logout(FTPClient.java:979) at org.apache.hadoop.fs.ftp.FTPFileSystem.disconnect(FTPFileSystem.java:151) at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:395) at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1424) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths
[ https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386142#comment-14386142 ] Hadoop QA commented on HADOOP-8437: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12708073/HADOOP-8437-003.patch against trunk revision 232eca9. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/6022//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/6022//console This message is automatically generated. getLocalPathForWrite is not throwing any exception for invalid paths Key: HADOOP-8437 URL: https://issues.apache.org/jira/browse/HADOOP-8437 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.0.0-alpha, 3.0.0, 2.6.0 Reporter: Brahma Reddy Battula Assignee: Brahma Reddy Battula Attachments: HADOOP-8437-003.patch, HADOOP-8437.patch, HADOOP-8437_1.patch, HADOOP-8437_2.patch call dirAllocator.getLocalPathForWrite ( /InvalidPath, conf ); Here it does not throw any exception, but earlier versions used to throw one. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-11753) TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header
[ https://issues.apache.org/jira/browse/HADOOP-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takenori Sato resolved HADOOP-11753. Resolution: Invalid TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header --- Key: HADOOP-11753 URL: https://issues.apache.org/jira/browse/HADOOP-11753 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 3.0.0, 2.7.0 Reporter: Takenori Sato Assignee: Takenori Sato Attachments: HADOOP-11753-branch-2.7.001.patch _TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows. {code} testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen) Time elapsed: 3.312 sec ERROR! com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: InvalidRange, AWS Error Message: The requested range cannot be satisfied. at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798) at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528) at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:) at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91) at org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62) at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} This is because the header is wrong when calling _S3AInputStream#read_ after _S3AInputStream#open_. {code} Range: bytes=0--1 * from 0 to -1 {code} Tested on the latest branch-2.7. {quote} $ git log commit d286673c602524af08935ea132c8afd181b6e2e4 Author: Jitendra Pandey Jitendra@Jitendra-Pandeys-MacBook-Pro-4.local Date: Tue Mar 24 16:17:06 2015 -0700 {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11753) TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header
[ https://issues.apache.org/jira/browse/HADOOP-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386168#comment-14386168 ] Takenori Sato commented on HADOOP-11753: Thanks for the clarification. Yes, this is against Cloudian. So let me close. Will check AWS as well for further tests. TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header --- Key: HADOOP-11753 URL: https://issues.apache.org/jira/browse/HADOOP-11753 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 3.0.0, 2.7.0 Reporter: Takenori Sato Assignee: Takenori Sato Attachments: HADOOP-11753-branch-2.7.001.patch _TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows. {code} testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen) Time elapsed: 3.312 sec ERROR! com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: InvalidRange, AWS Error Message: The requested range cannot be satisfied. 
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798) at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528) at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:) at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91) at org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62) at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} This is because the header is wrong when calling _S3AInputStream#read_ after _S3AInputStream#open_. {code} Range: bytes=0--1 * from 0 to -1 {code} Tested on the latest branch-2.7. 
{quote} $ git log commit d286673c602524af08935ea132c8afd181b6e2e4 Author: Jitendra Pandey Jitendra@Jitendra-Pandeys-MacBook-Pro-4.local Date: Tue Mar 24 16:17:06 2015 -0700 {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
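The malformed {{Range: bytes=0--1}} header above can be reproduced with a small sketch of how such a header is built. This is a hypothetical helper illustrating the arithmetic, not the actual S3AInputStream code: a ranged GET from pos to contentLength - 1 produces "bytes=0--1" for a zero-byte object, which strict S3 implementations reject with 416, so the empty-file case needs a guard.

```java
// Sketch of building an HTTP Range header for a read at `pos` of an
// object of size `contentLength`. Without the guard, a zero-byte object
// would yield "bytes=" + 0 + "-" + (0 - 1) == "bytes=0--1", the invalid
// header reported above. Hypothetical helper, not the real S3A code.
class RangeHeader {
    static String forRead(long pos, long contentLength) {
        if (contentLength == 0) {
            return null;  // nothing to read; skip the ranged GET entirely
        }
        return "bytes=" + pos + "-" + (contentLength - 1);
    }
}
```

As the follow-up comment notes, AWS itself tolerates the invalid header for zero-byte objects, which is why the issue only surfaced against a stricter S3-compatible store.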
[jira] [Commented] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386172#comment-14386172 ] Takenori Sato commented on HADOOP-11742: Thomas, Steve, yes, again this is against our own. I will check the difference. Let me close. mkdir by file system shell fails on an empty bucket --- Key: HADOOP-11742 URL: https://issues.apache.org/jira/browse/HADOOP-11742 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.7.0 Environment: CentOS 7 Reporter: Takenori Sato Priority: Minor Attachments: HADOOP-11742-branch-2.7.001.patch, HADOOP-11742-branch-2.7.002.patch I have built the latest 2.7, and tried S3AFileSystem. Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as follows: {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo) 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ mkdir: `s3a://s3a/foo': No such file or directory {code} So does _ls_. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/ 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ ls: `s3a://s3a/': No such file or directory {code} This is how it works via s3n. 
{code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ Found 1 items drwxrwxrwx - 0 1970-01-01 00:00 s3n://s3n/foo {code} The snapshot is the following: {quote} \# git branch \* branch-2.7 trunk \# git log commit 929b04ce3a4fe419dece49ed68d4f6228be214c1 Author: Harsh J ha...@cloudera.com Date: Sun Mar 22 10:18:32 2015 +0530 {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
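The s3n behavior shown above suggests the special case that s3a appears to be missing: the root of a bucket should resolve to an existing directory even when the bucket holds no objects. The following is a hypothetical sketch of that logic only, not the actual S3AFileSystem.getFileStatus code.

```java
// Sketch of the root-path special case implied by the report: the bucket
// root always exists as a directory, so -ls and -mkdir under it should
// not fail with "No such file or directory" on an empty bucket. Other
// paths still require a matching object or key prefix.
class BucketRoot {
    static boolean existsAsDirectory(String key, boolean bucketHasObjectsUnderKey) {
        if (key.isEmpty() || key.equals("/")) {
            return true;  // the bucket root is a directory by definition
        }
        return bucketHasObjectsUnderKey;
    }
}
```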
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takenori Sato updated HADOOP-11742: --- Resolution: Fixed Assignee: Takenori Sato Status: Resolved (was: Patch Available) mkdir by file system shell fails on an empty bucket --- Key: HADOOP-11742 URL: https://issues.apache.org/jira/browse/HADOOP-11742 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.7.0 Environment: CentOS 7 Reporter: Takenori Sato Assignee: Takenori Sato Priority: Minor Attachments: HADOOP-11742-branch-2.7.001.patch, HADOOP-11742-branch-2.7.002.patch I have built the latest 2.7, and tried S3AFileSystem. Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as follows: {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo) 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ mkdir: `s3a://s3a/foo': No such file or directory {code} So does _ls_. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/ 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ ls: `s3a://s3a/': No such file or directory {code} This is how it works via s3n. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ Found 1 items drwxrwxrwx - 0 1970-01-01 00:00 s3n://s3n/foo {code} The snapshot is the following: {quote} \# git branch \* branch-2.7 trunk \# git log commit 929b04ce3a4fe419dece49ed68d4f6228be214c1 Author: Harsh J ha...@cloudera.com Date: Sun Mar 22 10:18:32 2015 +0530 {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11774) listStatus in FTPFileSystem fails with connection reset
[ https://issues.apache.org/jira/browse/HADOOP-11774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386223#comment-14386223 ] Krishnamoorthy Dharmalingam commented on HADOOP-11774: -- An anonymous FTP call is made on Windows NT. listStatus in FTPFileSystem fails with connection reset --- Key: HADOOP-11774 URL: https://issues.apache.org/jira/browse/HADOOP-11774 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.3.0 Environment: Remote FTP located in Windows NT FTP Reporter: Krishnamoorthy Dharmalingam The following exception trace is raised when FTPFileSystem.listStatus() is called in passive/active mode. Caused by: java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:196) at java.net.SocketInputStream.read(SocketInputStream.java:122) at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) at java.io.InputStreamReader.read(InputStreamReader.java:184) at java.io.BufferedReader.fill(BufferedReader.java:154) at java.io.BufferedReader.read(BufferedReader.java:175) at org.apache.commons.net.io.CRLFLineReader.readLine(CRLFLineReader.java:58) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:310) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:601) at org.apache.commons.net.ftp.FTP.quit(FTP.java:809) at org.apache.commons.net.ftp.FTPClient.logout(FTPClient.java:979) at org.apache.hadoop.fs.ftp.FTPFileSystem.disconnect(FTPFileSystem.java:151) at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:395) at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1424) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takenori Sato resolved HADOOP-11742. Resolution: Invalid mkdir by file system shell fails on an empty bucket --- Key: HADOOP-11742 URL: https://issues.apache.org/jira/browse/HADOOP-11742 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.7.0 Environment: CentOS 7 Reporter: Takenori Sato Assignee: Takenori Sato Priority: Minor Attachments: HADOOP-11742-branch-2.7.001.patch, HADOOP-11742-branch-2.7.002.patch I have built the latest 2.7, and tried S3AFileSystem. Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as follows: {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo) 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ mkdir: `s3a://s3a/foo': No such file or directory {code} So does _ls_. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/ 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ ls: `s3a://s3a/': No such file or directory {code} This is how it works via s3n. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ Found 1 items drwxrwxrwx - 0 1970-01-01 00:00 s3n://s3n/foo {code} The snapshot is the following: {quote} \# git branch \* branch-2.7 trunk \# git log commit 929b04ce3a4fe419dece49ed68d4f6228be214c1 Author: Harsh J ha...@cloudera.com Date: Sun Mar 22 10:18:32 2015 +0530 {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takenori Sato reopened HADOOP-11742: Reopen to mark this as invalid. mkdir by file system shell fails on an empty bucket --- Key: HADOOP-11742 URL: https://issues.apache.org/jira/browse/HADOOP-11742 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 2.7.0 Environment: CentOS 7 Reporter: Takenori Sato Assignee: Takenori Sato Priority: Minor Attachments: HADOOP-11742-branch-2.7.001.patch, HADOOP-11742-branch-2.7.002.patch I have built the latest 2.7, and tried S3AFileSystem. Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as follows: {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo) 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ mkdir: `s3a://s3a/foo': No such file or directory {code} So does _ls_. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/ 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ () 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/ ls: `s3a://s3a/': No such file or directory {code} This is how it works via s3n. {code} # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/ Found 1 items drwxrwxrwx - 0 1970-01-01 00:00 s3n://s3n/foo {code} The snapshot is the following: {quote} \# git branch \* branch-2.7 trunk \# git log commit 929b04ce3a4fe419dece49ed68d4f6228be214c1 Author: Harsh J ha...@cloudera.com Date: Sun Mar 22 10:18:32 2015 +0530 {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11774) listStatus in FTPFileSystem fails with connection reset
[ https://issues.apache.org/jira/browse/HADOOP-11774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386225#comment-14386225 ] Krishnamoorthy Dharmalingam commented on HADOOP-11774: -- The FTPFileSystem.disconnect() API is not handling the exception raised by the FTPClient from commons-net-3.1.jar. listStatus in FTPFileSystem fails with connection reset --- Key: HADOOP-11774 URL: https://issues.apache.org/jira/browse/HADOOP-11774 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 2.3.0 Environment: Remote FTP located in Windows NT FTP Reporter: Krishnamoorthy Dharmalingam The following exception trace is raised when FTPFileSystem.listStatus() is called in passive/active mode. Caused by: java.net.SocketException: Connection reset at java.net.SocketInputStream.read(SocketInputStream.java:196) at java.net.SocketInputStream.read(SocketInputStream.java:122) at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) at java.io.InputStreamReader.read(InputStreamReader.java:184) at java.io.BufferedReader.fill(BufferedReader.java:154) at java.io.BufferedReader.read(BufferedReader.java:175) at org.apache.commons.net.io.CRLFLineReader.readLine(CRLFLineReader.java:58) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:310) at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552) at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:601) at org.apache.commons.net.ftp.FTP.quit(FTP.java:809) at org.apache.commons.net.ftp.FTPClient.logout(FTPClient.java:979) at org.apache.hadoop.fs.ftp.FTPFileSystem.disconnect(FTPFileSystem.java:151) at org.apache.hadoop.fs.ftp.FTPFileSystem.getFileStatus(FTPFileSystem.java:395) at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1424) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
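The stack trace shows the reset happening inside FTPClient.logout() during disconnect. A defensive pattern for that situation is sketched below; this is a hypothetical wrapper illustrating the idea (a failed QUIT should be contained, not propagated to the caller), not the actual FTPFileSystem code or a proposed patch.

```java
// Sketch of a quiet-disconnect helper: the server dropping the control
// connection while we send QUIT is a secondary failure and should not
// mask the result of the operation that triggered the disconnect.
class QuietDisconnect {
    static boolean logoutQuietly(Runnable logout) {
        try {
            logout.run();
            return true;
        } catch (RuntimeException e) {
            // log and swallow: a failed QUIT must not fail the caller
            System.err.println("logout failed: " + e.getMessage());
            return false;
        }
    }
}
```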
[jira] [Commented] (HADOOP-9805) Refactor RawLocalFileSystem#rename for improved testability.
[ https://issues.apache.org/jira/browse/HADOOP-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385736#comment-14385736 ] Steve Loughran commented on HADOOP-9805: LGTM. Chris: what do you think? Refactor RawLocalFileSystem#rename for improved testability. Key: HADOOP-9805 URL: https://issues.apache.org/jira/browse/HADOOP-9805 Project: Hadoop Common Issue Type: Bug Components: fs, test Affects Versions: 3.0.0, 1-win, 1.3.0, 2.1.1-beta Reporter: Chris Nauroth Assignee: Jean-Pierre Matsumoto Priority: Minor Labels: newbie Attachments: HADOOP-9805.001.patch {{RawLocalFileSystem#rename}} contains fallback logic to provide POSIX rename behavior on platforms where {{java.io.File#renameTo}} fails. The method returns early if {{java.io.File#renameTo}} succeeds, so test runs may not cover the fallback logic depending on the platform. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration
[ https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386011#comment-14386011 ] Kai Zheng commented on HADOOP-11664: bq.If you agree we can keep the configurable item, I will have to change the property key I just did it in the branch, fixing the property key. Loading predefined EC schemas from configuration Key: HADOOP-11664 URL: https://issues.apache.org/jira/browse/HADOOP-11664 Project: Hadoop Common Issue Type: Sub-task Reporter: Kai Zheng Assignee: Kai Zheng Fix For: HDFS-7285 Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, HDFS-7371_v1.patch System administrator can configure multiple EC codecs in hdfs-site.xml file, and codec instances or schemas in a new configuration file named ec-schema.xml in the conf folder. A codec can be referenced by its instance or schema using the codec name, and a schema can be utilized and specified by the schema name for a folder or EC ZONE to enforce EC. Once a schema is used to define an EC ZONE, then its associated parameter values will be stored as xattributes and respected thereafter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
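The ec-schema.xml layout described above might look roughly like the following sketch. The element names and the RS-6-3 parameters are illustrative assumptions only; the committed file format is defined by the attached patches, not here.

```xml
<!-- Hypothetical sketch of an ec-schema.xml entry: a named schema that a
     folder or EC zone can reference, whose parameters are then stored as
     xattrs once the zone is created. -->
<schemas>
  <schema name="RS-6-3">
    <codec>rs</codec>
    <numDataUnits>6</numDataUnits>
    <numParityUnits>3</numParityUnits>
  </schema>
</schemas>
```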
[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration
[ https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386012#comment-14386012 ] Kai Zheng commented on HADOOP-11664: Thanks [~zhz] for the help ! Loading predefined EC schemas from configuration Key: HADOOP-11664 URL: https://issues.apache.org/jira/browse/HADOOP-11664 Project: Hadoop Common Issue Type: Sub-task Reporter: Kai Zheng Assignee: Kai Zheng Fix For: HDFS-7285 Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, HDFS-7371_v1.patch System administrator can configure multiple EC codecs in hdfs-site.xml file, and codec instances or schemas in a new configuration file named ec-schema.xml in the conf folder. A codec can be referenced by its instance or schema using the codec name, and a schema can be utilized and specified by the schema name for a folder or EC ZONE to enforce EC. Once a schema is used to define an EC ZONE, then its associated parameter values will be stored as xattributes and respected thereafter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11740) Combine erasure encoder and decoder interfaces
[ https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386013#comment-14386013 ] Kai Zheng commented on HADOOP-11740: [~zhz], How about letting this issue do just what its title says? For other considerations we can discuss further offline and then handle them separately if necessary. Combine erasure encoder and decoder interfaces -- Key: HADOOP-11740 URL: https://issues.apache.org/jira/browse/HADOOP-11740 Project: Hadoop Common Issue Type: Sub-task Components: io Reporter: Zhe Zhang Assignee: Zhe Zhang Attachments: HADOOP-11740-000.patch Rationale [discussed | https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540] under HDFS-7337. -- This message was sent by Atlassian JIRA (v6.3.4#6332)