[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato updated HADOOP-11742:
-----------------------------------
    Attachment: HADOOP-11742-branch-2.7.003-2.patch

This is the patch to fix the unit test, _AbstractContractRootDirectoryTest_. Changes are:
# setup() prepares an empty root directory
# an assertion was added to testRmEmptyRootDirNonRecursive() to make sure the root dir is empty
# teardown() does nothing

> mkdir by file system shell fails on an empty bucket
> ---------------------------------------------------
>
>                 Key: HADOOP-11742
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11742
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.7.0
>         Environment: CentOS 7
>            Reporter: Takenori Sato
>            Assignee: Takenori Sato
>            Priority: Minor
>         Attachments: HADOOP-11742-branch-2.7.001.patch, HADOOP-11742-branch-2.7.002.patch, HADOOP-11742-branch-2.7.003-1.patch, HADOOP-11742-branch-2.7.003-2.patch
>
>
> I have built the latest 2.7 and tried S3AFileSystem.
> I then found that _mkdir_ fails on an empty bucket, named *s3a* here, as follows:
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
> 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo)
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
> mkdir: `s3a://s3a/foo': No such file or directory
> {code}
> So does _ls_:
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
> 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
> 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
> ls: `s3a://s3a/': No such file or directory
> {code}
> This is how it works via s3n:
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
> Found 1 items
> drwxrwxrwx -          0 1970-01-01 00:00 s3n://s3n/foo
> {code}
> The snapshot is the following:
> {quote}
> \# git branch
> \* branch-2.7
>   trunk
> \# git log
> commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
> Author: Harsh J
> Date: Sun Mar 22 10:18:32 2015 +0530
> {quote}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
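The test restructuring described in the 003-2 patch comment can be sketched as follows. This is a hypothetical stand-in, not the actual _AbstractContractRootDirectoryTest_: an in-memory list of keys replaces the real S3A filesystem, and all names here are illustrative. It shows the shape of the change: setup() empties the root, the test asserts the root really is empty, and teardown() is a deliberate no-op.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the restructured root-directory contract test.
public class RootDirTestSketch {

    // Stand-in for the filesystem under test: a flat list of object keys.
    private final List<String> keys = new ArrayList<>();

    void setup() {
        // Prepare an empty root by removing any leftover test directory --
        // the opposite of AbstractFSContractTestBase#setup, which creates one.
        keys.clear();
    }

    void teardown() {
        // Intentionally empty: cleaning up here would defeat the purpose,
        // since the next test needs the root to stay empty.
    }

    boolean rootIsEmpty() {
        return keys.isEmpty();
    }

    void testRmEmptyRootDirNonRecursive() {
        setup();
        // The added assertion: the test only exercises the empty-bucket
        // code path if the root is genuinely empty at this point.
        if (!rootIsEmpty()) {
            throw new AssertionError("root directory is not empty");
        }
        teardown();
    }

    public static void main(String[] args) {
        new RootDirTestSketch().testRmEmptyRootDirNonRecursive();
        System.out.println("ok");
    }
}
```

Without the empty-root setup, the inherited base-class setup would always leave a test directory in the bucket, so the failing condition could never be reproduced by the suite.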
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato updated HADOOP-11742:
-----------------------------------
    Attachment: HADOOP-11742-branch-2.7.003-1.patch

This is the patch to fix _S3AFileSystem#getFileStatus_. A dedicated branch to handle the root directory was added, which is entered only when key.isEmpty() == true.
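The idea behind the getFileStatus change above can be sketched as follows. This is not the actual Hadoop patch; the class, the Status type, and the in-memory bucket are hypothetical stand-ins used only to illustrate the dedicated empty-key branch.

```java
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: special-case the empty key so the root of an
// empty bucket resolves to a directory instead of "Not Found".
public class RootStatusSketch {

    // Minimal stand-in for FileStatus.
    static final class Status {
        final boolean isDirectory;
        Status(boolean isDirectory) { this.isDirectory = isDirectory; }
    }

    // Stand-in for the object store: an empty map models an empty bucket.
    static final Map<String, Status> bucket = new HashMap<>();

    static Status getFileStatus(String key) throws FileNotFoundException {
        if (key.isEmpty()) {
            // Dedicated root-directory branch: the bucket root always
            // exists, even when the bucket contains no objects at all.
            return new Status(true);
        }
        Status s = bucket.get(key);
        if (s == null) {
            throw new FileNotFoundException("No such file or directory: /" + key);
        }
        return s;
    }

    public static void main(String[] args) throws FileNotFoundException {
        // Before the fix, resolving the root of an empty bucket failed;
        // with the dedicated branch it reports a directory.
        System.out.println(getFileStatus("").isDirectory);
    }
}
```

Without the empty-key branch, the root path falls through to the ordinary object lookup, which misses on an empty bucket and surfaces as the "No such file or directory" errors shown in the issue description.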
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato updated HADOOP-11742:
-----------------------------------
    Resolution: Fixed
      Assignee: Takenori Sato
        Status: Resolved  (was: Patch Available)
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-11742:
------------------------------------
             Priority: Minor  (was: Major)
    Affects Version/s: 2.7.0
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato updated HADOOP-11742:
-----------------------------------
    Attachment: HADOOP-11742-branch-2.7.002.patch

I found that _AbstractFSContractTestBase#setup_ always creates a test directory, which is removed at _teardown_. Thus, an empty directory was never exercised by the concrete test cases.

The problem here is not calling mkdir on an empty bucket as such, but that _S3AFileSystem#getFileStatus("/")_ throws an exception when called on an empty bucket. To set up such a condition, I chose instead to remove the test directory at setup, then no-op at teardown.

Without this fix, TestS3AContractRootDir failed as follows:

{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 8.027 sec <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)  Time elapsed: 2.82 sec  <<< ERROR!
java.io.FileNotFoundException: No such file or directory: /
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:995)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
	at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:464)
	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:63)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testRmRootRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)  Time elapsed: 0.475 sec  <<< ERROR!
java.io.FileNotFoundException: No such file or directory: /
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:995)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:77)
	at org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory(ContractTestUtils.java:464)
	at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmRootRecursive(AbstractContractRootDirectoryTest.java:96)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

testCreateFileOverRoot(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)  Time elapsed: 2.922 sec  <<< ERROR!
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 368CF290D38711E4, AWS Error Code: MalformedXML, AWS Error Message: The XML you provided was not well-formed or did not validate against our published schema.
	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
	at com.amazonaws.services.s3.AmazonS3Cl
{code}
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-11742:
------------------------------------
    Status: Patch Available  (was: Open)

Submitting the patch for Jenkins, though that does not run the AWS test suite; someone needs to run it themselves.
[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket
[ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato updated HADOOP-11742:
-----------------------------------
    Attachment: HADOOP-11742-branch-2.7.001.patch

An empty key now means a root directory instead of "Not Found". This is the same behavior as _NativeS3FileSystem#getFileStatus_.

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
15/03/25 06:28:05 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/25 06:28:05 DEBUG s3a.S3AFileSystem: List status for path: s3a://s3a/
15/03/25 06:28:05 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/25 06:28:05 DEBUG s3a.S3AFileSystem: listStatus: doing listObjects for directory
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
15/03/25 06:28:22 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo)
15/03/25 06:28:23 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
15/03/25 06:28:23 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/25 06:28:23 DEBUG s3a.S3AFileSystem: Making directory: s3a://s3a/foo
15/03/25 06:28:23 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo)
15/03/25 06:28:23 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
15/03/25 06:28:23 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo)
15/03/25 06:28:24 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
15/03/25 06:28:24 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
15/03/25 06:28:31 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/25 06:28:31 DEBUG s3a.S3AFileSystem: List status for path: s3a://s3a/
15/03/25 06:28:31 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
15/03/25 06:28:31 DEBUG s3a.S3AFileSystem: listStatus: doing listObjects for directory
15/03/25 06:28:31 DEBUG s3a.S3AFileSystem: Adding: rd: s3a://s3a/foo
Found 1 items
drwxrwxrwx -          0 1970-01-01 00:00 s3a://s3a/foo
{code}