[jira] [Reopened] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-04-01 Thread Takenori Sato (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato reopened HADOOP-11742:


I confirmed that mkdir fails on an empty bucket on AWS, as follows:

1. Make sure the bucket is empty, yet ls fails with "No such file or directory"

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -Dfs.s3a.access.key=ACCESS_KEY -Dfs.s3a.secret.key=SECRET_KEY -ls s3a://s3atest/
15/04/02 01:49:09 DEBUG http.wire: >> "HEAD / HTTP/1.1[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Host: s3atest.s3.amazonaws.com[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Authorization: AWS XXX=[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Date: Thu, 02 Apr 2015 01:49:08 
GMT[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "User-Agent: aws-sdk-java/1.7.4 
Linux/3.10.0-123.8.1.el7.centos.plus.x86_64 
Java_HotSpot(TM)_64-Bit_Server_VM/24.75-b04/1.7.0_75[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Content-Type: 
application/x-www-form-urlencoded; charset=utf-8[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Connection: Keep-Alive[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "HTTP/1.1 200 OK[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "x-amz-id-2: XXX[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "x-amz-request-id: XXX[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Date: Thu, 02 Apr 2015 01:49:10 
GMT[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Content-Type: application/xml[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Transfer-Encoding: chunked[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Server: AmazonS3[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "[\r][\n]"
15/04/02 01:49:09 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3atest/ ()
15/04/02 01:49:09 DEBUG http.wire: >> "GET /?delimiter=%2F&max-keys=1&prefix= HTTP/1.1[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Host: s3atest.s3.amazonaws.com[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Authorization: AWS XXX=[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Date: Thu, 02 Apr 2015 01:49:09 
GMT[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "User-Agent: aws-sdk-java/1.7.4 
Linux/3.10.0-123.8.1.el7.centos.plus.x86_64 
Java_HotSpot(TM)_64-Bit_Server_VM/24.75-b04/1.7.0_75[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Content-Type: 
application/x-www-form-urlencoded; charset=utf-8[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "Connection: Keep-Alive[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: >> "[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "HTTP/1.1 200 OK[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "x-amz-id-2: XXX[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "x-amz-request-id: XXX[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Date: Thu, 02 Apr 2015 01:49:10 
GMT[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Content-Type: application/xml[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Transfer-Encoding: chunked[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "Server: AmazonS3[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "fe[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "<?xml version="1.0" encoding="UTF-8"?>[\n]"
15/04/02 01:49:09 DEBUG http.wire: << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>s3atest</Name><Prefix></Prefix><Marker></Marker><MaxKeys>1</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated></ListBucketResult>"
15/04/02 01:49:09 DEBUG http.wire: << "[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "0[\r][\n]"
15/04/02 01:49:09 DEBUG http.wire: << "[\r][\n]"
15/04/02 01:49:09 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3atest/
ls: `s3a://s3atest/': No such file or directory
{code}
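
The wire log is the telling part: both the HEAD on the bucket and the listing with prefix=, delimiter=/ and max-keys=1 return 200 OK, yet S3AFileSystem concludes "Not Found" because the listing is empty. The standalone check below (hypothetical class name and environment variables, not Hadoop code, using the same aws-sdk-java 1.7.x calls seen in the log) reproduces that listing and shows what a correct root check should conclude:

{code}
// Hypothetical standalone check, not part of Hadoop: list an empty bucket the
// same way the wire log above shows S3A doing it (empty prefix, delimiter=/,
// max-keys=1). The listing succeeding at all proves the bucket exists, so an
// empty result should map to "empty root directory", not "No such file or directory".
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;

public class EmptyBucketRootCheck {
  public static void main(String[] args) {
    String bucket = args[0];  // e.g. "s3atest"
    AmazonS3Client s3 = new AmazonS3Client(new BasicAWSCredentials(
        System.getenv("AWS_ACCESS_KEY"), System.getenv("AWS_SECRET_KEY")));
    ObjectListing listing = s3.listObjects(new ListObjectsRequest()
        .withBucketName(bucket)
        .withPrefix("")
        .withDelimiter("/")
        .withMaxKeys(1));
    boolean empty = listing.getObjectSummaries().isEmpty()
        && listing.getCommonPrefixes().isEmpty();
    System.out.println("bucket exists; empty=" + empty
        + " -> the root path should be reported as an (empty) directory");
  }
}
{code}

If the bucket did not exist, the listing itself would fail with a 404 (NoSuchBucket), which is the only case where "No such file or directory" would be the right answer.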

2. Create a directory, which fails the same way (the wire log below is truncated)

{code}
# hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -Dfs.s3a.access.key=ACCESS_KEY -Dfs.s3a.secret.key=SECRET_KEY -mkdir s3a://s3atest/root
15/04/02 01:49:41 DEBUG http.wire: >> "HEAD / HTTP/1.1[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "Host: s3atest.s3.amazonaws.com[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "Authorization: AWS XXX=[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "Date: Thu, 02 Apr 2015 01:49:41 
GMT[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "User-Agent: aws-sdk-java/1.7.4 
Linux/3.10.0-123.8.1.el7.centos.plus.x86_64 
Java_HotSpot(TM)_64-Bit_Server_VM/24.75-b04/1.7.0_75[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "Content-Type: 
application/x-www-form-urlencoded; charset=utf-8[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "Connection: Keep-Alive[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: >> "[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "HTTP/1.1 200 OK[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "x-amz-id-2: XXX[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "x-amz-request-id: XXX[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "Date: Thu, 02 Apr 2015 01:49:42 
GMT[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "Content-Type: application/xml[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "Transfer-Encoding: chunked[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "Server: AmazonS3[\r][\n]"
15/04/02 01:49:41 DEBUG http.wire: << "[\r][\n]"
15/04
{code}

[jira] [Reopened] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-03-29 Thread Takenori Sato (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takenori Sato reopened HADOOP-11742:


Reopening to mark this as invalid.

> mkdir by file system shell fails on an empty bucket
> ---
>
> Key: HADOOP-11742
> URL: https://issues.apache.org/jira/browse/HADOOP-11742
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: CentOS 7
>Reporter: Takenori Sato
>Assignee: Takenori Sato
>Priority: Minor
> Attachments: HADOOP-11742-branch-2.7.001.patch, 
> HADOOP-11742-branch-2.7.002.patch
>
>
> I have built the latest 2.7 and tried S3AFileSystem, and found that _mkdir_ 
> fails on an empty bucket, named *s3a* here, as follows:
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
> 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/foo (foo)
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
> 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
> mkdir: `s3a://s3a/foo': No such file or directory
> {code}
> So does _ls_.
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
> 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ ()
> 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
> ls: `s3a://s3a/': No such file or directory
> {code}
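> In the mkdir log above, the two "Getting path status" lines are consistent with the shell-side mkdir (without -p) first checking the target and then requiring its parent to exist; it is the root lookup that fails, and the same root lookup is why ls fails too. A minimal sketch of that mkdir flow, with a hypothetical helper name rather than the actual FsShell code:
> {code}
> // Hedged sketch, not the actual FsShell/Mkdir source; needs
> // org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, java.io.*.
> void mkdirSketch(FileSystem fs, Path dir) throws IOException {
>   if (fs.exists(dir)) {                        // "Getting path status for .../foo"
>     throw new IOException(dir + ": File exists");
>   }
>   Path parent = dir.getParent();               // s3a://s3a/ , the bucket root
>   if (parent != null && !fs.exists(parent)) {  // "Getting path status for s3a://s3a/"
>     // The root of an empty bucket is reported as missing, so mkdir fails here.
>     throw new FileNotFoundException("`" + dir + "': No such file or directory");
>   }
>   fs.mkdirs(dir);
> }
> {code}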
> This is how it works via s3n.
> {code}
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
> # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
> {code}
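> The s3n run works because the bucket root is reported as a directory even when no objects exist, which is exactly the case the empty-bucket lookup above falls through on. A minimal sketch of such a root short-circuit (hypothetical helper, not the actual NativeS3FileSystem or S3AFileSystem source; needs org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.Path and java.io.FileNotFoundException):
> {code}
> // Decide the status of a path from its derived object key. An empty key means
> // the bucket root, which exists as a directory whenever the bucket itself
> // exists, even if the listing comes back empty.
> FileStatus statusForKey(String key, Path path, boolean listingIsEmpty)
>     throws FileNotFoundException {
>   if (key.isEmpty()) {
>     return new FileStatus(0, true, 1, 0, 0, path);   // bucket root: empty directory
>   }
>   if (!listingIsEmpty) {
>     return new FileStatus(0, true, 1, 0, 0, path);   // implicit directory with children
>   }
>   throw new FileNotFoundException("No such file or directory: " + path);
> }
> {code}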
> The snapshot was built from the following branch and commit:
> {quote}
> \# git branch
> \* branch-2.7
>   trunk
> \# git log
> commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
> Author: Harsh J 
> Date:   Sun Mar 22 10:18:32 2015 +0530
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)