[ https://issues.apache.org/jira/browse/HADOOP-16420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882345#comment-16882345 ]

Steve Loughran commented on HADOOP-16420:
-----------------------------------------

And here is a debug log of a stat command. Strange.

The error also surfaces on an attempt to rm -r the path, either with an explicit 
path or a wildcard:
{code}
bin/hadoop fs -rm -R s3a://hwdev-steve-ireland-new/fork-\*001
{code}

Trivial: the hadoop fs command doesn't itself print an error; that only appears 
in the debug logs. It looks like FsShell doesn't print errors there, even 
though stat will. The return code is 1, though.
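
For completeness, a minimal sketch of how to see both symptoms at once. This 
assumes the debug log was enabled via the standard HADOOP_ROOT_LOGGER knob; the 
bucket and path are the ones from the command above:
{code}
# re-run the failing delete with debug logging on the console, so the 400 from S3 is visible
HADOOP_ROOT_LOGGER=DEBUG,console bin/hadoop fs -rm -R s3a://hwdev-steve-ireland-new/fork-\*001

# FsShell prints no error of its own, but the exit status still reports the failure
echo $?   # -> 1
{code}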

> S3A returns 400 "bad request" on a single path within an S3 bucket
> ------------------------------------------------------------------
>
>                 Key: HADOOP-16420
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16420
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Steve Loughran
>            Priority: Minor
>         Attachments: out.txt
>
>
> Filing this as "who knows?"; surfaced during testing. Notable that the 
> previous testing was playing with SSE-C, if that makes a difference: it could 
> be that there's a marker entry encrypted with SSE-C that is now being 
> rejected by a different run.
> Somehow, with a set of credentials I can work with all paths in a directory 
> except reading the dir marker "/fork-0001/"; try that and a 400 Bad Request comes 
> back. The AWS console views the path as an empty dir.
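
One way to probe the SSE-C hypothesis above, sketched only: it assumes the aws 
CLI is configured with the same credentials, and that the marker object's key 
really is fork-0001/ at the bucket root.
{code}
# HEAD the directory marker directly; an SSE-C-encrypted object answers a plain
# HEAD/GET (no SSE-C headers) with 400 Bad Request, which would match the behaviour above
aws s3api head-object --bucket hwdev-steve-ireland-new --key fork-0001/
{code}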


