[
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329221#comment-15329221
]
Steve Loughran commented on HADOOP-3733:
----------------------------------------
OK, I think I see the issue: you are putting the secret in the URL rather than
behind the scenes (in config or, more recently, an env var).
I suspect what is happening is that some parsing of the URI is getting confused
about where to split the auth info from the rest of the URL.
It could be in the {{initialize()}} method, where the URI is built
{code}
uri = URI.create(name.getScheme() + "://" + name.getAuthority());
{code}
Maybe it should use {{name.getRawAuthority()}}, to skip expansion of encoded
characters. Alternatively, that authority info should be broken up and used to
set up the auth credentials. I'd prefer that, as otherwise there's a risk of the
URI details being printed. Actually, that's something we should be looking for
anyway: making sure the full URI never gets printed.
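For what it's worth, here's a quick sketch of the decoding behaviour I suspect (using a made-up, shortened secret rather than a real key): {{getAuthority()}} decodes the %2F back into a literal slash, so rebuilding the URI from it re-parses that slash as the start of the path.

```java
import java.net.URI;

public class RawAuthorityDemo {
    public static void main(String[] args) {
        // Hypothetical shortened secret containing an encoded slash (%2F).
        URI name = URI.create("s3://ID:Xqj1%2FNMvKB@mybucket/dest");

        // getAuthority() decodes %2F into a literal '/'...
        System.out.println(name.getAuthority());     // ID:Xqj1/NMvKB@mybucket
        // ...while getRawAuthority() preserves the encoding.
        System.out.println(name.getRawAuthority());  // ID:Xqj1%2FNMvKB@mybucket

        // Rebuilding the URI from the decoded authority, as initialize() does,
        // misparses it: the '/' now terminates the authority early.
        URI rebuilt = URI.create(name.getScheme() + "://" + name.getAuthority());
        System.out.println(rebuilt.getAuthority());  // ID:Xqj1
        System.out.println(rebuilt.getPath());       // /NMvKB@mybucket
    }
}
```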
Ravi, do you want to look at this? See if using the raw auth works? If not, try
parsing that directly and using it as the credentials.
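Something along these lines is what I have in mind for the parse-it-directly option (a sketch only, with a made-up shortened secret, not the actual patch): the raw userinfo keeps the %2F intact, so the first colon still cleanly separates access key from secret key, and the secret can be decoded on its own.

```java
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class UserInfoCredentials {
    public static void main(String[] args) {
        // Hypothetical shortened secret with an encoded slash.
        URI name = URI.create("s3://ID:Xqj1%2FNMvKB@mybucket/dest");

        // getRawUserInfo() keeps the %2F encoding, so splitting on the
        // first ':' is still unambiguous.
        String userInfo = name.getRawUserInfo();     // "ID:Xqj1%2FNMvKB"
        int colon = userInfo.indexOf(':');
        String accessKey = userInfo.substring(0, colon);
        String secretKey = URLDecoder.decode(
                userInfo.substring(colon + 1), StandardCharsets.UTF_8);

        System.out.println(accessKey);  // ID
        System.out.println(secretKey);  // Xqj1/NMvKB
    }
}
```

That way the credentials never need to round-trip through a rebuilt URI at all.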
> "s3:" URLs break when Secret Key contains a slash, even if encoded
> ------------------------------------------------------------------
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.17.1, 2.0.2-alpha
> Reporter: Stuart Sierra
> Priority: Minor
> Attachments: HADOOP-3733-20130223T011025Z.patch, HADOOP-3733.patch,
> hadoop-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line,
> distcp fails if the SECRET contains a slash, even when the slash is
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles:
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket:
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed.
> ResponseCode=403, ResponseMessage=Forbidden
> at
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception:
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message:
> <?xml version="1.0"
> encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The
> request signature we calculated does not match the signature you provided.
> Check your key and signing method.</Message>
> at
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)