[
https://issues.apache.org/jira/browse/HADOOP-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331724#comment-15331724
]
Steve Loughran commented on HADOOP-13276:
-----------------------------------------
The final output is the original exception:
{code}
ls: : getFileStatus on : com.amazonaws.services.s3.model.AmazonS3Exception: The
request signature we calculated does not match the signature you provided.
Check your key and signing method. (Service: Amazon S3; Status Code: 403; Error
Code: SignatureDoesNotMatch; Request ID: 756C67505DF05C0F), S3 Extended Request
ID: ZMzPOdq8K1FeTDtSKVU0p+FotFU+EmCvnko8tH5n00hCj71ZUq/5ffn0NP7LWz7WZI1tVsDnFos=
{code}
It looks like the AWS retry policy treats an auth failure as retryable, and so
retries repeatedly with exponential backoff. That is probably not the right
strategy, unless signing/signature validation can fail transiently.
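A fail-fast retry decision could look like the sketch below. This is only an illustration, not the actual S3A or AWS SDK retry API: {{StatusCodeException}} is a hypothetical stand-in for {{com.amazonaws.services.s3.model.AmazonS3Exception}}, and the status codes treated as transient are assumptions.
{code}
/**
 * Sketch of a retry decision that fails fast on auth errors.
 * StatusCodeException is a hypothetical stand-in for
 * com.amazonaws.services.s3.model.AmazonS3Exception.
 */
public class FailFastRetry {

  public static class StatusCodeException extends RuntimeException {
    private final int statusCode;
    public StatusCodeException(String message, int statusCode) {
      super(message);
      this.statusCode = statusCode;
    }
    public int getStatusCode() { return statusCode; }
  }

  /** Retry only plausibly transient failures, up to maxAttempts. */
  public static boolean shouldRetry(StatusCodeException e,
      int attempt, int maxAttempts) {
    if (attempt >= maxAttempts) {
      return false;
    }
    // 403 (e.g. SignatureDoesNotMatch) will not fix itself: fail fast.
    if (e.getStatusCode() == 403) {
      return false;
    }
    // Server-side errors and throttling are plausibly transient.
    return e.getStatusCode() >= 500 || e.getStatusCode() == 429;
  }
}
{code}
With this policy a wrong secret key surfaces the 403 on the first attempt instead of after the full exponential backoff schedule.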
> S3a operations keep retrying if the password is wrong
> -----------------------------------------------------
>
> Key: HADOOP-13276
> URL: https://issues.apache.org/jira/browse/HADOOP-13276
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Steve Loughran
> Priority: Minor
>
> If you run a {{hadoop fs}} command with a valid AWS account but the wrong
> password, it takes a while to time out, because of retries happening
> underneath. Eventually it gives up, but failing fast would be better.
> # maybe: check the password length and fail if it is not the right length (is
> there a standard one? Or at least a range?)
> # consider a retry policy which fails faster on signature failures/403
> responses
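The first suggestion above could be a plausibility check rather than a hard failure, since the issue itself notes there may be no guaranteed standard length. A minimal sketch, assuming only that well-formed secret keys fall in some plausible length range (the bounds here are illustrative, not documented AWS limits):
{code}
/**
 * Sketch of a sanity check on the configured secret key.
 * The length bounds are assumptions, not documented AWS limits,
 * so a caller would warn rather than fail hard on a mismatch.
 */
public class CredentialSanityCheck {

  static final int MIN_PLAUSIBLE_LENGTH = 30;  // assumed lower bound
  static final int MAX_PLAUSIBLE_LENGTH = 60;  // assumed upper bound

  /** Returns true if the secret key looks plausible. */
  public static boolean looksPlausible(String secretKey) {
    if (secretKey == null) {
      return false;
    }
    int len = secretKey.trim().length();
    return len >= MIN_PLAUSIBLE_LENGTH && len <= MAX_PLAUSIBLE_LENGTH;
  }
}
{code}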
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]