[
https://issues.apache.org/jira/browse/HADOOP-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676774#comment-16676774
]
Adam Antal commented on HADOOP-15573:
-------------------------------------
It looks to me as if this issue was fixed in HADOOP-15583, since this piece of
code was added to {{S3AUtils.translateDynamoDBException}}:
{code:java}
final int statusCode = ddbException.getStatusCode();
final String errorCode = ddbException.getErrorCode();
IOException result = null;
// 400 gets used a lot by DDB
if (statusCode == 400) {
  switch (errorCode) {
  case "AccessDeniedException":
    result = (IOException) new AccessDeniedException(
        path,
        null,
        ddbException.toString())
        .initCause(ddbException);
    break;
  default:
    result = new AWSBadRequestException(message, ddbException);
  }
}
{code}
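To illustrate why this fixes the retry loop: the snippet maps a DynamoDB 400 with error code {{AccessDeniedException}} to a {{java.nio.file.AccessDeniedException}}, which the S3A retry policy treats as non-retryable, so the client fails fast instead of looping. A minimal, self-contained sketch of that mapping (using hypothetical {{DdbError}} and {{BadRequestIOException}} classes as stand-ins for the AWS SDK exception and Hadoop's {{AWSBadRequestException}}, which are not reproduced here):
{code:java}
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class TranslateSketch {

  // Stand-in for the AWS SDK's DynamoDB exception (hypothetical class).
  static class DdbError extends Exception {
    final int statusCode;
    final String errorCode;
    DdbError(int statusCode, String errorCode, String message) {
      super(message);
      this.statusCode = statusCode;
      this.errorCode = errorCode;
    }
  }

  // Stand-in for Hadoop's AWSBadRequestException (hypothetical class).
  static class BadRequestIOException extends IOException {
    BadRequestIOException(String message, Throwable cause) {
      super(message, cause);
    }
  }

  // Mirrors the switch above: a 400 with error code "AccessDeniedException"
  // becomes a java.nio.file.AccessDeniedException (treated as unrecoverable,
  // so no retries); any other 400 becomes a generic bad-request IOException.
  static IOException translate(String path, DdbError e) {
    if (e.statusCode == 400) {
      switch (e.errorCode) {
      case "AccessDeniedException":
        return (IOException) new AccessDeniedException(
            path, null, e.toString()).initCause(e);
      default:
        return new BadRequestIOException(e.getMessage(), e);
      }
    }
    return new IOException(e.getMessage(), e);
  }

  public static void main(String[] args) {
    IOException denied = translate("s3a://bucket/table",
        new DdbError(400, "AccessDeniedException", "no permission"));
    IOException bad = translate("s3a://bucket/table",
        new DdbError(400, "ValidationException", "bad request"));
    System.out.println(denied.getClass().getSimpleName()); // AccessDeniedException
    System.out.println(bad.getClass().getSimpleName());    // BadRequestIOException
  }
}
{code}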
> s3guard set-capacity to not retry on an access denied exception
> ---------------------------------------------------------------
>
> Key: HADOOP-15573
> URL: https://issues.apache.org/jira/browse/HADOOP-15573
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Steve Loughran
> Priority: Minor
>
> When you call {{hadoop s3guard set-capacity}} with restricted access, you are
> (correctly) blocked by AWS, but the client keeps retrying. It should fail
> fast on a 400/AccessDenied.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)