[
https://issues.apache.org/jira/browse/JCLOUDS-100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687257#comment-13687257
]
Andrew Bayer commented on JCLOUDS-100:
--------------------------------------
Note that if you hit RequestLimitExceeded errors, you may well need to raise
the jclouds.max-retries and jclouds.retries-delay-start properties above their
defaults of 5 retries and 50ms. In my tests that delay was insufficient (at
least with enough activity going on concurrently) for the AWS API server to
reset the request limit before the next request arrived, but I don't want to
hardcode a different rate for AWS than for everything else. The increased
backoff time mentioned in the commit is actually an increased *maximum* time,
so you can bump up the initial start delay and max retries to reach a longer
delay than otherwise would be allowed: 100 times the original start delay,
rather than 10 times.
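A minimal sketch of overriding the two properties named above, using plain java.util.Properties; the specific values (10 retries, 500ms) are illustrative, not recommendations:

```java
import java.util.Properties;

public class RetryOverrides {
    /** Builds property overrides raising the retry count and initial retry
     *  delay above the jclouds defaults of 5 retries and 50ms.
     *  Values are illustrative; tune them to your concurrency level. */
    public static Properties retryOverrides() {
        Properties overrides = new Properties();
        overrides.setProperty("jclouds.max-retries", "10");
        overrides.setProperty("jclouds.retries-delay-start", "500");
        return overrides;
    }
}
```

In jclouds these overrides would typically be passed to ContextBuilder.overrides(...) when building the compute context; that call is assumed here and omitted so the sketch stays self-contained.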
> Better handling of API rate limits
> ----------------------------------
>
> Key: JCLOUDS-100
> URL: https://issues.apache.org/jira/browse/JCLOUDS-100
> Project: jclouds
> Issue Type: Bug
> Affects Versions: 1.5.10, 1.6.0, 1.6.1
> Reporter: Omar Alrubaiyan
> Assignee: Andrew Bayer
> Priority: Critical
> Fix For: 1.7.0, 1.6.2
>
>
> Right now there is some retry logic when API operations fail, but it does
> not handle hitting the API rate limit well: the retries fire in quick
> succession rather than with exponential back-off, which is the recommended
> way of dealing with EC2 API rate limits.
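One common exponential back-off scheme, sketched below, doubles the delay on each failed attempt and caps it at 100x the initial delay (the raised maximum described in the comment above). This is a hedged illustration of the technique, not jclouds' actual retry-handler formula:

```java
public class ExponentialBackoff {
    /** Delay before retry attempt (0-based): doubles from delayStartMs,
     *  capped at 100x the start delay. Illustrative scheme only. */
    public static long delayMs(int attempt, long delayStartMs) {
        long cap = delayStartMs * 100;
        long delay = delayStartMs << attempt; // delayStartMs * 2^attempt
        return Math.min(delay, cap);
    }

    public static void main(String[] args) {
        // With the default 50ms start delay: 50, 100, 200, ... up to the 5000ms cap.
        for (int attempt = 0; attempt < 8; attempt++) {
            System.out.println("attempt " + attempt + ": " + delayMs(attempt, 50) + " ms");
        }
    }
}
```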